I0515 21:10:12.059817 6 test_context.go:419] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready I0515 21:10:12.060071 6 e2e.go:109] Starting e2e run "ae7102aa-42e6-4bc2-9215-47ea6105d699" on Ginkgo node 1 {"msg":"Test Suite starting","total":278,"completed":0,"skipped":0,"failed":0} Running Suite: Kubernetes e2e suite =================================== Random Seed: 1589577010 - Will randomize all specs Will run 278 of 4842 specs May 15 21:10:12.124: INFO: >>> kubeConfig: /root/.kube/config May 15 21:10:12.126: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable May 15 21:10:12.145: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready May 15 21:10:12.170: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) May 15 21:10:12.170: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. May 15 21:10:12.170: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start May 15 21:10:12.181: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed) May 15 21:10:12.181: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) May 15 21:10:12.181: INFO: e2e test version: v1.17.4 May 15 21:10:12.182: INFO: kube-apiserver version: v1.17.2 May 15 21:10:12.182: INFO: >>> kubeConfig: /root/.kube/config May 15 21:10:12.186: INFO: Cluster IP family: ipv4 SSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:10:12.186: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services May 15 21:10:12.265: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. 
STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service multi-endpoint-test in namespace services-3437 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3437 to expose endpoints map[] May 15 21:10:12.316: INFO: successfully validated that service multi-endpoint-test in namespace services-3437 exposes endpoints map[] (26.398554ms elapsed) STEP: Creating pod pod1 in namespace services-3437 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3437 to expose endpoints map[pod1:[100]] May 15 21:10:16.538: INFO: successfully validated that service multi-endpoint-test in namespace services-3437 exposes endpoints map[pod1:[100]] (4.20881453s elapsed) STEP: Creating pod pod2 in namespace services-3437 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3437 to expose endpoints map[pod1:[100] pod2:[101]] May 15 21:10:20.658: INFO: successfully validated that service multi-endpoint-test in namespace services-3437 exposes endpoints map[pod1:[100] pod2:[101]] (4.114590405s elapsed) STEP: Deleting pod pod1 in namespace services-3437 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3437 to expose endpoints map[pod2:[101]] May 15 21:10:21.683: INFO: successfully validated that service multi-endpoint-test in namespace services-3437 exposes endpoints map[pod2:[101]] (1.019153154s elapsed) STEP: Deleting pod pod2 in namespace services-3437 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3437 to expose endpoints map[] May 15 21:10:22.702: INFO: successfully validated that service multi-endpoint-test in namespace services-3437 exposes endpoints map[] (1.014237168s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:10:22.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3437" for this suite. 
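For reference, a minimal sketch of the kind of multi-port Service this test drives, written with the k8s.io/api Go types. Only the target ports 100 and 101 are pinned by the endpoint maps in the log above; the service name matches the log, while the selector, port names, and service ports are illustrative assumptions:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// Two named ports on one Service; endpoints for pod1/pod2 then show up
	// under target ports 100 and 101, as in the endpoint maps logged above.
	svc := corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "multi-endpoint-test"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"app": "multi-endpoint-test"}, // illustrative
			Ports: []corev1.ServicePort{
				{Name: "portname1", Port: 80, TargetPort: intstr.FromInt(100)},
				{Name: "portname2", Port: 81, TargetPort: intstr.FromInt(101)},
			},
		},
	}
	out, _ := json.MarshalIndent(svc, "", "  ")
	fmt.Println(string(out))
}
```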
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:10.872 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":278,"completed":1,"skipped":8,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:10:23.058: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs May 15 21:10:23.467: INFO: Waiting up to 5m0s for pod "pod-0292da5d-7cb5-4def-9872-36045b17875b" in namespace "emptydir-7991" to be "success or failure" May 15 21:10:23.484: INFO: Pod "pod-0292da5d-7cb5-4def-9872-36045b17875b": Phase="Pending", Reason="", readiness=false. Elapsed: 17.080208ms May 15 21:10:25.503: INFO: Pod "pod-0292da5d-7cb5-4def-9872-36045b17875b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035547714s May 15 21:10:27.506: INFO: Pod "pod-0292da5d-7cb5-4def-9872-36045b17875b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03891272s STEP: Saw pod success May 15 21:10:27.506: INFO: Pod "pod-0292da5d-7cb5-4def-9872-36045b17875b" satisfied condition "success or failure" May 15 21:10:27.508: INFO: Trying to get logs from node jerma-worker pod pod-0292da5d-7cb5-4def-9872-36045b17875b container test-container: STEP: delete the pod May 15 21:10:27.617: INFO: Waiting for pod pod-0292da5d-7cb5-4def-9872-36045b17875b to disappear May 15 21:10:27.642: INFO: Pod pod-0292da5d-7cb5-4def-9872-36045b17875b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:10:27.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7991" for this suite. 
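A hedged sketch of the sort of pod the emptydir case above creates: a tmpfs-backed emptyDir mounted into a container running as a non-root user. The conformance test uses its own mount-test helper image; the busybox image, command, UID, and mount path here are placeholders:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	nonRoot := int64(1001) // illustrative non-root UID
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0777-tmpfs"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &nonRoot},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" asks the kubelet to back the volume with tmpfs.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox", // placeholder for the test's mount-test image
				// Write a file and print its mode; the test asserts 0777 semantics.
				Command:      []string{"sh", "-c", "touch /test/f && chmod 0777 /test/f && stat -c %a /test/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```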
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":2,"skipped":9,"failed":0} S ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:10:27.649: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 15 21:10:31.205: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:10:31.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2745" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":3,"skipped":10,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:10:31.235: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 15 21:10:31.391: INFO: Waiting up to 5m0s for pod "downward-api-deea575f-b629-48df-a3e4-8debda8a4fd3" in namespace "downward-api-6892" to be "success or failure" May 15 21:10:31.455: INFO: Pod "downward-api-deea575f-b629-48df-a3e4-8debda8a4fd3": Phase="Pending", Reason="", readiness=false. Elapsed: 64.897303ms May 15 21:10:33.459: INFO: Pod "downward-api-deea575f-b629-48df-a3e4-8debda8a4fd3": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.068857431s May 15 21:10:35.570: INFO: Pod "downward-api-deea575f-b629-48df-a3e4-8debda8a4fd3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.179707932s STEP: Saw pod success May 15 21:10:35.570: INFO: Pod "downward-api-deea575f-b629-48df-a3e4-8debda8a4fd3" satisfied condition "success or failure" May 15 21:10:35.611: INFO: Trying to get logs from node jerma-worker2 pod downward-api-deea575f-b629-48df-a3e4-8debda8a4fd3 container dapi-container: STEP: delete the pod May 15 21:10:35.707: INFO: Waiting for pod downward-api-deea575f-b629-48df-a3e4-8debda8a4fd3 to disappear May 15 21:10:35.711: INFO: Pod downward-api-deea575f-b629-48df-a3e4-8debda8a4fd3 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:10:35.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6892" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":278,"completed":4,"skipped":58,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:10:35.717: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 15 21:10:39.845: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:10:39.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9741" for this suite. 
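Both termination-message cases above hinge on two container fields, terminationMessagePath and terminationMessagePolicy. A minimal sketch with the k8s.io/api Go types; the image, command, custom path, and UID are illustrative, not the test's exact values:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid := int64(1000) // illustrative non-root UID, as in the second case
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "main",
				Image: "busybox",
				// The kubelet reads the termination message from this file;
				// FallbackToLogsOnError additionally falls back to the container
				// log when the file is empty and the container exited with error.
				Command:                  []string{"sh", "-c", "echo -n DONE > /dev/termination-custom-log"},
				TerminationMessagePath:   "/dev/termination-custom-log",
				TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
				SecurityContext:          &corev1.SecurityContext{RunAsUser: &uid},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```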
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":278,"completed":5,"skipped":84,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:10:39.930: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name secret-emptykey-test-b57b1d68-e957-4075-9037-334b93b513e7 [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:10:40.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8208" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":278,"completed":6,"skipped":159,"failed":0} SSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:10:40.234: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods May 15 21:10:41.399: INFO: Pod name wrapped-volume-race-9e649fd6-0893-4e8e-ba1f-80c2d5261164: Found 0 pods out of 5 May 15 21:10:46.405: INFO: Pod name wrapped-volume-race-9e649fd6-0893-4e8e-ba1f-80c2d5261164: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-9e649fd6-0893-4e8e-ba1f-80c2d5261164 in namespace emptydir-wrapper-8359, will wait for the garbage collector to delete the pods May 15 21:11:02.506: INFO: Deleting ReplicationController wrapped-volume-race-9e649fd6-0893-4e8e-ba1f-80c2d5261164 took: 6.169395ms May 15 21:11:02.906: INFO: Terminating ReplicationController wrapped-volume-race-9e649fd6-0893-4e8e-ba1f-80c2d5261164 pods took: 400.244127ms STEP: Creating RC which spawns configmap-volume pods May 15 21:11:19.649: INFO: Pod name wrapped-volume-race-48887a7a-d7ee-4e61-ae9b-4be7ce7bb490: Found 0 pods out of 5 May 15 21:11:24.670: INFO: Pod name 
wrapped-volume-race-48887a7a-d7ee-4e61-ae9b-4be7ce7bb490: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-48887a7a-d7ee-4e61-ae9b-4be7ce7bb490 in namespace emptydir-wrapper-8359, will wait for the garbage collector to delete the pods May 15 21:11:40.752: INFO: Deleting ReplicationController wrapped-volume-race-48887a7a-d7ee-4e61-ae9b-4be7ce7bb490 took: 6.138837ms May 15 21:11:41.052: INFO: Terminating ReplicationController wrapped-volume-race-48887a7a-d7ee-4e61-ae9b-4be7ce7bb490 pods took: 300.256038ms STEP: Creating RC which spawns configmap-volume pods May 15 21:11:49.995: INFO: Pod name wrapped-volume-race-019a6eac-ecf4-4ee5-8bbf-5f4bceb660d8: Found 0 pods out of 5 May 15 21:11:55.000: INFO: Pod name wrapped-volume-race-019a6eac-ecf4-4ee5-8bbf-5f4bceb660d8: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-019a6eac-ecf4-4ee5-8bbf-5f4bceb660d8 in namespace emptydir-wrapper-8359, will wait for the garbage collector to delete the pods May 15 21:12:11.087: INFO: Deleting ReplicationController wrapped-volume-race-019a6eac-ecf4-4ee5-8bbf-5f4bceb660d8 took: 9.235555ms May 15 21:12:11.487: INFO: Terminating ReplicationController wrapped-volume-race-019a6eac-ecf4-4ee5-8bbf-5f4bceb660d8 pods took: 400.271973ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:12:20.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-8359" for this suite. • [SLOW TEST:100.147 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":278,"completed":7,"skipped":164,"failed":0} [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:12:20.381: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 15 21:12:20.604: INFO: (0) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/ pods/ (200; 16.402998ms)
May 15 21:12:20.607: INFO: (1) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.441107ms)
May 15 21:12:20.610: INFO: (2) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.877587ms)
May 15 21:12:20.613: INFO: (3) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.395434ms)
May 15 21:12:20.617: INFO: (4) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.414927ms)
May 15 21:12:20.650: INFO: (5) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 32.88196ms)
May 15 21:12:20.654: INFO: (6) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 4.132724ms)
May 15 21:12:20.658: INFO: (7) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 4.086073ms)
May 15 21:12:20.662: INFO: (8) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.850797ms)
May 15 21:12:20.665: INFO: (9) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.416318ms)
May 15 21:12:20.668: INFO: (10) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.50282ms)
May 15 21:12:20.671: INFO: (11) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.726717ms)
May 15 21:12:20.673: INFO: (12) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.696919ms)
May 15 21:12:20.676: INFO: (13) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.652797ms)
May 15 21:12:20.679: INFO: (14) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.236838ms)
May 15 21:12:20.682: INFO: (15) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.858481ms)
May 15 21:12:20.686: INFO: (16) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.395594ms)
May 15 21:12:20.688: INFO: (17) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.549394ms)
May 15 21:12:20.691: INFO: (18) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.197004ms)
May 15 21:12:20.694: INFO: (19) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/
(200; 2.642972ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:12:20.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-4100" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]","total":278,"completed":8,"skipped":164,"failed":0} SSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:12:20.700: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 15 21:12:20.886: INFO: Pod name cleanup-pod: Found 0 pods out of 1 May 15 21:12:25.898: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 15 21:12:25.898: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 15 21:12:30.129: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-8837 /apis/apps/v1/namespaces/deployment-8837/deployments/test-cleanup-deployment 77a31cd7-c09b-4cda-9f60-355e8c5a0800 16464124 1 2020-05-15 21:12:25 +0000 UTC map[name:cleanup-pod] map[deployment.kubernetes.io/revision:1] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001eca5e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-15 21:12:26 +0000 
UTC,LastTransitionTime:2020-05-15 21:12:26 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-cleanup-deployment-55ffc6b7b6" has successfully progressed.,LastUpdateTime:2020-05-15 21:12:29 +0000 UTC,LastTransitionTime:2020-05-15 21:12:25 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 15 21:12:30.132: INFO: New ReplicaSet "test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6 deployment-8837 /apis/apps/v1/namespaces/deployment-8837/replicasets/test-cleanup-deployment-55ffc6b7b6 7e5db4bd-fad5-4a7c-9ebd-b3b9ad40642d 16464110 1 2020-05-15 21:12:25 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 77a31cd7-c09b-4cda-9f60-355e8c5a0800 0xc002788587 0xc002788588}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0027885f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 15 21:12:30.134: INFO: Pod "test-cleanup-deployment-55ffc6b7b6-lsbnq" is available: &Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-lsbnq test-cleanup-deployment-55ffc6b7b6- deployment-8837 /api/v1/namespaces/deployment-8837/pods/test-cleanup-deployment-55ffc6b7b6-lsbnq 06b4a5b7-dac3-4b4c-8258-916f38eaa557 16464109 0 2020-05-15 21:12:25 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 7e5db4bd-fad5-4a7c-9ebd-b3b9ad40642d 0xc002788967 0xc002788968}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-cs5ql,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-cs5ql,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-cs5ql,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:12:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:12:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:12:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:12:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.79,StartTime:2020-05-15 21:12:26 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-15 21:12:29 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://1c9bd452f52b75552943d2257232da8471ee3799c90234cfbcc4090fde2c08a2,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.79,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:12:30.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8837" for this suite. • [SLOW TEST:9.441 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":278,"completed":9,"skipped":170,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:12:30.141: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 15 21:12:30.264: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-c1ffb526-e1f1-44a1-a29f-52dd51b28078" in namespace "security-context-test-7220" to be "success or failure" May 15 21:12:30.294: INFO: Pod "busybox-readonly-false-c1ffb526-e1f1-44a1-a29f-52dd51b28078": Phase="Pending", Reason="", readiness=false. Elapsed: 30.027796ms May 15 21:12:32.299: INFO: Pod "busybox-readonly-false-c1ffb526-e1f1-44a1-a29f-52dd51b28078": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034273678s May 15 21:12:34.302: INFO: Pod "busybox-readonly-false-c1ffb526-e1f1-44a1-a29f-52dd51b28078": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.037958987s May 15 21:12:34.302: INFO: Pod "busybox-readonly-false-c1ffb526-e1f1-44a1-a29f-52dd51b28078" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:12:34.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-7220" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":278,"completed":10,"skipped":194,"failed":0} SSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:12:34.311: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-1513 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-1513 STEP: Creating statefulset with conflicting port in namespace statefulset-1513 STEP: Waiting until pod test-pod will start running in namespace statefulset-1513 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-1513 May 15 21:12:40.456: INFO: Observed stateful pod in namespace: statefulset-1513, name: ss-0, uid: 6052df01-1d9f-4981-9970-19c911930764, status phase: Pending. Waiting for statefulset controller to delete. May 15 21:12:41.014: INFO: Observed stateful pod in namespace: statefulset-1513, name: ss-0, uid: 6052df01-1d9f-4981-9970-19c911930764, status phase: Failed. Waiting for statefulset controller to delete. May 15 21:12:41.040: INFO: Observed stateful pod in namespace: statefulset-1513, name: ss-0, uid: 6052df01-1d9f-4981-9970-19c911930764, status phase: Failed. Waiting for statefulset controller to delete. 
May 15 21:12:41.101: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-1513 STEP: Removing pod with conflicting port in namespace statefulset-1513 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-1513 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 15 21:12:45.171: INFO: Deleting all statefulset in ns statefulset-1513 May 15 21:12:45.174: INFO: Scaling statefulset ss to 0 May 15 21:12:55.190: INFO: Waiting for statefulset status.replicas updated to 0 May 15 21:12:55.193: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:12:55.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1513" for this suite. • [SLOW TEST:20.902 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":278,"completed":11,"skipped":203,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:12:55.213: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready May 15 21:12:55.636: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set May 15 21:12:57.645: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725173975, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725173975, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", 
Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725173975, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725173975, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 15 21:13:00.783: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 15 21:13:00.787: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:13:02.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-679" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:7.078 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":278,"completed":12,"skipped":210,"failed":0} SSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:13:02.291: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 May 15 21:13:02.411: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 15 21:13:02.444: INFO: Waiting for terminating namespaces to be deleted... 
May 15 21:13:02.446: INFO: Logging pods the kubelet thinks are on node jerma-worker before test May 15 21:13:02.463: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 15 21:13:02.463: INFO: Container kindnet-cni ready: true, restart count 0 May 15 21:13:02.463: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 15 21:13:02.463: INFO: Container kube-proxy ready: true, restart count 0 May 15 21:13:02.463: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test May 15 21:13:02.469: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 15 21:13:02.469: INFO: Container kindnet-cni ready: true, restart count 0 May 15 21:13:02.469: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container status recorded) May 15 21:13:02.469: INFO: Container kube-bench ready: false, restart count 0 May 15 21:13:02.469: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 15 21:13:02.469: INFO: Container kube-proxy ready: true, restart count 0 May 15 21:13:02.469: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container status recorded) May 15 21:13:02.469: INFO: Container kube-hunter ready: false, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.160f500688c6d71b], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] STEP: Considering event: Type = [Warning], Name = [restricted-pod.160f50069ff2402a], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:13:03.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6248" for this suite. 
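The FailedScheduling events above come from a pod whose nodeSelector matches no node's labels. A minimal sketch of such a pod; the pod name matches the log, while the selector key/value and image are illustrative:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
		Spec: corev1.PodSpec{
			// No node carries this label, so the scheduler reports
			// "0/3 nodes are available: 3 node(s) didn't match node selector."
			NodeSelector: map[string]string{"env": "no-such-node"}, // illustrative
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.1", // illustrative image
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```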
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":278,"completed":13,"skipped":215,"failed":0} SSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:13:03.495: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:13:10.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-170" for this suite. STEP: Destroying namespace "nsdeletetest-1899" for this suite. May 15 21:13:10.138: INFO: Namespace nsdeletetest-1899 was already deleted STEP: Destroying namespace "nsdeletetest-5118" for this suite. 
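The final verification step of the namespaces test (no services survive deletion and recreation of the namespace) can be reproduced with a client-go list call. A sketch assuming a recent client-go whose methods take a context; the kubeconfig path and the namespace name are assumptions:

```go
package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	home, _ := os.UserHomeDir()
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(home, ".kube", "config"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// After the namespace is deleted and recreated, the service created in it
	// must be gone; an empty list is what the test asserts.
	svcs, err := cs.CoreV1().Services("nsdeletetest-demo").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("services remaining: %d\n", len(svcs.Items))
}
```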
• [SLOW TEST:6.647 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":278,"completed":14,"skipped":218,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:13:10.143: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object May 15 21:13:10.336: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2803 /api/v1/namespaces/watch-2803/configmaps/e2e-watch-test-label-changed 79d97115-9830-4358-b33e-55a6ac73383a 16464569 0 2020-05-15 21:13:10 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} May 15 21:13:10.336: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2803 /api/v1/namespaces/watch-2803/configmaps/e2e-watch-test-label-changed 79d97115-9830-4358-b33e-55a6ac73383a 16464570 0 2020-05-15 21:13:10 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} May 15 21:13:10.336: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2803 /api/v1/namespaces/watch-2803/configmaps/e2e-watch-test-label-changed 79d97115-9830-4358-b33e-55a6ac73383a 16464573 0 2020-05-15 21:13:10 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored May 15 21:13:20.403: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2803 /api/v1/namespaces/watch-2803/configmaps/e2e-watch-test-label-changed 79d97115-9830-4358-b33e-55a6ac73383a 16464610 0 2020-05-15 21:13:10 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] 
[]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 15 21:13:20.403: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2803 /api/v1/namespaces/watch-2803/configmaps/e2e-watch-test-label-changed 79d97115-9830-4358-b33e-55a6ac73383a 16464611 0 2020-05-15 21:13:10 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} May 15 21:13:20.403: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2803 /api/v1/namespaces/watch-2803/configmaps/e2e-watch-test-label-changed 79d97115-9830-4358-b33e-55a6ac73383a 16464613 0 2020-05-15 21:13:10 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:13:20.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2803" for this suite. • [SLOW TEST:10.279 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":278,"completed":15,"skipped":242,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:13:20.422: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 15 21:13:28.555: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 15 21:13:28.566: INFO: Pod pod-with-prestop-http-hook still exists May 15 21:13:30.567: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 15 21:13:30.571: INFO: Pod pod-with-prestop-http-hook still exists May 15 21:13:32.567: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 15 21:13:32.571: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:13:32.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-5325" for this suite. • [SLOW TEST:12.165 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":278,"completed":16,"skipped":265,"failed":0} SS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:13:32.587: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 15 21:13:32.652: INFO: Waiting up to 5m0s for pod "busybox-user-65534-a88576c2-fefd-4f53-b45c-b603bc74550a" in namespace "security-context-test-4667" to be "success or failure" May 15 21:13:32.679: INFO: Pod "busybox-user-65534-a88576c2-fefd-4f53-b45c-b603bc74550a": Phase="Pending", Reason="", readiness=false. Elapsed: 27.036835ms May 15 21:13:34.684: INFO: Pod "busybox-user-65534-a88576c2-fefd-4f53-b45c-b603bc74550a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031637715s May 15 21:13:36.688: INFO: Pod "busybox-user-65534-a88576c2-fefd-4f53-b45c-b603bc74550a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.035587109s May 15 21:13:36.688: INFO: Pod "busybox-user-65534-a88576c2-fefd-4f53-b45c-b603bc74550a" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:13:36.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-4667" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":17,"skipped":267,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:13:36.706: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-9924 STEP: creating a selector STEP: Creating the service pods in kubernetes May 15 21:13:36.783: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 15 21:14:06.952: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.242:8080/dial?request=hostname&protocol=udp&host=10.244.1.241&port=8081&tries=1'] Namespace:pod-network-test-9924 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 15 21:14:06.952: INFO: >>> kubeConfig: /root/.kube/config I0515 21:14:06.983988 6 log.go:172] (0xc002cde2c0) (0xc002342640) Create stream I0515 21:14:06.984026 6 log.go:172] (0xc002cde2c0) (0xc002342640) Stream added, broadcasting: 1 I0515 21:14:06.985996 6 log.go:172] (0xc002cde2c0) Reply frame received for 1 I0515 21:14:06.986028 6 log.go:172] (0xc002cde2c0) (0xc0023426e0) Create stream I0515 21:14:06.986036 6 log.go:172] (0xc002cde2c0) (0xc0023426e0) Stream added, broadcasting: 3 I0515 21:14:06.986951 6 log.go:172] (0xc002cde2c0) Reply frame received for 3 I0515 21:14:06.987008 6 log.go:172] (0xc002cde2c0) (0xc002342780) Create stream I0515 21:14:06.987022 6 log.go:172] (0xc002cde2c0) (0xc002342780) Stream added, broadcasting: 5 I0515 21:14:06.987759 6 log.go:172] (0xc002cde2c0) Reply frame received for 5 I0515 21:14:07.126187 6 log.go:172] (0xc002cde2c0) Data frame received for 3 I0515 21:14:07.126223 6 log.go:172] (0xc0023426e0) (3) Data frame handling I0515 21:14:07.126248 6 log.go:172] (0xc0023426e0) (3) Data frame sent I0515 21:14:07.126269 6 log.go:172] (0xc002cde2c0) Data frame received for 3 I0515 21:14:07.126287 6 log.go:172] (0xc0023426e0) (3) Data frame handling I0515 21:14:07.126642 6 log.go:172] (0xc002cde2c0) Data frame received for 5 I0515 21:14:07.126668 6 log.go:172] (0xc002342780) (5) Data frame handling I0515 21:14:07.128159 6 log.go:172] (0xc002cde2c0) Data frame 
received for 1 I0515 21:14:07.128171 6 log.go:172] (0xc002342640) (1) Data frame handling I0515 21:14:07.128181 6 log.go:172] (0xc002342640) (1) Data frame sent I0515 21:14:07.128191 6 log.go:172] (0xc002cde2c0) (0xc002342640) Stream removed, broadcasting: 1 I0515 21:14:07.128466 6 log.go:172] (0xc002cde2c0) Go away received I0515 21:14:07.128488 6 log.go:172] (0xc002cde2c0) (0xc002342640) Stream removed, broadcasting: 1 I0515 21:14:07.128498 6 log.go:172] (0xc002cde2c0) (0xc0023426e0) Stream removed, broadcasting: 3 I0515 21:14:07.128503 6 log.go:172] (0xc002cde2c0) (0xc002342780) Stream removed, broadcasting: 5 May 15 21:14:07.128: INFO: Waiting for responses: map[] May 15 21:14:07.131: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.242:8080/dial?request=hostname&protocol=udp&host=10.244.2.85&port=8081&tries=1'] Namespace:pod-network-test-9924 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 15 21:14:07.131: INFO: >>> kubeConfig: /root/.kube/config I0515 21:14:07.163807 6 log.go:172] (0xc002432370) (0xc00276c820) Create stream I0515 21:14:07.163831 6 log.go:172] (0xc002432370) (0xc00276c820) Stream added, broadcasting: 1 I0515 21:14:07.166560 6 log.go:172] (0xc002432370) Reply frame received for 1 I0515 21:14:07.166600 6 log.go:172] (0xc002432370) (0xc0028b8000) Create stream I0515 21:14:07.166615 6 log.go:172] (0xc002432370) (0xc0028b8000) Stream added, broadcasting: 3 I0515 21:14:07.167261 6 log.go:172] (0xc002432370) Reply frame received for 3 I0515 21:14:07.167296 6 log.go:172] (0xc002432370) (0xc001e38460) Create stream I0515 21:14:07.167311 6 log.go:172] (0xc002432370) (0xc001e38460) Stream added, broadcasting: 5 I0515 21:14:07.167817 6 log.go:172] (0xc002432370) Reply frame received for 5 I0515 21:14:07.228958 6 log.go:172] (0xc002432370) Data frame received for 3 I0515 21:14:07.228981 6 log.go:172] (0xc0028b8000) (3) Data frame handling I0515 21:14:07.228994 6 log.go:172] (0xc0028b8000) (3) Data frame sent I0515 21:14:07.229244 6 log.go:172] (0xc002432370) Data frame received for 3 I0515 21:14:07.229257 6 log.go:172] (0xc0028b8000) (3) Data frame handling I0515 21:14:07.229279 6 log.go:172] (0xc002432370) Data frame received for 5 I0515 21:14:07.229290 6 log.go:172] (0xc001e38460) (5) Data frame handling I0515 21:14:07.230390 6 log.go:172] (0xc002432370) Data frame received for 1 I0515 21:14:07.230410 6 log.go:172] (0xc00276c820) (1) Data frame handling I0515 21:14:07.230421 6 log.go:172] (0xc00276c820) (1) Data frame sent I0515 21:14:07.230433 6 log.go:172] (0xc002432370) (0xc00276c820) Stream removed, broadcasting: 1 I0515 21:14:07.230450 6 log.go:172] (0xc002432370) Go away received I0515 21:14:07.230568 6 log.go:172] (0xc002432370) (0xc00276c820) Stream removed, broadcasting: 1 I0515 21:14:07.230586 6 log.go:172] (0xc002432370) (0xc0028b8000) Stream removed, broadcasting: 3 I0515 21:14:07.230603 6 log.go:172] (0xc002432370) (0xc001e38460) Stream removed, broadcasting: 5 May 15 21:14:07.230: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:14:07.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-9924" for this suite. 
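------------------------------
The connectivity check above works by asking the agnhost test container's HTTP /dial endpoint to relay a UDP request to the target pod and report what came back; "Waiting for responses: map[]" means every expected hostname was accounted for. A small sketch of the client side of that probe, using only the standard library; the JSON field name `responses` is inferred from the framework's log wording and should be treated as an assumption:

    package main

    import (
        "encoding/json"
        "fmt"
        "net/http"
        "net/url"
    )

    // dialReply models the JSON the agnhost /dial handler is assumed to return.
    type dialReply struct {
        Responses []string `json:"responses"`
    }

    func main() {
        // Same query the test issues: relay one UDP "hostname" request.
        q := url.Values{}
        q.Set("request", "hostname")
        q.Set("protocol", "udp")
        q.Set("host", "10.244.1.241") // target pod IP from the run above
        q.Set("port", "8081")
        q.Set("tries", "1")
        resp, err := http.Get("http://10.244.1.242:8080/dial?" + q.Encode())
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        var r dialReply
        if err := json.NewDecoder(resp.Body).Decode(&r); err != nil {
            panic(err)
        }
        fmt.Println("responses:", r.Responses) // expected: the target pod's hostname
    }

------------------------------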
• [SLOW TEST:30.530 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":18,"skipped":311,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:14:07.236: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-4ac6e608-3b7d-4adb-886d-cfd882a03684 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:14:13.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6176" for this suite. 
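------------------------------
The ConfigMap spec above verifies that both Data (UTF-8 text) and BinaryData (arbitrary bytes) keys surface as files when the ConfigMap is mounted as a volume. A minimal client-go v0.17-style sketch of creating such a ConfigMap; the name and contents here are illustrative:

    package main

    import (
        "os"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        cm := &v1.ConfigMap{
            ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-upd"},
            // Text keys become plain files in the volume...
            Data: map[string]string{"data": "value-1"},
            // ...and binary keys become files of raw bytes, with no UTF-8 restriction.
            BinaryData: map[string][]byte{"dump": {0xde, 0xad, 0xbe, 0xef}},
        }
        if _, err := cs.CoreV1().ConfigMaps("configmap-6176").Create(cm); err != nil {
            panic(err)
        }
    }

------------------------------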
• [SLOW TEST:6.299 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":19,"skipped":320,"failed":0} SSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:14:13.536: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod May 15 21:14:18.362: INFO: Successfully updated pod "annotationupdate71f7924f-8242-4b4f-a6e3-00205201fce8" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:14:20.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-444" for this suite. 
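------------------------------
The Downward API spec above relies on the kubelet refreshing a downwardAPI volume file after the pod's annotations are updated, so the change becomes visible inside the running container without a restart. A sketch of the relevant volume wiring using the core/v1 Go types; volume name and mount path are illustrative:

    package main

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
    )

    // annotationsVolume projects the pod's own annotations into a file that the
    // kubelet keeps up to date as the annotations change.
    func annotationsVolume() (v1.Volume, v1.VolumeMount) {
        vol := v1.Volume{
            Name: "podinfo",
            VolumeSource: v1.VolumeSource{
                DownwardAPI: &v1.DownwardAPIVolumeSource{
                    Items: []v1.DownwardAPIVolumeFile{{
                        Path:     "annotations",
                        FieldRef: &v1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
                    }},
                },
            },
        }
        mount := v1.VolumeMount{Name: "podinfo", MountPath: "/etc/podinfo"}
        return vol, mount
    }

    func main() {
        vol, mount := annotationsVolume()
        fmt.Printf("volume: %+v\nmount: %+v\n", vol, mount)
    }

After an annotation update like the "Successfully updated pod" step above, the file contents typically catch up within the kubelet's sync period.
------------------------------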
• [SLOW TEST:6.863 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":20,"skipped":323,"failed":0} SSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:14:20.399: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-2869.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-2869.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2869.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-2869.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-2869.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2869.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 15 21:14:27.048: INFO: DNS probes using dns-2869/dns-test-3a96b56a-7a45-4afe-85de-30ae0b3ebde2 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:14:27.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2869" for this suite. 
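------------------------------
The probe scripts above build the pod's DNS A record by dashing its IP: the awk pipeline turns 10.244.1.241 into 10-244-1-241.dns-2869.pod.cluster.local. The same derivation in Go, for reference:

    package main

    import (
        "fmt"
        "strings"
    )

    // podARecord mirrors the awk pipeline in the wheezy/jessie probe scripts:
    // dash the pod IP and append <namespace>.pod.<cluster domain>.
    func podARecord(podIP, namespace, clusterDomain string) string {
        return fmt.Sprintf("%s.%s.pod.%s",
            strings.ReplaceAll(podIP, ".", "-"), namespace, clusterDomain)
    }

    func main() {
        fmt.Println(podARecord("10.244.1.241", "dns-2869", "cluster.local"))
        // Output: 10-244-1-241.dns-2869.pod.cluster.local
    }

------------------------------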
• [SLOW TEST:6.755 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":278,"completed":21,"skipped":327,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:14:27.154: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change May 15 21:14:27.651: INFO: Pod name pod-release: Found 0 pods out of 1 May 15 21:14:32.655: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:14:33.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-4794" for this suite. • [SLOW TEST:6.590 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":278,"completed":22,"skipped":338,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:14:33.744: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:14:50.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5162" for this suite. • [SLOW TEST:16.682 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":278,"completed":23,"skipped":355,"failed":0} [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:14:50.426: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-db272db0-89b7-4064-a3fd-df1ca1892e2d STEP: Creating a pod to test consume configMaps May 15 21:14:50.576: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0f883341-8359-46cb-8883-27088454f57b" in namespace "projected-2285" to be "success or failure" May 15 21:14:50.592: INFO: Pod "pod-projected-configmaps-0f883341-8359-46cb-8883-27088454f57b": Phase="Pending", Reason="", readiness=false. Elapsed: 15.180589ms May 15 21:14:52.596: INFO: Pod "pod-projected-configmaps-0f883341-8359-46cb-8883-27088454f57b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019315399s May 15 21:14:54.600: INFO: Pod "pod-projected-configmaps-0f883341-8359-46cb-8883-27088454f57b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.023203625s STEP: Saw pod success May 15 21:14:54.600: INFO: Pod "pod-projected-configmaps-0f883341-8359-46cb-8883-27088454f57b" satisfied condition "success or failure" May 15 21:14:54.603: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-0f883341-8359-46cb-8883-27088454f57b container projected-configmap-volume-test: STEP: delete the pod May 15 21:14:54.652: INFO: Waiting for pod pod-projected-configmaps-0f883341-8359-46cb-8883-27088454f57b to disappear May 15 21:14:54.661: INFO: Pod pod-projected-configmaps-0f883341-8359-46cb-8883-27088454f57b no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:14:54.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2285" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":24,"skipped":355,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:14:54.670: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 15 21:14:54.793: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) May 15 21:14:54.853: INFO: Pod name sample-pod: Found 0 pods out of 1 May 15 21:14:59.856: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 15 21:14:59.856: INFO: Creating deployment "test-rolling-update-deployment" May 15 21:14:59.859: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has May 15 21:14:59.867: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created May 15 21:15:01.875: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected May 15 21:15:01.878: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725174100, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725174100, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725174100, loc:(*time.Location)(0x78ee0c0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725174099, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)} May 15 21:15:03.882: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 15 21:15:03.891: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-952 /apis/apps/v1/namespaces/deployment-952/deployments/test-rolling-update-deployment bfec5d28-a949-4023-b0f1-664ef730b951 16465330 1 2020-05-15 21:14:59 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001e23578 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-15 21:15:00 +0000 UTC,LastTransitionTime:2020-05-15 21:15:00 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-05-15 21:15:03 +0000 UTC,LastTransitionTime:2020-05-15 21:14:59 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 15 21:15:03.894: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444 deployment-952 /apis/apps/v1/namespaces/deployment-952/replicasets/test-rolling-update-deployment-67cf4f6444 34b0fd8d-b96a-4457-82fb-354f7232e70a 16465319 1 2020-05-15 21:14:59 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment bfec5d28-a949-4023-b0f1-664ef730b951 0xc001dd5837 0xc001dd5838}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash:
67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001dd58a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 15 21:15:03.894: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": May 15 21:15:03.895: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-952 /apis/apps/v1/namespaces/deployment-952/replicasets/test-rolling-update-controller f5ce4eac-9019-4c94-8c26-1dac930a7a5c 16465329 2 2020-05-15 21:14:54 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment bfec5d28-a949-4023-b0f1-664ef730b951 0xc001dd5767 0xc001dd5768}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc001dd57c8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 15 21:15:03.898: INFO: Pod "test-rolling-update-deployment-67cf4f6444-l7qfv" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-l7qfv test-rolling-update-deployment-67cf4f6444- deployment-952 /api/v1/namespaces/deployment-952/pods/test-rolling-update-deployment-67cf4f6444-l7qfv 8edaa5fe-3ca3-4c52-9ca4-0b07f9487854 16465318 0 2020-05-15 21:14:59 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 34b0fd8d-b96a-4457-82fb-354f7232e70a 0xc001dd5d07 0xc001dd5d08}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-g68f2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-g68f2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-g68f2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:14:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:15:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:15:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:14:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.248,StartTime:2020-05-15 21:14:59 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-15 21:15:02 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://a54061c6a6d6a5567d26ecdfdf00f978a3e5dfd621020a046b5aff7dd6ef5eab,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.248,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:15:03.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-952" for this suite. • [SLOW TEST:9.235 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":25,"skipped":404,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:15:03.906: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1754 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 15 21:15:03.999: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-6233' May 15 21:15:07.148: INFO: stderr: "" May 15 21:15:07.148: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1759 May 15 21:15:07.158: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-6233' May 15 21:15:19.549: INFO: stderr: "" May 15 21:15:19.549: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:15:19.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6233" for this suite. • [SLOW TEST:15.658 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1750 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":278,"completed":26,"skipped":416,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:15:19.564: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 15 21:15:19.664: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. May 15 21:15:19.674: INFO: Number of nodes with available pods: 0 May 15 21:15:19.674: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
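------------------------------
The polling that follows watches how a DaemonSet with a node selector reacts to node labels: no daemon pods while no node matches, one pod per matching node once the label is applied, and eviction again when the label changes. A client-go v0.17-style sketch of the label flip that drives it; the label key/value here are illustrative stand-ins for the test's generated color labels:

    package main

    import (
        "os"

        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // The DaemonSet under test pins its pods with a nodeSelector (say
        // color=blue). Applying that label to a node is all it takes for the
        // controller to launch a daemon pod there; changing or removing the
        // label evicts it again, which is what the polling below waits for.
        patch := []byte(`{"metadata":{"labels":{"color":"blue"}}}`)
        if _, err := cs.CoreV1().Nodes().Patch("jerma-worker2",
            types.StrategicMergePatchType, patch); err != nil {
            panic(err)
        }
    }

------------------------------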
May 15 21:15:19.735: INFO: Number of nodes with available pods: 0 May 15 21:15:19.735: INFO: Node jerma-worker2 is running more than one daemon pod May 15 21:15:20.742: INFO: Number of nodes with available pods: 0 May 15 21:15:20.742: INFO: Node jerma-worker2 is running more than one daemon pod May 15 21:15:22.011: INFO: Number of nodes with available pods: 0 May 15 21:15:22.011: INFO: Node jerma-worker2 is running more than one daemon pod May 15 21:15:22.772: INFO: Number of nodes with available pods: 0 May 15 21:15:22.772: INFO: Node jerma-worker2 is running more than one daemon pod May 15 21:15:23.739: INFO: Number of nodes with available pods: 0 May 15 21:15:23.739: INFO: Node jerma-worker2 is running more than one daemon pod May 15 21:15:24.738: INFO: Number of nodes with available pods: 1 May 15 21:15:24.738: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled May 15 21:15:24.829: INFO: Number of nodes with available pods: 1 May 15 21:15:24.829: INFO: Number of running nodes: 0, number of available pods: 1 May 15 21:15:25.873: INFO: Number of nodes with available pods: 0 May 15 21:15:25.873: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate May 15 21:15:25.914: INFO: Number of nodes with available pods: 0 May 15 21:15:25.914: INFO: Node jerma-worker2 is running more than one daemon pod May 15 21:15:26.918: INFO: Number of nodes with available pods: 0 May 15 21:15:26.918: INFO: Node jerma-worker2 is running more than one daemon pod May 15 21:15:27.933: INFO: Number of nodes with available pods: 0 May 15 21:15:27.933: INFO: Node jerma-worker2 is running more than one daemon pod May 15 21:15:28.919: INFO: Number of nodes with available pods: 0 May 15 21:15:28.919: INFO: Node jerma-worker2 is running more than one daemon pod May 15 21:15:29.942: INFO: Number of nodes with available pods: 0 May 15 21:15:29.942: INFO: Node jerma-worker2 is running more than one daemon pod May 15 21:15:30.919: INFO: Number of nodes with available pods: 0 May 15 21:15:30.919: INFO: Node jerma-worker2 is running more than one daemon pod May 15 21:15:31.919: INFO: Number of nodes with available pods: 0 May 15 21:15:31.919: INFO: Node jerma-worker2 is running more than one daemon pod May 15 21:15:32.919: INFO: Number of nodes with available pods: 0 May 15 21:15:32.919: INFO: Node jerma-worker2 is running more than one daemon pod May 15 21:15:33.919: INFO: Number of nodes with available pods: 0 May 15 21:15:33.919: INFO: Node jerma-worker2 is running more than one daemon pod May 15 21:15:34.918: INFO: Number of nodes with available pods: 0 May 15 21:15:34.918: INFO: Node jerma-worker2 is running more than one daemon pod May 15 21:15:35.918: INFO: Number of nodes with available pods: 0 May 15 21:15:35.918: INFO: Node jerma-worker2 is running more than one daemon pod May 15 21:15:36.917: INFO: Number of nodes with available pods: 0 May 15 21:15:36.917: INFO: Node jerma-worker2 is running more than one daemon pod May 15 21:15:37.918: INFO: Number of nodes with available pods: 0 May 15 21:15:37.918: INFO: Node jerma-worker2 is running more than one daemon pod May 15 21:15:38.917: INFO: Number of nodes with available pods: 0 May 15 21:15:38.917: INFO: Node jerma-worker2 is running more than one daemon pod May 15 21:15:39.919: INFO: Number of nodes with available pods: 0 May 15 21:15:39.919: INFO: Node jerma-worker2 is running 
more than one daemon pod May 15 21:15:40.918: INFO: Number of nodes with available pods: 0 May 15 21:15:40.918: INFO: Node jerma-worker2 is running more than one daemon pod May 15 21:15:41.919: INFO: Number of nodes with available pods: 0 May 15 21:15:41.919: INFO: Node jerma-worker2 is running more than one daemon pod May 15 21:15:42.919: INFO: Number of nodes with available pods: 1 May 15 21:15:42.919: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3772, will wait for the garbage collector to delete the pods May 15 21:15:42.984: INFO: Deleting DaemonSet.extensions daemon-set took: 6.441768ms May 15 21:15:43.284: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.321635ms May 15 21:15:49.588: INFO: Number of nodes with available pods: 0 May 15 21:15:49.588: INFO: Number of running nodes: 0, number of available pods: 0 May 15 21:15:49.594: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3772/daemonsets","resourceVersion":"16465559"},"items":null} May 15 21:15:49.596: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3772/pods","resourceVersion":"16465559"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:15:49.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3772" for this suite. • [SLOW TEST:30.078 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":278,"completed":27,"skipped":433,"failed":0} SSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:15:49.643: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name projected-secret-test-4801580e-b422-4101-b65f-ba814bac7e43 STEP: Creating a pod to test consume secrets May 15 21:15:49.742: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6fd9d989-75f0-40df-8b5a-c24c309ac1f0" in namespace "projected-5755" to be "success or failure" May 15 21:15:49.758: INFO: Pod "pod-projected-secrets-6fd9d989-75f0-40df-8b5a-c24c309ac1f0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 15.745084ms May 15 21:15:51.783: INFO: Pod "pod-projected-secrets-6fd9d989-75f0-40df-8b5a-c24c309ac1f0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040701832s May 15 21:15:53.788: INFO: Pod "pod-projected-secrets-6fd9d989-75f0-40df-8b5a-c24c309ac1f0": Phase="Running", Reason="", readiness=true. Elapsed: 4.045279551s May 15 21:15:55.792: INFO: Pod "pod-projected-secrets-6fd9d989-75f0-40df-8b5a-c24c309ac1f0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.050021263s STEP: Saw pod success May 15 21:15:55.792: INFO: Pod "pod-projected-secrets-6fd9d989-75f0-40df-8b5a-c24c309ac1f0" satisfied condition "success or failure" May 15 21:15:55.796: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-6fd9d989-75f0-40df-8b5a-c24c309ac1f0 container secret-volume-test: STEP: delete the pod May 15 21:15:55.820: INFO: Waiting for pod pod-projected-secrets-6fd9d989-75f0-40df-8b5a-c24c309ac1f0 to disappear May 15 21:15:55.825: INFO: Pod pod-projected-secrets-6fd9d989-75f0-40df-8b5a-c24c309ac1f0 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:15:55.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5755" for this suite. • [SLOW TEST:6.189 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":28,"skipped":437,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:15:55.833: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1275 STEP: creating the pod May 15 21:15:55.912: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1994' May 15 21:15:56.277: INFO: stderr: "" May 15 21:15:56.277: INFO: stdout: "pod/pause created\n" May 15 21:15:56.277: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] May 15 21:15:56.277: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-1994" to be "running and ready" May 15 21:15:56.293: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 15.669877ms May 15 21:15:58.297: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020376118s May 15 21:16:00.301: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.024209862s May 15 21:16:00.301: INFO: Pod "pause" satisfied condition "running and ready" May 15 21:16:00.301: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: adding the label testing-label with value testing-label-value to a pod May 15 21:16:00.301: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-1994' May 15 21:16:00.408: INFO: stderr: "" May 15 21:16:00.408: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value May 15 21:16:00.408: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-1994' May 15 21:16:00.487: INFO: stderr: "" May 15 21:16:00.487: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod May 15 21:16:00.487: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-1994' May 15 21:16:00.584: INFO: stderr: "" May 15 21:16:00.584: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label May 15 21:16:00.584: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-1994' May 15 21:16:00.741: INFO: stderr: "" May 15 21:16:00.741: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1282 STEP: using delete to clean up resources May 15 21:16:00.741: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1994' May 15 21:16:00.861: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 15 21:16:00.861: INFO: stdout: "pod \"pause\" force deleted\n" May 15 21:16:00.861: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-1994' May 15 21:16:00.951: INFO: stderr: "No resources found in kubectl-1994 namespace.\n" May 15 21:16:00.951: INFO: stdout: "" May 15 21:16:00.951: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-1994 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 15 21:16:01.059: INFO: stderr: "" May 15 21:16:01.059: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:16:01.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1994" for this suite. 
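------------------------------
The label add/remove cycle above can also be done programmatically: with a strategic merge patch, setting a label key to null removes it, mirroring the trailing-dash form of kubectl label. A client-go v0.17-style sketch, reusing the pod name and namespace from the run above:

    package main

    import (
        "os"

        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pods := cs.CoreV1().Pods("kubectl-1994")
        // Equivalent of: kubectl label pods pause testing-label=testing-label-value
        add := []byte(`{"metadata":{"labels":{"testing-label":"testing-label-value"}}}`)
        if _, err := pods.Patch("pause", types.StrategicMergePatchType, add); err != nil {
            panic(err)
        }
        // Equivalent of: kubectl label pods pause testing-label-  (null deletes the key)
        del := []byte(`{"metadata":{"labels":{"testing-label":null}}}`)
        if _, err := pods.Patch("pause", types.StrategicMergePatchType, del); err != nil {
            panic(err)
        }
    }

------------------------------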
• [SLOW TEST:5.297 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1272 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":278,"completed":29,"skipped":474,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:16:01.130: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on tmpfs May 15 21:16:01.435: INFO: Waiting up to 5m0s for pod "pod-519951b7-a875-4f3d-8ca4-1ef41418050b" in namespace "emptydir-1862" to be "success or failure" May 15 21:16:01.468: INFO: Pod "pod-519951b7-a875-4f3d-8ca4-1ef41418050b": Phase="Pending", Reason="", readiness=false. Elapsed: 33.10755ms May 15 21:16:03.471: INFO: Pod "pod-519951b7-a875-4f3d-8ca4-1ef41418050b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036259872s May 15 21:16:05.475: INFO: Pod "pod-519951b7-a875-4f3d-8ca4-1ef41418050b": Phase="Running", Reason="", readiness=true. Elapsed: 4.04086207s May 15 21:16:07.479: INFO: Pod "pod-519951b7-a875-4f3d-8ca4-1ef41418050b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.044911968s STEP: Saw pod success May 15 21:16:07.480: INFO: Pod "pod-519951b7-a875-4f3d-8ca4-1ef41418050b" satisfied condition "success or failure" May 15 21:16:07.483: INFO: Trying to get logs from node jerma-worker pod pod-519951b7-a875-4f3d-8ca4-1ef41418050b container test-container: STEP: delete the pod May 15 21:16:07.512: INFO: Waiting for pod pod-519951b7-a875-4f3d-8ca4-1ef41418050b to disappear May 15 21:16:07.529: INFO: Pod pod-519951b7-a875-4f3d-8ca4-1ef41418050b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:16:07.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1862" for this suite. 
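------------------------------
The emptyDir spec above checks the mount's default file mode when the volume is backed by tmpfs, which a pod requests by setting the volume's medium to Memory. The corresponding core/v1 volume definition; the volume name and mount path are illustrative:

    package main

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
    )

    func main() {
        // medium: Memory asks the kubelet to back the emptyDir with tmpfs
        // instead of node-local disk, so files written there live in RAM
        // and count against the container's memory usage.
        vol := v1.Volume{
            Name: "test-volume",
            VolumeSource: v1.VolumeSource{
                EmptyDir: &v1.EmptyDirVolumeSource{Medium: v1.StorageMediumMemory},
            },
        }
        mount := v1.VolumeMount{Name: "test-volume", MountPath: "/test-volume"}
        fmt.Printf("%+v\n%+v\n", vol, mount)
    }

------------------------------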
• [SLOW TEST:6.406 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":30,"skipped":493,"failed":0} S ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:16:07.536: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-27d46613-e93f-42de-b728-3a39008955e9 STEP: Creating a pod to test consume configMaps May 15 21:16:07.633: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-172e5aff-709c-4270-9b02-17aab0b79bf1" in namespace "projected-7876" to be "success or failure" May 15 21:16:07.643: INFO: Pod "pod-projected-configmaps-172e5aff-709c-4270-9b02-17aab0b79bf1": Phase="Pending", Reason="", readiness=false. Elapsed: 10.213967ms May 15 21:16:09.647: INFO: Pod "pod-projected-configmaps-172e5aff-709c-4270-9b02-17aab0b79bf1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014498326s May 15 21:16:11.652: INFO: Pod "pod-projected-configmaps-172e5aff-709c-4270-9b02-17aab0b79bf1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019194109s STEP: Saw pod success May 15 21:16:11.652: INFO: Pod "pod-projected-configmaps-172e5aff-709c-4270-9b02-17aab0b79bf1" satisfied condition "success or failure" May 15 21:16:11.656: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-172e5aff-709c-4270-9b02-17aab0b79bf1 container projected-configmap-volume-test: STEP: delete the pod May 15 21:16:11.686: INFO: Waiting for pod pod-projected-configmaps-172e5aff-709c-4270-9b02-17aab0b79bf1 to disappear May 15 21:16:11.690: INFO: Pod pod-projected-configmaps-172e5aff-709c-4270-9b02-17aab0b79bf1 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:16:11.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7876" for this suite. 
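------------------------------
A "mapping" in the spec above means a ConfigMap key is projected to an explicit path inside the volume rather than to a file named after the key. With the projected volume types that looks roughly like this; the key and path below are illustrative:

    package main

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
    )

    func main() {
        // Project key "data-1" of the ConfigMap to path/to/data-2 inside the
        // mount, instead of the default file name "data-1".
        vol := v1.Volume{
            Name: "projected-configmap-volume",
            VolumeSource: v1.VolumeSource{
                Projected: &v1.ProjectedVolumeSource{
                    Sources: []v1.VolumeProjection{{
                        ConfigMap: &v1.ConfigMapProjection{
                            LocalObjectReference: v1.LocalObjectReference{
                                Name: "projected-configmap-test-volume-map",
                            },
                            Items: []v1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
                        },
                    }},
                },
            },
        }
        fmt.Printf("%+v\n", vol)
    }

------------------------------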
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":31,"skipped":494,"failed":0} SSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:16:11.697: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-193354cc-7de2-47ef-b268-906aac666554 STEP: Creating a pod to test consume secrets May 15 21:16:11.798: INFO: Waiting up to 5m0s for pod "pod-secrets-d6cf150f-be72-4a97-bd00-2668c59e229a" in namespace "secrets-6126" to be "success or failure" May 15 21:16:11.805: INFO: Pod "pod-secrets-d6cf150f-be72-4a97-bd00-2668c59e229a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.846693ms May 15 21:16:13.808: INFO: Pod "pod-secrets-d6cf150f-be72-4a97-bd00-2668c59e229a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01005707s May 15 21:16:15.811: INFO: Pod "pod-secrets-d6cf150f-be72-4a97-bd00-2668c59e229a": Phase="Running", Reason="", readiness=true. Elapsed: 4.013012414s May 15 21:16:17.816: INFO: Pod "pod-secrets-d6cf150f-be72-4a97-bd00-2668c59e229a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.017207497s STEP: Saw pod success May 15 21:16:17.816: INFO: Pod "pod-secrets-d6cf150f-be72-4a97-bd00-2668c59e229a" satisfied condition "success or failure" May 15 21:16:17.819: INFO: Trying to get logs from node jerma-worker pod pod-secrets-d6cf150f-be72-4a97-bd00-2668c59e229a container secret-volume-test: STEP: delete the pod May 15 21:16:17.860: INFO: Waiting for pod pod-secrets-d6cf150f-be72-4a97-bd00-2668c59e229a to disappear May 15 21:16:17.898: INFO: Pod pod-secrets-d6cf150f-be72-4a97-bd00-2668c59e229a no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:16:17.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6126" for this suite. 
• [SLOW TEST:6.210 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":32,"skipped":498,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:16:17.908: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-2922 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-2922 STEP: creating replication controller externalsvc in namespace services-2922 I0515 21:16:18.120343 6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-2922, replica count: 2 I0515 21:16:21.170726 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0515 21:16:24.170961 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName May 15 21:16:24.234: INFO: Creating new exec pod May 15 21:16:28.283: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-2922 execpodlm8m5 -- /bin/sh -x -c nslookup clusterip-service' May 15 21:16:28.550: INFO: stderr: "I0515 21:16:28.400022 233 log.go:172] (0xc000558790) (0xc0006dfc20) Create stream\nI0515 21:16:28.400060 233 log.go:172] (0xc000558790) (0xc0006dfc20) Stream added, broadcasting: 1\nI0515 21:16:28.402427 233 log.go:172] (0xc000558790) Reply frame received for 1\nI0515 21:16:28.402477 233 log.go:172] (0xc000558790) (0xc00052c000) Create stream\nI0515 21:16:28.402495 233 log.go:172] (0xc000558790) (0xc00052c000) Stream added, broadcasting: 3\nI0515 21:16:28.403283 233 log.go:172] (0xc000558790) Reply frame received for 3\nI0515 21:16:28.403314 233 log.go:172] (0xc000558790) (0xc0006dfe00) Create stream\nI0515 21:16:28.403325 233 log.go:172] (0xc000558790) (0xc0006dfe00) Stream added, broadcasting: 5\nI0515 21:16:28.404019 233 log.go:172] (0xc000558790) Reply frame received for 5\nI0515 21:16:28.465406 233 log.go:172] (0xc000558790) Data frame received for 5\nI0515 21:16:28.465419 233 log.go:172] (0xc0006dfe00) (5) Data 
frame handling\nI0515 21:16:28.465427 233 log.go:172] (0xc0006dfe00) (5) Data frame sent\n+ nslookup clusterip-service\nI0515 21:16:28.543834 233 log.go:172] (0xc000558790) Data frame received for 3\nI0515 21:16:28.543871 233 log.go:172] (0xc00052c000) (3) Data frame handling\nI0515 21:16:28.543904 233 log.go:172] (0xc00052c000) (3) Data frame sent\nI0515 21:16:28.544573 233 log.go:172] (0xc000558790) Data frame received for 3\nI0515 21:16:28.544674 233 log.go:172] (0xc00052c000) (3) Data frame handling\nI0515 21:16:28.544768 233 log.go:172] (0xc00052c000) (3) Data frame sent\nI0515 21:16:28.544940 233 log.go:172] (0xc000558790) Data frame received for 3\nI0515 21:16:28.544963 233 log.go:172] (0xc00052c000) (3) Data frame handling\nI0515 21:16:28.544985 233 log.go:172] (0xc000558790) Data frame received for 5\nI0515 21:16:28.545008 233 log.go:172] (0xc0006dfe00) (5) Data frame handling\nI0515 21:16:28.546160 233 log.go:172] (0xc000558790) Data frame received for 1\nI0515 21:16:28.546173 233 log.go:172] (0xc0006dfc20) (1) Data frame handling\nI0515 21:16:28.546183 233 log.go:172] (0xc0006dfc20) (1) Data frame sent\nI0515 21:16:28.546337 233 log.go:172] (0xc000558790) (0xc0006dfc20) Stream removed, broadcasting: 1\nI0515 21:16:28.546352 233 log.go:172] (0xc000558790) Go away received\nI0515 21:16:28.546628 233 log.go:172] (0xc000558790) (0xc0006dfc20) Stream removed, broadcasting: 1\nI0515 21:16:28.546640 233 log.go:172] (0xc000558790) (0xc00052c000) Stream removed, broadcasting: 3\nI0515 21:16:28.546648 233 log.go:172] (0xc000558790) (0xc0006dfe00) Stream removed, broadcasting: 5\n" May 15 21:16:28.550: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-2922.svc.cluster.local\tcanonical name = externalsvc.services-2922.svc.cluster.local.\nName:\texternalsvc.services-2922.svc.cluster.local\nAddress: 10.106.202.143\n\n" STEP: deleting ReplicationController externalsvc in namespace services-2922, will wait for the garbage collector to delete the pods May 15 21:16:28.608: INFO: Deleting ReplicationController externalsvc took: 5.46952ms May 15 21:16:28.909: INFO: Terminating ReplicationController externalsvc pods took: 300.200482ms May 15 21:16:39.595: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:16:39.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2922" for this suite. 
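The type flip this spec performs can be approximated from the command line. The namespace, service, and exec-pod names below are the ones from this run; note that depending on the kubectl/apiserver version, leaving type=ClusterIP may also require clearing spec.clusterIP and spec.ports, so treat this as a sketch rather than the suite's exact update call:

# Re-point the ClusterIP service at another service's in-cluster FQDN.
kubectl patch service clusterip-service -n services-2922 --type=merge \
  -p '{"spec":{"type":"ExternalName","externalName":"externalsvc.services-2922.svc.cluster.local"}}'
# The old name should now resolve as a CNAME, matching the nslookup output above:
kubectl exec -n services-2922 execpodlm8m5 -- nslookup clusterip-service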
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:21.743 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":278,"completed":33,"skipped":525,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:16:39.651: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3128.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-3128.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3128.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-3128.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3128.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-3128.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3128.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-3128.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3128.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-3128.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3128.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-3128.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3128.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 113.202.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.202.113_udp@PTR;check="$$(dig +tcp +noall +answer +search 113.202.106.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.106.202.113_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3128.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-3128.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3128.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-3128.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3128.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-3128.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3128.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-3128.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3128.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-3128.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3128.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-3128.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3128.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 113.202.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.202.113_udp@PTR;check="$$(dig +tcp +noall +answer +search 113.202.106.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.106.202.113_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 15 21:16:47.889: INFO: Unable to read wheezy_udp@dns-test-service.dns-3128.svc.cluster.local from pod dns-3128/dns-test-33e7da82-79bb-4872-ab71-60a664ee5566: the server could not find the requested resource (get pods dns-test-33e7da82-79bb-4872-ab71-60a664ee5566) May 15 21:16:47.893: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3128.svc.cluster.local from pod dns-3128/dns-test-33e7da82-79bb-4872-ab71-60a664ee5566: the server could not find the requested resource (get pods dns-test-33e7da82-79bb-4872-ab71-60a664ee5566) May 15 21:16:47.896: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3128.svc.cluster.local from pod dns-3128/dns-test-33e7da82-79bb-4872-ab71-60a664ee5566: the server could not find the requested resource (get pods dns-test-33e7da82-79bb-4872-ab71-60a664ee5566) May 15 21:16:47.899: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3128.svc.cluster.local from pod dns-3128/dns-test-33e7da82-79bb-4872-ab71-60a664ee5566: the server could not find the requested resource (get pods dns-test-33e7da82-79bb-4872-ab71-60a664ee5566) May 15 21:16:47.923: INFO: Unable to read jessie_udp@dns-test-service.dns-3128.svc.cluster.local from pod dns-3128/dns-test-33e7da82-79bb-4872-ab71-60a664ee5566: the server could not find the requested resource (get pods dns-test-33e7da82-79bb-4872-ab71-60a664ee5566) May 15 21:16:47.926: INFO: Unable to read jessie_tcp@dns-test-service.dns-3128.svc.cluster.local from pod dns-3128/dns-test-33e7da82-79bb-4872-ab71-60a664ee5566: the server could not find the requested resource (get pods dns-test-33e7da82-79bb-4872-ab71-60a664ee5566) May 15 21:16:47.929: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3128.svc.cluster.local from pod dns-3128/dns-test-33e7da82-79bb-4872-ab71-60a664ee5566: the server could not find the requested resource (get pods dns-test-33e7da82-79bb-4872-ab71-60a664ee5566) May 15 21:16:47.932: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3128.svc.cluster.local from pod dns-3128/dns-test-33e7da82-79bb-4872-ab71-60a664ee5566: the server could not find the requested resource (get pods dns-test-33e7da82-79bb-4872-ab71-60a664ee5566) May 15 21:16:47.947: INFO: Lookups using dns-3128/dns-test-33e7da82-79bb-4872-ab71-60a664ee5566 failed for: [wheezy_udp@dns-test-service.dns-3128.svc.cluster.local wheezy_tcp@dns-test-service.dns-3128.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3128.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3128.svc.cluster.local jessie_udp@dns-test-service.dns-3128.svc.cluster.local jessie_tcp@dns-test-service.dns-3128.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3128.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3128.svc.cluster.local] May 15 21:16:52.952: INFO: Unable to read wheezy_udp@dns-test-service.dns-3128.svc.cluster.local from pod dns-3128/dns-test-33e7da82-79bb-4872-ab71-60a664ee5566: the server could not find the requested resource (get pods dns-test-33e7da82-79bb-4872-ab71-60a664ee5566) May 15 21:16:52.955: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3128.svc.cluster.local from pod dns-3128/dns-test-33e7da82-79bb-4872-ab71-60a664ee5566: the server could not find the requested resource (get pods 
dns-test-33e7da82-79bb-4872-ab71-60a664ee5566) May 15 21:16:52.959: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3128.svc.cluster.local from pod dns-3128/dns-test-33e7da82-79bb-4872-ab71-60a664ee5566: the server could not find the requested resource (get pods dns-test-33e7da82-79bb-4872-ab71-60a664ee5566) May 15 21:16:52.962: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3128.svc.cluster.local from pod dns-3128/dns-test-33e7da82-79bb-4872-ab71-60a664ee5566: the server could not find the requested resource (get pods dns-test-33e7da82-79bb-4872-ab71-60a664ee5566) May 15 21:16:52.979: INFO: Unable to read jessie_udp@dns-test-service.dns-3128.svc.cluster.local from pod dns-3128/dns-test-33e7da82-79bb-4872-ab71-60a664ee5566: the server could not find the requested resource (get pods dns-test-33e7da82-79bb-4872-ab71-60a664ee5566) May 15 21:16:52.982: INFO: Unable to read jessie_tcp@dns-test-service.dns-3128.svc.cluster.local from pod dns-3128/dns-test-33e7da82-79bb-4872-ab71-60a664ee5566: the server could not find the requested resource (get pods dns-test-33e7da82-79bb-4872-ab71-60a664ee5566) May 15 21:16:52.986: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3128.svc.cluster.local from pod dns-3128/dns-test-33e7da82-79bb-4872-ab71-60a664ee5566: the server could not find the requested resource (get pods dns-test-33e7da82-79bb-4872-ab71-60a664ee5566) May 15 21:16:52.988: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3128.svc.cluster.local from pod dns-3128/dns-test-33e7da82-79bb-4872-ab71-60a664ee5566: the server could not find the requested resource (get pods dns-test-33e7da82-79bb-4872-ab71-60a664ee5566) May 15 21:16:53.002: INFO: Lookups using dns-3128/dns-test-33e7da82-79bb-4872-ab71-60a664ee5566 failed for: [wheezy_udp@dns-test-service.dns-3128.svc.cluster.local wheezy_tcp@dns-test-service.dns-3128.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3128.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3128.svc.cluster.local jessie_udp@dns-test-service.dns-3128.svc.cluster.local jessie_tcp@dns-test-service.dns-3128.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3128.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3128.svc.cluster.local] May 15 21:16:57.952: INFO: Unable to read wheezy_udp@dns-test-service.dns-3128.svc.cluster.local from pod dns-3128/dns-test-33e7da82-79bb-4872-ab71-60a664ee5566: the server could not find the requested resource (get pods dns-test-33e7da82-79bb-4872-ab71-60a664ee5566) May 15 21:16:57.955: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3128.svc.cluster.local from pod dns-3128/dns-test-33e7da82-79bb-4872-ab71-60a664ee5566: the server could not find the requested resource (get pods dns-test-33e7da82-79bb-4872-ab71-60a664ee5566) May 15 21:16:57.959: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3128.svc.cluster.local from pod dns-3128/dns-test-33e7da82-79bb-4872-ab71-60a664ee5566: the server could not find the requested resource (get pods dns-test-33e7da82-79bb-4872-ab71-60a664ee5566) May 15 21:16:57.961: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3128.svc.cluster.local from pod dns-3128/dns-test-33e7da82-79bb-4872-ab71-60a664ee5566: the server could not find the requested resource (get pods dns-test-33e7da82-79bb-4872-ab71-60a664ee5566) May 15 21:16:57.983: INFO: Unable to read jessie_udp@dns-test-service.dns-3128.svc.cluster.local from pod dns-3128/dns-test-33e7da82-79bb-4872-ab71-60a664ee5566: the 
server could not find the requested resource (get pods dns-test-33e7da82-79bb-4872-ab71-60a664ee5566) May 15 21:16:57.986: INFO: Unable to read jessie_tcp@dns-test-service.dns-3128.svc.cluster.local from pod dns-3128/dns-test-33e7da82-79bb-4872-ab71-60a664ee5566: the server could not find the requested resource (get pods dns-test-33e7da82-79bb-4872-ab71-60a664ee5566) May 15 21:16:57.988: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3128.svc.cluster.local from pod dns-3128/dns-test-33e7da82-79bb-4872-ab71-60a664ee5566: the server could not find the requested resource (get pods dns-test-33e7da82-79bb-4872-ab71-60a664ee5566) May 15 21:16:57.994: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3128.svc.cluster.local from pod dns-3128/dns-test-33e7da82-79bb-4872-ab71-60a664ee5566: the server could not find the requested resource (get pods dns-test-33e7da82-79bb-4872-ab71-60a664ee5566) May 15 21:16:58.011: INFO: Lookups using dns-3128/dns-test-33e7da82-79bb-4872-ab71-60a664ee5566 failed for: [wheezy_udp@dns-test-service.dns-3128.svc.cluster.local wheezy_tcp@dns-test-service.dns-3128.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3128.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3128.svc.cluster.local jessie_udp@dns-test-service.dns-3128.svc.cluster.local jessie_tcp@dns-test-service.dns-3128.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3128.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3128.svc.cluster.local] May 15 21:17:02.953: INFO: Unable to read wheezy_udp@dns-test-service.dns-3128.svc.cluster.local from pod dns-3128/dns-test-33e7da82-79bb-4872-ab71-60a664ee5566: the server could not find the requested resource (get pods dns-test-33e7da82-79bb-4872-ab71-60a664ee5566) May 15 21:17:02.957: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3128.svc.cluster.local from pod dns-3128/dns-test-33e7da82-79bb-4872-ab71-60a664ee5566: the server could not find the requested resource (get pods dns-test-33e7da82-79bb-4872-ab71-60a664ee5566) May 15 21:17:02.961: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3128.svc.cluster.local from pod dns-3128/dns-test-33e7da82-79bb-4872-ab71-60a664ee5566: the server could not find the requested resource (get pods dns-test-33e7da82-79bb-4872-ab71-60a664ee5566) May 15 21:17:02.964: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3128.svc.cluster.local from pod dns-3128/dns-test-33e7da82-79bb-4872-ab71-60a664ee5566: the server could not find the requested resource (get pods dns-test-33e7da82-79bb-4872-ab71-60a664ee5566) May 15 21:17:02.988: INFO: Unable to read jessie_udp@dns-test-service.dns-3128.svc.cluster.local from pod dns-3128/dns-test-33e7da82-79bb-4872-ab71-60a664ee5566: the server could not find the requested resource (get pods dns-test-33e7da82-79bb-4872-ab71-60a664ee5566) May 15 21:17:02.990: INFO: Unable to read jessie_tcp@dns-test-service.dns-3128.svc.cluster.local from pod dns-3128/dns-test-33e7da82-79bb-4872-ab71-60a664ee5566: the server could not find the requested resource (get pods dns-test-33e7da82-79bb-4872-ab71-60a664ee5566) May 15 21:17:02.993: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3128.svc.cluster.local from pod dns-3128/dns-test-33e7da82-79bb-4872-ab71-60a664ee5566: the server could not find the requested resource (get pods dns-test-33e7da82-79bb-4872-ab71-60a664ee5566) May 15 21:17:02.996: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3128.svc.cluster.local from pod 
dns-3128/dns-test-33e7da82-79bb-4872-ab71-60a664ee5566: the server could not find the requested resource (get pods dns-test-33e7da82-79bb-4872-ab71-60a664ee5566) May 15 21:17:03.014: INFO: Lookups using dns-3128/dns-test-33e7da82-79bb-4872-ab71-60a664ee5566 failed for: [wheezy_udp@dns-test-service.dns-3128.svc.cluster.local wheezy_tcp@dns-test-service.dns-3128.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3128.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3128.svc.cluster.local jessie_udp@dns-test-service.dns-3128.svc.cluster.local jessie_tcp@dns-test-service.dns-3128.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3128.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3128.svc.cluster.local] May 15 21:17:07.951: INFO: Unable to read wheezy_udp@dns-test-service.dns-3128.svc.cluster.local from pod dns-3128/dns-test-33e7da82-79bb-4872-ab71-60a664ee5566: the server could not find the requested resource (get pods dns-test-33e7da82-79bb-4872-ab71-60a664ee5566) May 15 21:17:07.954: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3128.svc.cluster.local from pod dns-3128/dns-test-33e7da82-79bb-4872-ab71-60a664ee5566: the server could not find the requested resource (get pods dns-test-33e7da82-79bb-4872-ab71-60a664ee5566) May 15 21:17:07.956: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3128.svc.cluster.local from pod dns-3128/dns-test-33e7da82-79bb-4872-ab71-60a664ee5566: the server could not find the requested resource (get pods dns-test-33e7da82-79bb-4872-ab71-60a664ee5566) May 15 21:17:07.958: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3128.svc.cluster.local from pod dns-3128/dns-test-33e7da82-79bb-4872-ab71-60a664ee5566: the server could not find the requested resource (get pods dns-test-33e7da82-79bb-4872-ab71-60a664ee5566) May 15 21:17:07.976: INFO: Unable to read jessie_udp@dns-test-service.dns-3128.svc.cluster.local from pod dns-3128/dns-test-33e7da82-79bb-4872-ab71-60a664ee5566: the server could not find the requested resource (get pods dns-test-33e7da82-79bb-4872-ab71-60a664ee5566) May 15 21:17:07.978: INFO: Unable to read jessie_tcp@dns-test-service.dns-3128.svc.cluster.local from pod dns-3128/dns-test-33e7da82-79bb-4872-ab71-60a664ee5566: the server could not find the requested resource (get pods dns-test-33e7da82-79bb-4872-ab71-60a664ee5566) May 15 21:17:07.981: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3128.svc.cluster.local from pod dns-3128/dns-test-33e7da82-79bb-4872-ab71-60a664ee5566: the server could not find the requested resource (get pods dns-test-33e7da82-79bb-4872-ab71-60a664ee5566) May 15 21:17:07.983: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3128.svc.cluster.local from pod dns-3128/dns-test-33e7da82-79bb-4872-ab71-60a664ee5566: the server could not find the requested resource (get pods dns-test-33e7da82-79bb-4872-ab71-60a664ee5566) May 15 21:17:07.997: INFO: Lookups using dns-3128/dns-test-33e7da82-79bb-4872-ab71-60a664ee5566 failed for: [wheezy_udp@dns-test-service.dns-3128.svc.cluster.local wheezy_tcp@dns-test-service.dns-3128.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3128.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3128.svc.cluster.local jessie_udp@dns-test-service.dns-3128.svc.cluster.local jessie_tcp@dns-test-service.dns-3128.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3128.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3128.svc.cluster.local] May 15 
21:17:12.953: INFO: Unable to read wheezy_udp@dns-test-service.dns-3128.svc.cluster.local from pod dns-3128/dns-test-33e7da82-79bb-4872-ab71-60a664ee5566: the server could not find the requested resource (get pods dns-test-33e7da82-79bb-4872-ab71-60a664ee5566) May 15 21:17:12.957: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3128.svc.cluster.local from pod dns-3128/dns-test-33e7da82-79bb-4872-ab71-60a664ee5566: the server could not find the requested resource (get pods dns-test-33e7da82-79bb-4872-ab71-60a664ee5566) May 15 21:17:12.959: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3128.svc.cluster.local from pod dns-3128/dns-test-33e7da82-79bb-4872-ab71-60a664ee5566: the server could not find the requested resource (get pods dns-test-33e7da82-79bb-4872-ab71-60a664ee5566) May 15 21:17:12.962: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3128.svc.cluster.local from pod dns-3128/dns-test-33e7da82-79bb-4872-ab71-60a664ee5566: the server could not find the requested resource (get pods dns-test-33e7da82-79bb-4872-ab71-60a664ee5566) May 15 21:17:12.985: INFO: Unable to read jessie_udp@dns-test-service.dns-3128.svc.cluster.local from pod dns-3128/dns-test-33e7da82-79bb-4872-ab71-60a664ee5566: the server could not find the requested resource (get pods dns-test-33e7da82-79bb-4872-ab71-60a664ee5566) May 15 21:17:12.988: INFO: Unable to read jessie_tcp@dns-test-service.dns-3128.svc.cluster.local from pod dns-3128/dns-test-33e7da82-79bb-4872-ab71-60a664ee5566: the server could not find the requested resource (get pods dns-test-33e7da82-79bb-4872-ab71-60a664ee5566) May 15 21:17:12.990: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3128.svc.cluster.local from pod dns-3128/dns-test-33e7da82-79bb-4872-ab71-60a664ee5566: the server could not find the requested resource (get pods dns-test-33e7da82-79bb-4872-ab71-60a664ee5566) May 15 21:17:12.993: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3128.svc.cluster.local from pod dns-3128/dns-test-33e7da82-79bb-4872-ab71-60a664ee5566: the server could not find the requested resource (get pods dns-test-33e7da82-79bb-4872-ab71-60a664ee5566) May 15 21:17:13.010: INFO: Lookups using dns-3128/dns-test-33e7da82-79bb-4872-ab71-60a664ee5566 failed for: [wheezy_udp@dns-test-service.dns-3128.svc.cluster.local wheezy_tcp@dns-test-service.dns-3128.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3128.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3128.svc.cluster.local jessie_udp@dns-test-service.dns-3128.svc.cluster.local jessie_tcp@dns-test-service.dns-3128.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3128.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3128.svc.cluster.local] May 15 21:17:17.994: INFO: DNS probes using dns-3128/dns-test-33e7da82-79bb-4872-ab71-60a664ee5566 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:17:19.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3128" for this suite. 
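The repeated "Unable to read ..." entries above are the normal retry phase: the prober re-issues every lookup on a roughly 5-second cadence (21:16:47, :52, :57, ...) until the headless service's records propagate, and at 21:17:17 every name resolves. The same queries can be replayed by hand from any pod that ships dig; the pod name below is a placeholder, while the record names are the ones probed above:

kubectl exec -n dns-3128 <probe-pod> -- \
  dig +notcp +noall +answer +search dns-test-service.dns-3128.svc.cluster.local A
kubectl exec -n dns-3128 <probe-pod> -- \
  dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3128.svc.cluster.local SRV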
• [SLOW TEST:39.412 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":278,"completed":34,"skipped":535,"failed":0} SSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:17:19.064: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-6147 STEP: creating a selector STEP: Creating the service pods in kubernetes May 15 21:17:19.236: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 15 21:17:45.428: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.2 8081 | grep -v '^\s*$'] Namespace:pod-network-test-6147 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 15 21:17:45.428: INFO: >>> kubeConfig: /root/.kube/config I0515 21:17:45.453875 6 log.go:172] (0xc002432d10) (0xc00190c960) Create stream I0515 21:17:45.453902 6 log.go:172] (0xc002432d10) (0xc00190c960) Stream added, broadcasting: 1 I0515 21:17:45.455139 6 log.go:172] (0xc002432d10) Reply frame received for 1 I0515 21:17:45.455166 6 log.go:172] (0xc002432d10) (0xc001980500) Create stream I0515 21:17:45.455173 6 log.go:172] (0xc002432d10) (0xc001980500) Stream added, broadcasting: 3 I0515 21:17:45.455858 6 log.go:172] (0xc002432d10) Reply frame received for 3 I0515 21:17:45.455888 6 log.go:172] (0xc002432d10) (0xc0019805a0) Create stream I0515 21:17:45.455903 6 log.go:172] (0xc002432d10) (0xc0019805a0) Stream added, broadcasting: 5 I0515 21:17:45.456626 6 log.go:172] (0xc002432d10) Reply frame received for 5 I0515 21:17:46.534491 6 log.go:172] (0xc002432d10) Data frame received for 5 I0515 21:17:46.534537 6 log.go:172] (0xc0019805a0) (5) Data frame handling I0515 21:17:46.534563 6 log.go:172] (0xc002432d10) Data frame received for 3 I0515 21:17:46.534576 6 log.go:172] (0xc001980500) (3) Data frame handling I0515 21:17:46.534595 6 log.go:172] (0xc001980500) (3) Data frame sent I0515 21:17:46.534630 6 log.go:172] (0xc002432d10) Data frame received for 3 I0515 21:17:46.534658 6 log.go:172] (0xc001980500) (3) Data frame handling I0515 21:17:46.535984 6 log.go:172] (0xc002432d10) Data frame received for 1 I0515 21:17:46.535997 6 log.go:172] (0xc00190c960) (1) Data frame handling I0515 21:17:46.536006 6 log.go:172] (0xc00190c960) (1) Data frame sent I0515 21:17:46.536018 6 log.go:172] (0xc002432d10) (0xc00190c960) Stream removed, broadcasting: 1 I0515 21:17:46.536070 6 log.go:172] 
(0xc002432d10) (0xc00190c960) Stream removed, broadcasting: 1 I0515 21:17:46.536081 6 log.go:172] (0xc002432d10) (0xc001980500) Stream removed, broadcasting: 3 I0515 21:17:46.536144 6 log.go:172] (0xc002432d10) Go away received I0515 21:17:46.536174 6 log.go:172] (0xc002432d10) (0xc0019805a0) Stream removed, broadcasting: 5 May 15 21:17:46.536: INFO: Found all expected endpoints: [netserver-0] May 15 21:17:46.538: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.96 8081 | grep -v '^\s*$'] Namespace:pod-network-test-6147 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 15 21:17:46.538: INFO: >>> kubeConfig: /root/.kube/config I0515 21:17:46.566834 6 log.go:172] (0xc0011a0840) (0xc001ae5ea0) Create stream I0515 21:17:46.566855 6 log.go:172] (0xc0011a0840) (0xc001ae5ea0) Stream added, broadcasting: 1 I0515 21:17:46.568190 6 log.go:172] (0xc0011a0840) Reply frame received for 1 I0515 21:17:46.568214 6 log.go:172] (0xc0011a0840) (0xc001980640) Create stream I0515 21:17:46.568225 6 log.go:172] (0xc0011a0840) (0xc001980640) Stream added, broadcasting: 3 I0515 21:17:46.569024 6 log.go:172] (0xc0011a0840) Reply frame received for 3 I0515 21:17:46.569053 6 log.go:172] (0xc0011a0840) (0xc002951ea0) Create stream I0515 21:17:46.569070 6 log.go:172] (0xc0011a0840) (0xc002951ea0) Stream added, broadcasting: 5 I0515 21:17:46.570064 6 log.go:172] (0xc0011a0840) Reply frame received for 5 I0515 21:17:47.650753 6 log.go:172] (0xc0011a0840) Data frame received for 3 I0515 21:17:47.650799 6 log.go:172] (0xc001980640) (3) Data frame handling I0515 21:17:47.650837 6 log.go:172] (0xc001980640) (3) Data frame sent I0515 21:17:47.651079 6 log.go:172] (0xc0011a0840) Data frame received for 5 I0515 21:17:47.651100 6 log.go:172] (0xc002951ea0) (5) Data frame handling I0515 21:17:47.651143 6 log.go:172] (0xc0011a0840) Data frame received for 3 I0515 21:17:47.651173 6 log.go:172] (0xc001980640) (3) Data frame handling I0515 21:17:47.653769 6 log.go:172] (0xc0011a0840) Data frame received for 1 I0515 21:17:47.653779 6 log.go:172] (0xc001ae5ea0) (1) Data frame handling I0515 21:17:47.653785 6 log.go:172] (0xc001ae5ea0) (1) Data frame sent I0515 21:17:47.653949 6 log.go:172] (0xc0011a0840) (0xc001ae5ea0) Stream removed, broadcasting: 1 I0515 21:17:47.654010 6 log.go:172] (0xc0011a0840) Go away received I0515 21:17:47.654076 6 log.go:172] (0xc0011a0840) (0xc001ae5ea0) Stream removed, broadcasting: 1 I0515 21:17:47.654124 6 log.go:172] (0xc0011a0840) (0xc001980640) Stream removed, broadcasting: 3 I0515 21:17:47.654158 6 log.go:172] (0xc0011a0840) (0xc002951ea0) Stream removed, broadcasting: 5 May 15 21:17:47.654: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:17:47.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-6147" for this suite. 
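Each ExecWithOptions entry above wraps a one-line UDP probe: echo a token at a netserver pod on port 8081 and expect a non-empty reply. Replayed by hand with the pod names and IP from this run (both are run-specific values):

kubectl exec -n pod-network-test-6147 host-test-container-pod -c agnhost -- \
  /bin/sh -c "echo hostName | nc -w 1 -u 10.244.1.2 8081 | grep -v '^\s*$'"
# A hostname echoed back means that netserver endpoint is reachable over UDP.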
• [SLOW TEST:28.600 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":35,"skipped":538,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:17:47.664: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 May 15 21:17:47.777: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 15 21:17:47.803: INFO: Waiting for terminating namespaces to be deleted... May 15 21:17:47.805: INFO: Logging pods the kubelet thinks is on node jerma-worker before test May 15 21:17:47.809: INFO: netserver-0 from pod-network-test-6147 started at 2020-05-15 21:17:19 +0000 UTC (1 container statuses recorded) May 15 21:17:47.809: INFO: Container webserver ready: true, restart count 0 May 15 21:17:47.809: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 15 21:17:47.809: INFO: Container kindnet-cni ready: true, restart count 0 May 15 21:17:47.809: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 15 21:17:47.809: INFO: Container kube-proxy ready: true, restart count 0 May 15 21:17:47.809: INFO: test-container-pod from pod-network-test-6147 started at 2020-05-15 21:17:41 +0000 UTC (1 container statuses recorded) May 15 21:17:47.810: INFO: Container webserver ready: true, restart count 0 May 15 21:17:47.810: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test May 15 21:17:47.850: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) May 15 21:17:47.850: INFO: Container kube-hunter ready: false, restart count 0 May 15 21:17:47.850: INFO: netserver-1 from pod-network-test-6147 started at 2020-05-15 21:17:19 +0000 UTC (1 container statuses recorded) May 15 21:17:47.850: INFO: Container webserver ready: true, restart count 0 May 15 21:17:47.850: INFO: host-test-container-pod from pod-network-test-6147 started at 2020-05-15 21:17:41 +0000 UTC (1 container statuses recorded) May 15 21:17:47.850: INFO: Container agnhost ready: true, restart count 0 May 15 21:17:47.850: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 15 21:17:47.850: INFO: Container kindnet-cni ready: true, restart 
count 0 May 15 21:17:47.850: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) May 15 21:17:47.850: INFO: Container kube-bench ready: false, restart count 0 May 15 21:17:47.850: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 15 21:17:47.850: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-d07b4292-3b12-4a26-8e28-0c90a6f89a04 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-d07b4292-3b12-4a26-8e28-0c90a6f89a04 off the node jerma-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-d07b4292-3b12-4a26-8e28-0c90a6f89a04 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:17:58.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7559" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:10.428 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":278,"completed":36,"skipped":550,"failed":0} SSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:17:58.092: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change May 15 21:18:05.240: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:18:06.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-3585" for 
this suite. • [SLOW TEST:8.178 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":278,"completed":37,"skipped":554,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:18:06.272: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 15 21:18:07.799: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 15 21:18:09.878: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725174287, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725174287, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725174287, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725174287, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 15 21:18:13.091: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook May 15 21:18:17.133: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-6847 to-be-attached-pod -i -c=container1' May 15 21:18:17.263: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:18:17.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6847" for this suite. STEP: Destroying namespace "webhook-6847-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:11.128 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":278,"completed":38,"skipped":650,"failed":0} [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:18:17.400: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-3e3d2e65-f989-4bff-bb29-9c993ab6ca53 STEP: Creating a pod to test consume configMaps May 15 21:18:17.508: INFO: Waiting up to 5m0s for pod "pod-configmaps-377d4271-e6f6-4040-af27-227f9b4eb0b6" in namespace "configmap-2403" to be "success or failure" May 15 21:18:17.518: INFO: Pod "pod-configmaps-377d4271-e6f6-4040-af27-227f9b4eb0b6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.058954ms May 15 21:18:19.540: INFO: Pod "pod-configmaps-377d4271-e6f6-4040-af27-227f9b4eb0b6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03254227s May 15 21:18:21.545: INFO: Pod "pod-configmaps-377d4271-e6f6-4040-af27-227f9b4eb0b6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037481451s STEP: Saw pod success May 15 21:18:21.545: INFO: Pod "pod-configmaps-377d4271-e6f6-4040-af27-227f9b4eb0b6" satisfied condition "success or failure" May 15 21:18:21.549: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-377d4271-e6f6-4040-af27-227f9b4eb0b6 container configmap-volume-test: STEP: delete the pod May 15 21:18:21.585: INFO: Waiting for pod pod-configmaps-377d4271-e6f6-4040-af27-227f9b4eb0b6 to disappear May 15 21:18:21.590: INFO: Pod pod-configmaps-377d4271-e6f6-4040-af27-227f9b4eb0b6 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:18:21.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2403" for this suite. 
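The "Item mode set" wording in the spec name refers to the per-item mode field: each mapped key can carry its own file permissions inside the volume. A hand-run sketch with hypothetical names and a 0400 mode:

kubectl create configmap configmap-mode-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-item-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "ls -l /etc/configmap-volume/path/to/data-1 && cat /etc/configmap-volume/path/to/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-mode-demo
      items:
      - key: data-1
        path: path/to/data-1
        mode: 0400    # per-item permission bits for the projected file
EOF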
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":39,"skipped":650,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:18:21.597: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium May 15 21:18:21.671: INFO: Waiting up to 5m0s for pod "pod-15b64dd6-6b31-47cb-aa1e-d6187bcaf40b" in namespace "emptydir-4922" to be "success or failure" May 15 21:18:21.680: INFO: Pod "pod-15b64dd6-6b31-47cb-aa1e-d6187bcaf40b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.936907ms May 15 21:18:23.751: INFO: Pod "pod-15b64dd6-6b31-47cb-aa1e-d6187bcaf40b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079178271s May 15 21:18:25.753: INFO: Pod "pod-15b64dd6-6b31-47cb-aa1e-d6187bcaf40b": Phase="Running", Reason="", readiness=true. Elapsed: 4.081904521s May 15 21:18:27.756: INFO: Pod "pod-15b64dd6-6b31-47cb-aa1e-d6187bcaf40b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.084672236s STEP: Saw pod success May 15 21:18:27.756: INFO: Pod "pod-15b64dd6-6b31-47cb-aa1e-d6187bcaf40b" satisfied condition "success or failure" May 15 21:18:27.758: INFO: Trying to get logs from node jerma-worker pod pod-15b64dd6-6b31-47cb-aa1e-d6187bcaf40b container test-container: STEP: delete the pod May 15 21:18:27.791: INFO: Waiting for pod pod-15b64dd6-6b31-47cb-aa1e-d6187bcaf40b to disappear May 15 21:18:27.803: INFO: Pod pod-15b64dd6-6b31-47cb-aa1e-d6187bcaf40b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:18:27.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4922" for this suite. 
• [SLOW TEST:6.213 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":40,"skipped":672,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:18:27.810: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: executing a command with run --rm and attach with stdin May 15 21:18:27.896: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6075 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' May 15 21:18:30.621: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0515 21:18:30.539104 273 log.go:172] (0xc000a5aa50) (0xc000705c20) Create stream\nI0515 21:18:30.539190 273 log.go:172] (0xc000a5aa50) (0xc000705c20) Stream added, broadcasting: 1\nI0515 21:18:30.541991 273 log.go:172] (0xc000a5aa50) Reply frame received for 1\nI0515 21:18:30.542047 273 log.go:172] (0xc000a5aa50) (0xc000694000) Create stream\nI0515 21:18:30.542064 273 log.go:172] (0xc000a5aa50) (0xc000694000) Stream added, broadcasting: 3\nI0515 21:18:30.542974 273 log.go:172] (0xc000a5aa50) Reply frame received for 3\nI0515 21:18:30.543045 273 log.go:172] (0xc000a5aa50) (0xc0006940a0) Create stream\nI0515 21:18:30.543062 273 log.go:172] (0xc000a5aa50) (0xc0006940a0) Stream added, broadcasting: 5\nI0515 21:18:30.544125 273 log.go:172] (0xc000a5aa50) Reply frame received for 5\nI0515 21:18:30.544173 273 log.go:172] (0xc000a5aa50) (0xc000705cc0) Create stream\nI0515 21:18:30.544196 273 log.go:172] (0xc000a5aa50) (0xc000705cc0) Stream added, broadcasting: 7\nI0515 21:18:30.545404 273 log.go:172] (0xc000a5aa50) Reply frame received for 7\nI0515 21:18:30.545584 273 log.go:172] (0xc000694000) (3) Writing data frame\nI0515 21:18:30.545699 273 log.go:172] (0xc000694000) (3) Writing data frame\nI0515 21:18:30.546553 273 log.go:172] (0xc000a5aa50) Data frame received for 5\nI0515 21:18:30.546569 273 log.go:172] (0xc0006940a0) (5) Data frame handling\nI0515 21:18:30.546595 273 log.go:172] (0xc0006940a0) (5) Data frame sent\nI0515 21:18:30.547093 273 log.go:172] (0xc000a5aa50) Data frame received for 5\nI0515 21:18:30.547105 273 log.go:172] (0xc0006940a0) (5) Data frame handling\nI0515 21:18:30.547111 273 log.go:172] (0xc0006940a0) (5) Data frame sent\nI0515 21:18:30.587616 273 log.go:172] (0xc000a5aa50) Data frame received for 5\nI0515 21:18:30.587666 273 log.go:172] (0xc0006940a0) (5) Data frame handling\nI0515 21:18:30.587841 273 log.go:172] (0xc000a5aa50) Data frame received for 7\nI0515 21:18:30.587880 273 log.go:172] (0xc000705cc0) (7) Data frame handling\nI0515 21:18:30.588165 273 log.go:172] (0xc000a5aa50) Data frame received for 1\nI0515 21:18:30.588179 273 log.go:172] (0xc000705c20) (1) Data frame handling\nI0515 21:18:30.588185 273 log.go:172] (0xc000705c20) (1) Data frame sent\nI0515 21:18:30.588193 273 log.go:172] (0xc000a5aa50) (0xc000705c20) Stream removed, broadcasting: 1\nI0515 21:18:30.588300 273 log.go:172] (0xc000a5aa50) (0xc000694000) Stream removed, broadcasting: 3\nI0515 21:18:30.588373 273 log.go:172] (0xc000a5aa50) Go away received\nI0515 21:18:30.588494 273 log.go:172] (0xc000a5aa50) (0xc000705c20) Stream removed, broadcasting: 1\nI0515 21:18:30.588551 273 log.go:172] (0xc000a5aa50) (0xc000694000) Stream removed, broadcasting: 3\nI0515 21:18:30.588569 273 log.go:172] (0xc000a5aa50) (0xc0006940a0) Stream removed, broadcasting: 5\nI0515 21:18:30.588578 273 log.go:172] (0xc000a5aa50) (0xc000705cc0) Stream removed, broadcasting: 7\n" May 15 21:18:30.621: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:18:32.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6075" for this suite. 
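The kubectl invocation above is taken from the run; on this v1.17 client the job/v1 generator still works but already warns about its removal. A present-day rough equivalent drops the generator (newer kubectl run creates a bare pod rather than a Job), so treat the second form as an approximation, not a like-for-like replacement:

# As executed by the test (deprecated generator form):
kubectl --namespace=kubectl-6075 run e2e-test-rm-busybox-job \
  --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 \
  --restart=OnFailure --attach=true --stdin -- sh -c 'cat && echo "stdin closed"'
# Rough modern equivalent (creates a pod, removed again once attach ends):
kubectl run e2e-test-rm-busybox --image=docker.io/library/busybox:1.29 \
  --rm --restart=Never --attach --stdin -- sh -c 'cat && echo "stdin closed"'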
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance]","total":278,"completed":41,"skipped":686,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:18:32.636: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 15 21:18:32.747: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:18:33.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-9255" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":278,"completed":42,"skipped":689,"failed":0} SS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:18:33.966: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod May 15 21:18:38.070: INFO: &Pod{ObjectMeta:{send-events-52fdacec-a7e3-4089-a62a-6271b8057edc events-9660 /api/v1/namespaces/events-9660/pods/send-events-52fdacec-a7e3-4089-a62a-6271b8057edc d0333cb0-625a-4155-8b87-8c249ff6a4ff 16466673 0 2020-05-15 21:18:34 +0000 UTC map[name:foo time:33508323] map[] [] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xsl4l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xsl4l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xsl4l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:18:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:18:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:18:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:18:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.100,StartTime:2020-05-15 21:18:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-15 21:18:36 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://ff6f4eaec91debc2f816b1431d959bfb4086e489db097e4689e42d27a7a71f4e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.100,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod May 15 21:18:40.074: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod May 15 21:18:42.078: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:18:42.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-9660" for this suite. • [SLOW TEST:8.179 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":278,"completed":43,"skipped":691,"failed":0} SS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:18:42.146: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-86a14be7-7935-47dc-981c-4c170cd00dce STEP: Creating a pod to test consume secrets May 15 21:18:42.238: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-25b684fa-42e9-49c2-94cc-36efef06bacc" in namespace "projected-9396" to be "success or failure" May 15 21:18:42.241: INFO: Pod "pod-projected-secrets-25b684fa-42e9-49c2-94cc-36efef06bacc": Phase="Pending", Reason="", readiness=false. Elapsed: 3.682708ms May 15 21:18:44.246: INFO: Pod "pod-projected-secrets-25b684fa-42e9-49c2-94cc-36efef06bacc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007974566s May 15 21:18:46.250: INFO: Pod "pod-projected-secrets-25b684fa-42e9-49c2-94cc-36efef06bacc": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012489537s STEP: Saw pod success May 15 21:18:46.250: INFO: Pod "pod-projected-secrets-25b684fa-42e9-49c2-94cc-36efef06bacc" satisfied condition "success or failure" May 15 21:18:46.254: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-25b684fa-42e9-49c2-94cc-36efef06bacc container projected-secret-volume-test: STEP: delete the pod May 15 21:18:46.296: INFO: Waiting for pod pod-projected-secrets-25b684fa-42e9-49c2-94cc-36efef06bacc to disappear May 15 21:18:46.313: INFO: Pod pod-projected-secrets-25b684fa-42e9-49c2-94cc-36efef06bacc no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:18:46.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9396" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":44,"skipped":693,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:18:46.322: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 May 15 21:18:46.393: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the sample API server. 
May 15 21:18:46.978: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set May 15 21:18:49.566: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725174326, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725174326, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725174327, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725174326, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 15 21:18:51.682: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725174326, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725174326, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725174327, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725174326, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 15 21:18:54.199: INFO: Waited 622.772237ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:18:54.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-7844" for this suite. 
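The "Registering the sample API server" step above amounts to creating an APIService object that tells the aggregation layer to proxy a group/version to an in-cluster Service. A sketch of that registration with the kube-aggregator client is below; the group, Service name, and priorities are illustrative (the real test pins a CABundle rather than skipping TLS verification), and the context-less client calls assume the v0.17-era client libraries.

package main

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
	apiregistrationv1 "k8s.io/kube-aggregator/pkg/apis/apiregistration/v1"
	aggregatorclient "k8s.io/kube-aggregator/pkg/client/clientset_generated/clientset"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := aggregatorclient.NewForConfigOrDie(cfg)

	port := int32(443)
	apiService := &apiregistrationv1.APIService{
		// Hypothetical group/version; the suite registers its own sample group.
		ObjectMeta: metav1.ObjectMeta{Name: "v1alpha1.wardle.example.com"},
		Spec: apiregistrationv1.APIServiceSpec{
			Group:   "wardle.example.com",
			Version: "v1alpha1",
			Service: &apiregistrationv1.ServiceReference{
				Namespace: "aggregator-7844", // namespace from the run above
				Name:      "sample-api",      // hypothetical Service name
				Port:      &port,
			},
			InsecureSkipTLSVerify: true, // the real test supplies a CABundle instead
			GroupPriorityMinimum:  2000,
			VersionPriority:       200,
		},
	}
	if _, err := client.ApiregistrationV1().APIServices().Create(apiService); err != nil {
		panic(err)
	}
}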
• [SLOW TEST:8.760 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]","total":278,"completed":45,"skipped":701,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:18:55.084: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0515 21:19:05.489837 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 15 21:19:05.489: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:19:05.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9113" for this suite. 
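The "delete the rc ... wait for all pods to be garbage collected" sequence above hinges on the delete propagation policy. A minimal sketch of the non-orphaning delete, assuming the v0.17-era context-less client-go API and a hypothetical RC name, with the namespace taken from the log:

package main

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// "Not orphaning": a Background (or Foreground) propagation policy tells
	// the garbage collector to delete the pods the RC owns; Orphan would
	// leave them running with their ownerReferences cleared.
	policy := metav1.DeletePropagationBackground
	if err := cs.CoreV1().ReplicationControllers("gc-9113").Delete(
		"simpletest.rc", // hypothetical RC name
		&metav1.DeleteOptions{PropagationPolicy: &policy},
	); err != nil {
		panic(err)
	}
}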
• [SLOW TEST:10.411 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":278,"completed":46,"skipped":783,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:19:05.495: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:19:05.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1856" for this suite. 
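The shape of the check above: a pod whose command always fails keeps cycling through crash restarts, and deleting it must still succeed promptly. A sketch under the same v0.17-era API assumptions, with all names and the image choice illustrative:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteCrashingPod creates an always-failing pod, then deletes it with a
// zero grace period, which is the property this test verifies.
func deleteCrashingPod(cs kubernetes.Interface, ns string) error {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "bin-false-pod"}, // hypothetical name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			Containers: []corev1.Container{{
				Name:    "bin-false",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"/bin/false"}, // always exits non-zero
			}},
		},
	}
	if _, err := cs.CoreV1().Pods(ns).Create(pod); err != nil {
		return err
	}
	return cs.CoreV1().Pods(ns).Delete(pod.Name, metav1.NewDeleteOptions(0))
}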
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":278,"completed":47,"skipped":820,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:19:05.816: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-9e96bd00-50ea-4fb6-8829-4cf59317e501 in namespace container-probe-9222 May 15 21:19:10.073: INFO: Started pod busybox-9e96bd00-50ea-4fb6-8829-4cf59317e501 in namespace container-probe-9222 STEP: checking the pod's current state and verifying that restartCount is present May 15 21:19:10.076: INFO: Initial restart count of pod busybox-9e96bd00-50ea-4fb6-8829-4cf59317e501 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:23:10.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9222" for this suite. 
• [SLOW TEST:245.043 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":48,"skipped":870,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:23:10.860: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 15 21:23:11.584: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 15 21:23:13.740: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725174591, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725174591, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725174591, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725174591, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 15 21:23:15.744: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725174591, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725174591, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725174591, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725174591, 
loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 15 21:23:18.800: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:23:18.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8403" for this suite. STEP: Destroying namespace "webhook-8403-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.193 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":278,"completed":49,"skipped":880,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:23:19.054: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium May 15 21:23:19.350: INFO: Waiting up to 5m0s for pod "pod-57abd8c4-a7f4-451d-9703-fbb7f7c45b63" in namespace "emptydir-614" to be "success or failure" May 15 21:23:19.378: INFO: Pod "pod-57abd8c4-a7f4-451d-9703-fbb7f7c45b63": Phase="Pending", Reason="", readiness=false. 
Elapsed: 28.283948ms May 15 21:23:21.382: INFO: Pod "pod-57abd8c4-a7f4-451d-9703-fbb7f7c45b63": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032133012s May 15 21:23:23.387: INFO: Pod "pod-57abd8c4-a7f4-451d-9703-fbb7f7c45b63": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037128118s STEP: Saw pod success May 15 21:23:23.387: INFO: Pod "pod-57abd8c4-a7f4-451d-9703-fbb7f7c45b63" satisfied condition "success or failure" May 15 21:23:23.390: INFO: Trying to get logs from node jerma-worker2 pod pod-57abd8c4-a7f4-451d-9703-fbb7f7c45b63 container test-container: STEP: delete the pod May 15 21:23:23.445: INFO: Waiting for pod pod-57abd8c4-a7f4-451d-9703-fbb7f7c45b63 to disappear May 15 21:23:23.447: INFO: Pod pod-57abd8c4-a7f4-451d-9703-fbb7f7c45b63 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:23:23.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-614" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":50,"skipped":936,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:23:23.453: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:23:36.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6475" for this suite. • [SLOW TEST:13.197 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance]","total":278,"completed":51,"skipped":944,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:23:36.651: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 15 21:23:36.753: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 15 21:23:38.645: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1123 create -f -' May 15 21:23:42.014: INFO: stderr: "" May 15 21:23:42.014: INFO: stdout: "e2e-test-crd-publish-openapi-9495-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" May 15 21:23:42.014: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1123 delete e2e-test-crd-publish-openapi-9495-crds test-cr' May 15 21:23:42.227: INFO: stderr: "" May 15 21:23:42.227: INFO: stdout: "e2e-test-crd-publish-openapi-9495-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" May 15 21:23:42.227: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1123 apply -f -' May 15 21:23:42.480: INFO: stderr: "" May 15 21:23:42.480: INFO: stdout: "e2e-test-crd-publish-openapi-9495-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" May 15 21:23:42.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1123 delete e2e-test-crd-publish-openapi-9495-crds test-cr' May 15 21:23:42.593: INFO: stderr: "" May 15 21:23:42.593: INFO: stdout: "e2e-test-crd-publish-openapi-9495-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema May 15 21:23:42.593: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9495-crds' May 15 21:23:42.816: INFO: stderr: "" May 15 21:23:42.816: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9495-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:23:45.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1123" for this suite. 
• [SLOW TEST:9.059 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":278,"completed":52,"skipped":984,"failed":0} SSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:23:45.710: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 15 21:23:45.836: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:23:49.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-294" for this suite. 
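For contrast with the websocket path this test verifies, the everyday way to read the same log data through client-go is the GetLogs request below; a minimal sketch, assuming the v0.17-era Stream() signature without a context argument:

package main

import (
	"io"
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
)

// streamLogs copies one container's logs to stdout; the conformance test
// above checks the same data is retrievable over a websocket connection.
func streamLogs(cs kubernetes.Interface, ns, pod string) error {
	req := cs.CoreV1().Pods(ns).GetLogs(pod, &corev1.PodLogOptions{Follow: false})
	rc, err := req.Stream() // context-less in client-go v0.17
	if err != nil {
		return err
	}
	defer rc.Close()
	_, err = io.Copy(os.Stdout, rc)
	return err
}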
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":278,"completed":53,"skipped":989,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:23:49.969: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 15 21:23:50.632: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 15 21:23:52.642: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725174630, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725174630, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725174630, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725174630, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 15 21:23:54.645: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725174630, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725174630, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725174630, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725174630, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 15 21:23:57.719: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:23:57.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2197" for this suite. STEP: Destroying namespace "webhook-2197-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.905 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":278,"completed":54,"skipped":991,"failed":0} S ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:23:57.875: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 15 21:23:58.088: INFO: Waiting up to 5m0s for pod "downwardapi-volume-09b9e1f8-ed00-4b8e-9233-41035bee6ca5" in namespace "downward-api-5421" to be "success or failure" May 15 21:23:58.252: INFO: Pod "downwardapi-volume-09b9e1f8-ed00-4b8e-9233-41035bee6ca5": Phase="Pending", Reason="", readiness=false. Elapsed: 164.788008ms May 15 21:24:00.256: INFO: Pod "downwardapi-volume-09b9e1f8-ed00-4b8e-9233-41035bee6ca5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.167903233s May 15 21:24:02.260: INFO: Pod "downwardapi-volume-09b9e1f8-ed00-4b8e-9233-41035bee6ca5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.172399397s STEP: Saw pod success May 15 21:24:02.260: INFO: Pod "downwardapi-volume-09b9e1f8-ed00-4b8e-9233-41035bee6ca5" satisfied condition "success or failure" May 15 21:24:02.264: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-09b9e1f8-ed00-4b8e-9233-41035bee6ca5 container client-container: STEP: delete the pod May 15 21:24:02.455: INFO: Waiting for pod downwardapi-volume-09b9e1f8-ed00-4b8e-9233-41035bee6ca5 to disappear May 15 21:24:02.521: INFO: Pod downwardapi-volume-09b9e1f8-ed00-4b8e-9233-41035bee6ca5 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:24:02.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5421" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":55,"skipped":992,"failed":0} SS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:24:02.529: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-233 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-233 I0515 21:24:02.747750 6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-233, replica count: 2 I0515 21:24:05.798163 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0515 21:24:08.798405 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 15 21:24:08.798: INFO: Creating new exec pod May 15 21:24:15.831: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-233 execpod2p7t2 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' May 15 21:24:16.126: INFO: stderr: "I0515 21:24:15.978037 407 log.go:172] (0xc0008d8580) (0xc0005488c0) Create stream\nI0515 21:24:15.978105 407 log.go:172] (0xc0008d8580) (0xc0005488c0) Stream added, broadcasting: 1\nI0515 21:24:15.980970 407 log.go:172] (0xc0008d8580) Reply frame received for 1\nI0515 21:24:15.981021 407 log.go:172] (0xc0008d8580) (0xc00088a000) Create stream\nI0515 21:24:15.981036 407 log.go:172] (0xc0008d8580) (0xc00088a000) Stream added, broadcasting: 3\nI0515 21:24:15.982159 407 log.go:172] (0xc0008d8580) Reply frame received for 3\nI0515 21:24:15.982210 407 log.go:172] (0xc0008d8580) (0xc000726be0) 
Create stream\nI0515 21:24:15.982232 407 log.go:172] (0xc0008d8580) (0xc000726be0) Stream added, broadcasting: 5\nI0515 21:24:15.983168 407 log.go:172] (0xc0008d8580) Reply frame received for 5\nI0515 21:24:16.103883 407 log.go:172] (0xc0008d8580) Data frame received for 5\nI0515 21:24:16.103912 407 log.go:172] (0xc000726be0) (5) Data frame handling\nI0515 21:24:16.103938 407 log.go:172] (0xc000726be0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0515 21:24:16.118411 407 log.go:172] (0xc0008d8580) Data frame received for 5\nI0515 21:24:16.118433 407 log.go:172] (0xc000726be0) (5) Data frame handling\nI0515 21:24:16.118450 407 log.go:172] (0xc000726be0) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0515 21:24:16.118782 407 log.go:172] (0xc0008d8580) Data frame received for 3\nI0515 21:24:16.118819 407 log.go:172] (0xc00088a000) (3) Data frame handling\nI0515 21:24:16.118987 407 log.go:172] (0xc0008d8580) Data frame received for 5\nI0515 21:24:16.119012 407 log.go:172] (0xc000726be0) (5) Data frame handling\nI0515 21:24:16.120879 407 log.go:172] (0xc0008d8580) Data frame received for 1\nI0515 21:24:16.120943 407 log.go:172] (0xc0005488c0) (1) Data frame handling\nI0515 21:24:16.120968 407 log.go:172] (0xc0005488c0) (1) Data frame sent\nI0515 21:24:16.121029 407 log.go:172] (0xc0008d8580) (0xc0005488c0) Stream removed, broadcasting: 1\nI0515 21:24:16.121093 407 log.go:172] (0xc0008d8580) Go away received\nI0515 21:24:16.121626 407 log.go:172] (0xc0008d8580) (0xc0005488c0) Stream removed, broadcasting: 1\nI0515 21:24:16.121648 407 log.go:172] (0xc0008d8580) (0xc00088a000) Stream removed, broadcasting: 3\nI0515 21:24:16.121660 407 log.go:172] (0xc0008d8580) (0xc000726be0) Stream removed, broadcasting: 5\n" May 15 21:24:16.126: INFO: stdout: "" May 15 21:24:16.127: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-233 execpod2p7t2 -- /bin/sh -x -c nc -zv -t -w 2 10.106.218.81 80' May 15 21:24:16.341: INFO: stderr: "I0515 21:24:16.268989 427 log.go:172] (0xc0009c0000) (0xc000737400) Create stream\nI0515 21:24:16.269064 427 log.go:172] (0xc0009c0000) (0xc000737400) Stream added, broadcasting: 1\nI0515 21:24:16.272166 427 log.go:172] (0xc0009c0000) Reply frame received for 1\nI0515 21:24:16.272214 427 log.go:172] (0xc0009c0000) (0xc0008bc000) Create stream\nI0515 21:24:16.272232 427 log.go:172] (0xc0009c0000) (0xc0008bc000) Stream added, broadcasting: 3\nI0515 21:24:16.273367 427 log.go:172] (0xc0009c0000) Reply frame received for 3\nI0515 21:24:16.273402 427 log.go:172] (0xc0009c0000) (0xc0006d39a0) Create stream\nI0515 21:24:16.273423 427 log.go:172] (0xc0009c0000) (0xc0006d39a0) Stream added, broadcasting: 5\nI0515 21:24:16.274403 427 log.go:172] (0xc0009c0000) Reply frame received for 5\nI0515 21:24:16.334035 427 log.go:172] (0xc0009c0000) Data frame received for 3\nI0515 21:24:16.334059 427 log.go:172] (0xc0008bc000) (3) Data frame handling\nI0515 21:24:16.334312 427 log.go:172] (0xc0009c0000) Data frame received for 5\nI0515 21:24:16.334333 427 log.go:172] (0xc0006d39a0) (5) Data frame handling\nI0515 21:24:16.334363 427 log.go:172] (0xc0006d39a0) (5) Data frame sent\n+ nc -zv -t -w 2 10.106.218.81 80\nConnection to 10.106.218.81 80 port [tcp/http] succeeded!\nI0515 21:24:16.334485 427 log.go:172] (0xc0009c0000) Data frame received for 5\nI0515 21:24:16.334506 427 log.go:172] (0xc0006d39a0) (5) Data frame handling\nI0515 21:24:16.336051 427 log.go:172] (0xc0009c0000) Data frame received 
for 1\nI0515 21:24:16.336072 427 log.go:172] (0xc000737400) (1) Data frame handling\nI0515 21:24:16.336099 427 log.go:172] (0xc000737400) (1) Data frame sent\nI0515 21:24:16.336130 427 log.go:172] (0xc0009c0000) (0xc000737400) Stream removed, broadcasting: 1\nI0515 21:24:16.336156 427 log.go:172] (0xc0009c0000) Go away received\nI0515 21:24:16.336533 427 log.go:172] (0xc0009c0000) (0xc000737400) Stream removed, broadcasting: 1\nI0515 21:24:16.336552 427 log.go:172] (0xc0009c0000) (0xc0008bc000) Stream removed, broadcasting: 3\nI0515 21:24:16.336562 427 log.go:172] (0xc0009c0000) (0xc0006d39a0) Stream removed, broadcasting: 5\n" May 15 21:24:16.341: INFO: stdout: "" May 15 21:24:16.341: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-233 execpod2p7t2 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.10 30688' May 15 21:24:16.607: INFO: stderr: "I0515 21:24:16.506346 447 log.go:172] (0xc000ae8630) (0xc00098e0a0) Create stream\nI0515 21:24:16.506421 447 log.go:172] (0xc000ae8630) (0xc00098e0a0) Stream added, broadcasting: 1\nI0515 21:24:16.508391 447 log.go:172] (0xc000ae8630) Reply frame received for 1\nI0515 21:24:16.508430 447 log.go:172] (0xc000ae8630) (0xc000798000) Create stream\nI0515 21:24:16.508446 447 log.go:172] (0xc000ae8630) (0xc000798000) Stream added, broadcasting: 3\nI0515 21:24:16.509804 447 log.go:172] (0xc000ae8630) Reply frame received for 3\nI0515 21:24:16.509840 447 log.go:172] (0xc000ae8630) (0xc0007980a0) Create stream\nI0515 21:24:16.509862 447 log.go:172] (0xc000ae8630) (0xc0007980a0) Stream added, broadcasting: 5\nI0515 21:24:16.510668 447 log.go:172] (0xc000ae8630) Reply frame received for 5\nI0515 21:24:16.594835 447 log.go:172] (0xc000ae8630) Data frame received for 5\nI0515 21:24:16.594875 447 log.go:172] (0xc0007980a0) (5) Data frame handling\nI0515 21:24:16.594888 447 log.go:172] (0xc0007980a0) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.10 30688\nI0515 21:24:16.595703 447 log.go:172] (0xc000ae8630) Data frame received for 5\nI0515 21:24:16.595739 447 log.go:172] (0xc0007980a0) (5) Data frame handling\nI0515 21:24:16.595765 447 log.go:172] (0xc0007980a0) (5) Data frame sent\nConnection to 172.17.0.10 30688 port [tcp/30688] succeeded!\nI0515 21:24:16.596173 447 log.go:172] (0xc000ae8630) Data frame received for 5\nI0515 21:24:16.596190 447 log.go:172] (0xc0007980a0) (5) Data frame handling\nI0515 21:24:16.596768 447 log.go:172] (0xc000ae8630) Data frame received for 3\nI0515 21:24:16.596782 447 log.go:172] (0xc000798000) (3) Data frame handling\nI0515 21:24:16.599963 447 log.go:172] (0xc000ae8630) Data frame received for 1\nI0515 21:24:16.599981 447 log.go:172] (0xc00098e0a0) (1) Data frame handling\nI0515 21:24:16.599994 447 log.go:172] (0xc00098e0a0) (1) Data frame sent\nI0515 21:24:16.601059 447 log.go:172] (0xc000ae8630) (0xc00098e0a0) Stream removed, broadcasting: 1\nI0515 21:24:16.601415 447 log.go:172] (0xc000ae8630) (0xc00098e0a0) Stream removed, broadcasting: 1\nI0515 21:24:16.601435 447 log.go:172] (0xc000ae8630) (0xc000798000) Stream removed, broadcasting: 3\nI0515 21:24:16.601989 447 log.go:172] (0xc000ae8630) (0xc0007980a0) Stream removed, broadcasting: 5\n" May 15 21:24:16.607: INFO: stdout: "" May 15 21:24:16.607: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-233 execpod2p7t2 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.8 30688' May 15 21:24:16.817: INFO: stderr: "I0515 21:24:16.726122 465 log.go:172] (0xc0001042c0) (0xc0005ae780) Create stream\nI0515 
21:24:16.726192 465 log.go:172] (0xc0001042c0) (0xc0005ae780) Stream added, broadcasting: 1\nI0515 21:24:16.729802 465 log.go:172] (0xc0001042c0) Reply frame received for 1\nI0515 21:24:16.729844 465 log.go:172] (0xc0001042c0) (0xc00043b540) Create stream\nI0515 21:24:16.729857 465 log.go:172] (0xc0001042c0) (0xc00043b540) Stream added, broadcasting: 3\nI0515 21:24:16.730916 465 log.go:172] (0xc0001042c0) Reply frame received for 3\nI0515 21:24:16.730964 465 log.go:172] (0xc0001042c0) (0xc00043b5e0) Create stream\nI0515 21:24:16.730985 465 log.go:172] (0xc0001042c0) (0xc00043b5e0) Stream added, broadcasting: 5\nI0515 21:24:16.732056 465 log.go:172] (0xc0001042c0) Reply frame received for 5\nI0515 21:24:16.810627 465 log.go:172] (0xc0001042c0) Data frame received for 5\nI0515 21:24:16.810691 465 log.go:172] (0xc00043b5e0) (5) Data frame handling\nI0515 21:24:16.810727 465 log.go:172] (0xc00043b5e0) (5) Data frame sent\nI0515 21:24:16.810762 465 log.go:172] (0xc0001042c0) Data frame received for 5\nI0515 21:24:16.810790 465 log.go:172] (0xc00043b5e0) (5) Data frame handling\nI0515 21:24:16.810810 465 log.go:172] (0xc0001042c0) Data frame received for 3\n+ nc -zv -t -w 2 172.17.0.8 30688\nConnection to 172.17.0.8 30688 port [tcp/30688] succeeded!\nI0515 21:24:16.810846 465 log.go:172] (0xc00043b540) (3) Data frame handling\nI0515 21:24:16.812231 465 log.go:172] (0xc0001042c0) Data frame received for 1\nI0515 21:24:16.812256 465 log.go:172] (0xc0005ae780) (1) Data frame handling\nI0515 21:24:16.812273 465 log.go:172] (0xc0005ae780) (1) Data frame sent\nI0515 21:24:16.812292 465 log.go:172] (0xc0001042c0) (0xc0005ae780) Stream removed, broadcasting: 1\nI0515 21:24:16.812318 465 log.go:172] (0xc0001042c0) Go away received\nI0515 21:24:16.812776 465 log.go:172] (0xc0001042c0) (0xc0005ae780) Stream removed, broadcasting: 1\nI0515 21:24:16.812799 465 log.go:172] (0xc0001042c0) (0xc00043b540) Stream removed, broadcasting: 3\nI0515 21:24:16.812811 465 log.go:172] (0xc0001042c0) (0xc00043b5e0) Stream removed, broadcasting: 5\n" May 15 21:24:16.817: INFO: stdout: "" May 15 21:24:16.817: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:24:16.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-233" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:14.332 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":278,"completed":56,"skipped":994,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:24:16.862: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs May 15 21:24:16.935: INFO: Waiting up to 5m0s for pod "pod-973c4e80-828f-440a-982a-8037c54cfdff" in namespace "emptydir-9812" to be "success or failure" May 15 21:24:16.948: INFO: Pod "pod-973c4e80-828f-440a-982a-8037c54cfdff": Phase="Pending", Reason="", readiness=false. Elapsed: 12.793315ms May 15 21:24:18.952: INFO: Pod "pod-973c4e80-828f-440a-982a-8037c54cfdff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016729456s May 15 21:24:20.957: INFO: Pod "pod-973c4e80-828f-440a-982a-8037c54cfdff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021639705s STEP: Saw pod success May 15 21:24:20.957: INFO: Pod "pod-973c4e80-828f-440a-982a-8037c54cfdff" satisfied condition "success or failure" May 15 21:24:20.960: INFO: Trying to get logs from node jerma-worker2 pod pod-973c4e80-828f-440a-982a-8037c54cfdff container test-container: STEP: delete the pod May 15 21:24:20.982: INFO: Waiting for pod pod-973c4e80-828f-440a-982a-8037c54cfdff to disappear May 15 21:24:20.987: INFO: Pod pod-973c4e80-828f-440a-982a-8037c54cfdff no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:24:20.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9812" for this suite. 
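
The EmptyDir test above starts a pod that mounts a memory-backed (tmpfs) emptyDir, writes a file with mode 0666 as a non-root user, and exits, after which the framework waits for the "success or failure" condition. The pod spec itself is not shown in the log; the following is a sketch of such a pod built with client-go types, where the image, UID, and command are placeholder assumptions rather than the suite's own test image and arguments.

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // emptyDirPod sketches the kind of pod the test creates: a tmpfs-backed
    // emptyDir mounted at /test-volume and written to by a non-root UID.
    func emptyDirPod() *corev1.Pod {
        nonRootUID := int64(1001) // assumption: any non-zero UID
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-0666-tmpfs"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:  "test-container",
                    Image: "busybox", // placeholder image
                    Command: []string{"sh", "-c",
                        "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f"},
                    SecurityContext: &corev1.SecurityContext{RunAsUser: &nonRootUID},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "test-volume",
                        MountPath: "/test-volume",
                    }},
                }},
                Volumes: []corev1.Volume{{
                    Name: "test-volume",
                    VolumeSource: corev1.VolumeSource{
                        // Medium "Memory" is what makes the emptyDir a tmpfs mount.
                        EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
                    },
                }},
            },
        }
    }

    func main() {
        fmt.Println(emptyDirPod().Name)
    }
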
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":57,"skipped":1024,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:24:20.996: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. May 15 21:24:21.254: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 21:24:21.257: INFO: Number of nodes with available pods: 0 May 15 21:24:21.257: INFO: Node jerma-worker is running more than one daemon pod May 15 21:24:22.350: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 21:24:22.352: INFO: Number of nodes with available pods: 0 May 15 21:24:22.352: INFO: Node jerma-worker is running more than one daemon pod May 15 21:24:23.331: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 21:24:23.411: INFO: Number of nodes with available pods: 0 May 15 21:24:23.411: INFO: Node jerma-worker is running more than one daemon pod May 15 21:24:24.589: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 21:24:24.978: INFO: Number of nodes with available pods: 0 May 15 21:24:24.978: INFO: Node jerma-worker is running more than one daemon pod May 15 21:24:25.401: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 21:24:25.403: INFO: Number of nodes with available pods: 0 May 15 21:24:25.403: INFO: Node jerma-worker is running more than one daemon pod May 15 21:24:26.325: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 21:24:26.328: INFO: Number of nodes with available pods: 0 May 15 21:24:26.328: INFO: Node jerma-worker is running more than one daemon pod May 15 21:24:27.262: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 21:24:27.265: INFO: Number of nodes with available pods: 1 May 15 21:24:27.265: INFO: Node jerma-worker2 is running more than 
one daemon pod May 15 21:24:28.262: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 21:24:28.266: INFO: Number of nodes with available pods: 2 May 15 21:24:28.266: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. May 15 21:24:28.294: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 21:24:28.332: INFO: Number of nodes with available pods: 1 May 15 21:24:28.332: INFO: Node jerma-worker is running more than one daemon pod May 15 21:24:29.494: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 21:24:29.502: INFO: Number of nodes with available pods: 1 May 15 21:24:29.502: INFO: Node jerma-worker is running more than one daemon pod May 15 21:24:30.338: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 21:24:30.342: INFO: Number of nodes with available pods: 1 May 15 21:24:30.342: INFO: Node jerma-worker is running more than one daemon pod May 15 21:24:31.343: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 21:24:31.347: INFO: Number of nodes with available pods: 1 May 15 21:24:31.347: INFO: Node jerma-worker is running more than one daemon pod May 15 21:24:32.337: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 21:24:32.341: INFO: Number of nodes with available pods: 1 May 15 21:24:32.341: INFO: Node jerma-worker is running more than one daemon pod May 15 21:24:33.336: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 21:24:33.339: INFO: Number of nodes with available pods: 1 May 15 21:24:33.339: INFO: Node jerma-worker is running more than one daemon pod May 15 21:24:34.347: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 21:24:34.350: INFO: Number of nodes with available pods: 1 May 15 21:24:34.350: INFO: Node jerma-worker is running more than one daemon pod May 15 21:24:35.338: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 21:24:35.342: INFO: Number of nodes with available pods: 2 May 15 21:24:35.342: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2531, will wait for the garbage collector to delete the pods May 15 21:24:35.409: INFO: Deleting DaemonSet.extensions daemon-set took: 10.670312ms May 15 
21:24:35.509: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.340155ms May 15 21:24:49.512: INFO: Number of nodes with available pods: 0 May 15 21:24:49.512: INFO: Number of running nodes: 0, number of available pods: 0 May 15 21:24:49.515: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2531/daemonsets","resourceVersion":"16468365"},"items":null} May 15 21:24:49.518: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2531/pods","resourceVersion":"16468365"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:24:49.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-2531" for this suite. • [SLOW TEST:28.538 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":278,"completed":58,"skipped":1040,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:24:49.534: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 15 21:24:53.657: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:24:53.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7205" for this suite. 
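
The termination-message check just above hinges on one field of the container spec: with TerminationMessagePolicy set to FallbackToLogsOnError, a container that fails without writing anything to its TerminationMessagePath gets the tail of its log (here the single line "DONE") copied into the termination message, which is what the "Expected: &{DONE} to match" assertion reads back. Below is a sketch of a container spec that would exercise this path; the name, image, and command are placeholder assumptions, not the test's own.

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        c := corev1.Container{
            Name:  "termination-message-container", // placeholder name
            Image: "busybox",                       // placeholder image
            // Print to stdout only, never touch the message path, then fail:
            Command: []string{"sh", "-c", "echo DONE; exit 1"},
            // Default path the kubelet reads the termination message from.
            TerminationMessagePath: "/dev/termination-log",
            // On error with an empty message file, fall back to the log tail.
            TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
        }
        fmt.Println(c.Name, "uses policy", c.TerminationMessagePolicy)
    }
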
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":59,"skipped":1050,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:24:53.998: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-2414 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-2414 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-2414 May 15 21:24:54.281: INFO: Found 0 stateful pods, waiting for 1 May 15 21:25:04.286: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod May 15 21:25:04.290: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2414 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 15 21:25:04.605: INFO: stderr: "I0515 21:25:04.468251 484 log.go:172] (0xc000c14000) (0xc000612820) Create stream\nI0515 21:25:04.468286 484 log.go:172] (0xc000c14000) (0xc000612820) Stream added, broadcasting: 1\nI0515 21:25:04.470271 484 log.go:172] (0xc000c14000) Reply frame received for 1\nI0515 21:25:04.470307 484 log.go:172] (0xc000c14000) (0xc00064bd60) Create stream\nI0515 21:25:04.470320 484 log.go:172] (0xc000c14000) (0xc00064bd60) Stream added, broadcasting: 3\nI0515 21:25:04.470996 484 log.go:172] (0xc000c14000) Reply frame received for 3\nI0515 21:25:04.471016 484 log.go:172] (0xc000c14000) (0xc00064be00) Create stream\nI0515 21:25:04.471027 484 log.go:172] (0xc000c14000) (0xc00064be00) Stream added, broadcasting: 5\nI0515 21:25:04.471846 484 log.go:172] (0xc000c14000) Reply frame received for 5\nI0515 21:25:04.546224 484 log.go:172] (0xc000c14000) Data frame received for 5\nI0515 21:25:04.546237 484 log.go:172] (0xc00064be00) (5) Data frame handling\nI0515 21:25:04.546246 484 log.go:172] (0xc00064be00) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0515 21:25:04.599125 484 log.go:172] (0xc000c14000) Data frame received for 3\nI0515 21:25:04.599148 484 log.go:172] (0xc00064bd60) (3) Data frame handling\nI0515 21:25:04.599160 484 log.go:172] (0xc00064bd60) 
(3) Data frame sent\nI0515 21:25:04.599171 484 log.go:172] (0xc000c14000) Data frame received for 3\nI0515 21:25:04.599189 484 log.go:172] (0xc00064bd60) (3) Data frame handling\nI0515 21:25:04.599271 484 log.go:172] (0xc000c14000) Data frame received for 5\nI0515 21:25:04.599291 484 log.go:172] (0xc00064be00) (5) Data frame handling\nI0515 21:25:04.600595 484 log.go:172] (0xc000c14000) Data frame received for 1\nI0515 21:25:04.600619 484 log.go:172] (0xc000612820) (1) Data frame handling\nI0515 21:25:04.600633 484 log.go:172] (0xc000612820) (1) Data frame sent\nI0515 21:25:04.600646 484 log.go:172] (0xc000c14000) (0xc000612820) Stream removed, broadcasting: 1\nI0515 21:25:04.600830 484 log.go:172] (0xc000c14000) Go away received\nI0515 21:25:04.600872 484 log.go:172] (0xc000c14000) (0xc000612820) Stream removed, broadcasting: 1\nI0515 21:25:04.600949 484 log.go:172] Streams opened: 2, map[spdy.StreamId]*spdystream.Stream{0x3:(*spdystream.Stream)(0xc00064bd60), 0x5:(*spdystream.Stream)(0xc00064be00)}\nI0515 21:25:04.600995 484 log.go:172] (0xc000c14000) (0xc00064bd60) Stream removed, broadcasting: 3\nI0515 21:25:04.601033 484 log.go:172] (0xc000c14000) (0xc00064be00) Stream removed, broadcasting: 5\n" May 15 21:25:04.605: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 15 21:25:04.605: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 15 21:25:04.608: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 15 21:25:14.633: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 15 21:25:14.633: INFO: Waiting for statefulset status.replicas updated to 0 May 15 21:25:14.647: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.99999961s May 15 21:25:15.650: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.99413359s May 15 21:25:16.654: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.991039812s May 15 21:25:17.679: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.987530485s May 15 21:25:18.684: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.962535736s May 15 21:25:19.688: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.957518807s May 15 21:25:20.702: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.953010969s May 15 21:25:21.707: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.939451572s May 15 21:25:22.714: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.933941533s May 15 21:25:23.718: INFO: Verifying statefulset ss doesn't scale past 1 for another 927.462595ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-2414 May 15 21:25:24.721: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2414 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 15 21:25:24.968: INFO: stderr: "I0515 21:25:24.847022 504 log.go:172] (0xc000af6000) (0xc0000e8dc0) Create stream\nI0515 21:25:24.847070 504 log.go:172] (0xc000af6000) (0xc0000e8dc0) Stream added, broadcasting: 1\nI0515 21:25:24.848529 504 log.go:172] (0xc000af6000) Reply frame received for 1\nI0515 21:25:24.848562 504 log.go:172] (0xc000af6000) (0xc00078e000) Create stream\nI0515 21:25:24.848570 504 log.go:172] (0xc000af6000) (0xc00078e000) Stream 
added, broadcasting: 3\nI0515 21:25:24.849569 504 log.go:172] (0xc000af6000) Reply frame received for 3\nI0515 21:25:24.849594 504 log.go:172] (0xc000af6000) (0xc000716000) Create stream\nI0515 21:25:24.849600 504 log.go:172] (0xc000af6000) (0xc000716000) Stream added, broadcasting: 5\nI0515 21:25:24.850407 504 log.go:172] (0xc000af6000) Reply frame received for 5\nI0515 21:25:24.962253 504 log.go:172] (0xc000af6000) Data frame received for 3\nI0515 21:25:24.962304 504 log.go:172] (0xc00078e000) (3) Data frame handling\nI0515 21:25:24.962320 504 log.go:172] (0xc00078e000) (3) Data frame sent\nI0515 21:25:24.962333 504 log.go:172] (0xc000af6000) Data frame received for 3\nI0515 21:25:24.962344 504 log.go:172] (0xc00078e000) (3) Data frame handling\nI0515 21:25:24.962380 504 log.go:172] (0xc000af6000) Data frame received for 5\nI0515 21:25:24.962393 504 log.go:172] (0xc000716000) (5) Data frame handling\nI0515 21:25:24.962405 504 log.go:172] (0xc000716000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0515 21:25:24.962480 504 log.go:172] (0xc000af6000) Data frame received for 5\nI0515 21:25:24.962492 504 log.go:172] (0xc000716000) (5) Data frame handling\nI0515 21:25:24.963683 504 log.go:172] (0xc000af6000) Data frame received for 1\nI0515 21:25:24.963710 504 log.go:172] (0xc0000e8dc0) (1) Data frame handling\nI0515 21:25:24.963739 504 log.go:172] (0xc0000e8dc0) (1) Data frame sent\nI0515 21:25:24.963766 504 log.go:172] (0xc000af6000) (0xc0000e8dc0) Stream removed, broadcasting: 1\nI0515 21:25:24.963785 504 log.go:172] (0xc000af6000) Go away received\nI0515 21:25:24.964242 504 log.go:172] (0xc000af6000) (0xc0000e8dc0) Stream removed, broadcasting: 1\nI0515 21:25:24.964259 504 log.go:172] (0xc000af6000) (0xc00078e000) Stream removed, broadcasting: 3\nI0515 21:25:24.964267 504 log.go:172] (0xc000af6000) (0xc000716000) Stream removed, broadcasting: 5\n" May 15 21:25:24.968: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 15 21:25:24.968: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 15 21:25:24.984: INFO: Found 1 stateful pods, waiting for 3 May 15 21:25:34.990: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 15 21:25:34.990: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 15 21:25:34.990: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod May 15 21:25:35.000: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2414 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 15 21:25:35.208: INFO: stderr: "I0515 21:25:35.127662 521 log.go:172] (0xc000bca000) (0xc00094a000) Create stream\nI0515 21:25:35.127731 521 log.go:172] (0xc000bca000) (0xc00094a000) Stream added, broadcasting: 1\nI0515 21:25:35.131446 521 log.go:172] (0xc000bca000) Reply frame received for 1\nI0515 21:25:35.131581 521 log.go:172] (0xc000bca000) (0xc00094a0a0) Create stream\nI0515 21:25:35.131593 521 log.go:172] (0xc000bca000) (0xc00094a0a0) Stream added, broadcasting: 3\nI0515 21:25:35.132620 521 log.go:172] (0xc000bca000) Reply frame received for 3\nI0515 21:25:35.132668 521 log.go:172] (0xc000bca000) (0xc00094a140) Create stream\nI0515 21:25:35.132683 
521 log.go:172] (0xc000bca000) (0xc00094a140) Stream added, broadcasting: 5\nI0515 21:25:35.134130 521 log.go:172] (0xc000bca000) Reply frame received for 5\nI0515 21:25:35.201636 521 log.go:172] (0xc000bca000) Data frame received for 3\nI0515 21:25:35.201690 521 log.go:172] (0xc00094a0a0) (3) Data frame handling\nI0515 21:25:35.201716 521 log.go:172] (0xc00094a0a0) (3) Data frame sent\nI0515 21:25:35.201750 521 log.go:172] (0xc000bca000) Data frame received for 3\nI0515 21:25:35.201757 521 log.go:172] (0xc00094a0a0) (3) Data frame handling\nI0515 21:25:35.201784 521 log.go:172] (0xc000bca000) Data frame received for 5\nI0515 21:25:35.201818 521 log.go:172] (0xc00094a140) (5) Data frame handling\nI0515 21:25:35.201835 521 log.go:172] (0xc00094a140) (5) Data frame sent\nI0515 21:25:35.201847 521 log.go:172] (0xc000bca000) Data frame received for 5\nI0515 21:25:35.201855 521 log.go:172] (0xc00094a140) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0515 21:25:35.203247 521 log.go:172] (0xc000bca000) Data frame received for 1\nI0515 21:25:35.203264 521 log.go:172] (0xc00094a000) (1) Data frame handling\nI0515 21:25:35.203270 521 log.go:172] (0xc00094a000) (1) Data frame sent\nI0515 21:25:35.203315 521 log.go:172] (0xc000bca000) (0xc00094a000) Stream removed, broadcasting: 1\nI0515 21:25:35.203380 521 log.go:172] (0xc000bca000) Go away received\nI0515 21:25:35.203655 521 log.go:172] (0xc000bca000) (0xc00094a000) Stream removed, broadcasting: 1\nI0515 21:25:35.203678 521 log.go:172] (0xc000bca000) (0xc00094a0a0) Stream removed, broadcasting: 3\nI0515 21:25:35.203691 521 log.go:172] (0xc000bca000) (0xc00094a140) Stream removed, broadcasting: 5\n" May 15 21:25:35.208: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 15 21:25:35.208: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 15 21:25:35.208: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2414 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 15 21:25:35.528: INFO: stderr: "I0515 21:25:35.386270 540 log.go:172] (0xc0009a2000) (0xc00098a000) Create stream\nI0515 21:25:35.386323 540 log.go:172] (0xc0009a2000) (0xc00098a000) Stream added, broadcasting: 1\nI0515 21:25:35.388042 540 log.go:172] (0xc0009a2000) Reply frame received for 1\nI0515 21:25:35.388065 540 log.go:172] (0xc0009a2000) (0xc000a1e0a0) Create stream\nI0515 21:25:35.388073 540 log.go:172] (0xc0009a2000) (0xc000a1e0a0) Stream added, broadcasting: 3\nI0515 21:25:35.388688 540 log.go:172] (0xc0009a2000) Reply frame received for 3\nI0515 21:25:35.388713 540 log.go:172] (0xc0009a2000) (0xc000968000) Create stream\nI0515 21:25:35.388723 540 log.go:172] (0xc0009a2000) (0xc000968000) Stream added, broadcasting: 5\nI0515 21:25:35.389518 540 log.go:172] (0xc0009a2000) Reply frame received for 5\nI0515 21:25:35.445831 540 log.go:172] (0xc0009a2000) Data frame received for 5\nI0515 21:25:35.445859 540 log.go:172] (0xc000968000) (5) Data frame handling\nI0515 21:25:35.445884 540 log.go:172] (0xc000968000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0515 21:25:35.518163 540 log.go:172] (0xc0009a2000) Data frame received for 3\nI0515 21:25:35.518197 540 log.go:172] (0xc000a1e0a0) (3) Data frame handling\nI0515 21:25:35.518222 540 log.go:172] (0xc000a1e0a0) (3) Data frame sent\nI0515 21:25:35.518507 540 log.go:172] 
(0xc0009a2000) Data frame received for 3\nI0515 21:25:35.518520 540 log.go:172] (0xc000a1e0a0) (3) Data frame handling\nI0515 21:25:35.518544 540 log.go:172] (0xc0009a2000) Data frame received for 5\nI0515 21:25:35.518558 540 log.go:172] (0xc000968000) (5) Data frame handling\nI0515 21:25:35.520695 540 log.go:172] (0xc0009a2000) Data frame received for 1\nI0515 21:25:35.521615 540 log.go:172] (0xc00098a000) (1) Data frame handling\nI0515 21:25:35.521682 540 log.go:172] (0xc00098a000) (1) Data frame sent\nI0515 21:25:35.521716 540 log.go:172] (0xc0009a2000) (0xc00098a000) Stream removed, broadcasting: 1\nI0515 21:25:35.521752 540 log.go:172] (0xc0009a2000) Go away received\nI0515 21:25:35.522578 540 log.go:172] (0xc0009a2000) (0xc00098a000) Stream removed, broadcasting: 1\nI0515 21:25:35.522745 540 log.go:172] (0xc0009a2000) (0xc000a1e0a0) Stream removed, broadcasting: 3\nI0515 21:25:35.522802 540 log.go:172] (0xc0009a2000) (0xc000968000) Stream removed, broadcasting: 5\n" May 15 21:25:35.528: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 15 21:25:35.528: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 15 21:25:35.528: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2414 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 15 21:25:35.761: INFO: stderr: "I0515 21:25:35.669671 564 log.go:172] (0xc000ac4d10) (0xc000b546e0) Create stream\nI0515 21:25:35.669725 564 log.go:172] (0xc000ac4d10) (0xc000b546e0) Stream added, broadcasting: 1\nI0515 21:25:35.672233 564 log.go:172] (0xc000ac4d10) Reply frame received for 1\nI0515 21:25:35.672287 564 log.go:172] (0xc000ac4d10) (0xc000aa6140) Create stream\nI0515 21:25:35.672305 564 log.go:172] (0xc000ac4d10) (0xc000aa6140) Stream added, broadcasting: 3\nI0515 21:25:35.673455 564 log.go:172] (0xc000ac4d10) Reply frame received for 3\nI0515 21:25:35.673496 564 log.go:172] (0xc000ac4d10) (0xc000b54780) Create stream\nI0515 21:25:35.673509 564 log.go:172] (0xc000ac4d10) (0xc000b54780) Stream added, broadcasting: 5\nI0515 21:25:35.674328 564 log.go:172] (0xc000ac4d10) Reply frame received for 5\nI0515 21:25:35.726462 564 log.go:172] (0xc000ac4d10) Data frame received for 5\nI0515 21:25:35.726503 564 log.go:172] (0xc000b54780) (5) Data frame handling\nI0515 21:25:35.726544 564 log.go:172] (0xc000b54780) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0515 21:25:35.752650 564 log.go:172] (0xc000ac4d10) Data frame received for 3\nI0515 21:25:35.752897 564 log.go:172] (0xc000aa6140) (3) Data frame handling\nI0515 21:25:35.753082 564 log.go:172] (0xc000aa6140) (3) Data frame sent\nI0515 21:25:35.753505 564 log.go:172] (0xc000ac4d10) Data frame received for 5\nI0515 21:25:35.753535 564 log.go:172] (0xc000b54780) (5) Data frame handling\nI0515 21:25:35.753552 564 log.go:172] (0xc000ac4d10) Data frame received for 3\nI0515 21:25:35.753562 564 log.go:172] (0xc000aa6140) (3) Data frame handling\nI0515 21:25:35.755706 564 log.go:172] (0xc000ac4d10) Data frame received for 1\nI0515 21:25:35.755742 564 log.go:172] (0xc000b546e0) (1) Data frame handling\nI0515 21:25:35.755782 564 log.go:172] (0xc000b546e0) (1) Data frame sent\nI0515 21:25:35.755819 564 log.go:172] (0xc000ac4d10) (0xc000b546e0) Stream removed, broadcasting: 1\nI0515 21:25:35.755873 564 log.go:172] (0xc000ac4d10) Go away received\nI0515 21:25:35.757065 564 log.go:172] 
(0xc000ac4d10) (0xc000b546e0) Stream removed, broadcasting: 1\nI0515 21:25:35.757095 564 log.go:172] (0xc000ac4d10) (0xc000aa6140) Stream removed, broadcasting: 3\nI0515 21:25:35.757432 564 log.go:172] (0xc000ac4d10) (0xc000b54780) Stream removed, broadcasting: 5\n" May 15 21:25:35.761: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 15 21:25:35.761: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 15 21:25:35.761: INFO: Waiting for statefulset status.replicas updated to 0 May 15 21:25:35.781: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 May 15 21:25:45.792: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 15 21:25:45.792: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 15 21:25:45.792: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 15 21:25:45.803: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999768s May 15 21:25:46.809: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.993010511s May 15 21:25:47.814: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.98722543s May 15 21:25:48.819: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.982687105s May 15 21:25:49.823: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.977652817s May 15 21:25:50.829: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.973075984s May 15 21:25:51.833: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.967021669s May 15 21:25:52.840: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.963366082s May 15 21:25:53.843: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.956653979s May 15 21:25:54.846: INFO: Verifying statefulset ss doesn't scale past 3 for another 953.266671ms STEP: Scaling down stateful set ss to 0 replicas and waiting until no pods are running in namespace statefulset-2414 May 15 21:25:55.859: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2414 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 15 21:25:56.049: INFO: stderr: "I0515 21:25:55.965707 584 log.go:172] (0xc00098cc60) (0xc0004c41e0) Create stream\nI0515 21:25:55.965745 584 log.go:172] (0xc00098cc60) (0xc0004c41e0) Stream added, broadcasting: 1\nI0515 21:25:55.967387 584 log.go:172] (0xc00098cc60) Reply frame received for 1\nI0515 21:25:55.967445 584 log.go:172] (0xc00098cc60) (0xc000233ae0) Create stream\nI0515 21:25:55.967462 584 log.go:172] (0xc00098cc60) (0xc000233ae0) Stream added, broadcasting: 3\nI0515 21:25:55.968260 584 log.go:172] (0xc00098cc60) Reply frame received for 3\nI0515 21:25:55.968292 584 log.go:172] (0xc00098cc60) (0xc0004c4280) Create stream\nI0515 21:25:55.968303 584 log.go:172] (0xc00098cc60) (0xc0004c4280) Stream added, broadcasting: 5\nI0515 21:25:55.969087 584 log.go:172] (0xc00098cc60) Reply frame received for 5\nI0515 21:25:56.039581 584 log.go:172] (0xc00098cc60) Data frame received for 5\nI0515 21:25:56.039609 584 log.go:172] (0xc0004c4280) (5) Data frame handling\nI0515 21:25:56.039633 584 log.go:172] (0xc0004c4280) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0515 21:25:56.044325 584 log.go:172] (0xc00098cc60) Data frame received for 3\nI0515
21:25:56.044418 584 log.go:172] (0xc00098cc60) Data frame received for 5\nI0515 21:25:56.044435 584 log.go:172] (0xc0004c4280) (5) Data frame handling\nI0515 21:25:56.044451 584 log.go:172] (0xc000233ae0) (3) Data frame handling\nI0515 21:25:56.044461 584 log.go:172] (0xc000233ae0) (3) Data frame sent\nI0515 21:25:56.044759 584 log.go:172] (0xc00098cc60) Data frame received for 3\nI0515 21:25:56.044773 584 log.go:172] (0xc000233ae0) (3) Data frame handling\nI0515 21:25:56.046093 584 log.go:172] (0xc00098cc60) Data frame received for 1\nI0515 21:25:56.046130 584 log.go:172] (0xc0004c41e0) (1) Data frame handling\nI0515 21:25:56.046150 584 log.go:172] (0xc0004c41e0) (1) Data frame sent\nI0515 21:25:56.046165 584 log.go:172] (0xc00098cc60) (0xc0004c41e0) Stream removed, broadcasting: 1\nI0515 21:25:56.046179 584 log.go:172] (0xc00098cc60) Go away received\nI0515 21:25:56.046472 584 log.go:172] (0xc00098cc60) (0xc0004c41e0) Stream removed, broadcasting: 1\nI0515 21:25:56.046486 584 log.go:172] (0xc00098cc60) (0xc000233ae0) Stream removed, broadcasting: 3\nI0515 21:25:56.046495 584 log.go:172] (0xc00098cc60) (0xc0004c4280) Stream removed, broadcasting: 5\n" May 15 21:25:56.049: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 15 21:25:56.049: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 15 21:25:56.049: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2414 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 15 21:25:56.274: INFO: stderr: "I0515 21:25:56.195394 602 log.go:172] (0xc000990000) (0xc000aac000) Create stream\nI0515 21:25:56.195495 602 log.go:172] (0xc000990000) (0xc000aac000) Stream added, broadcasting: 1\nI0515 21:25:56.197360 602 log.go:172] (0xc000990000) Reply frame received for 1\nI0515 21:25:56.197418 602 log.go:172] (0xc000990000) (0xc0006edb80) Create stream\nI0515 21:25:56.197441 602 log.go:172] (0xc000990000) (0xc0006edb80) Stream added, broadcasting: 3\nI0515 21:25:56.198460 602 log.go:172] (0xc000990000) Reply frame received for 3\nI0515 21:25:56.198498 602 log.go:172] (0xc000990000) (0xc000394000) Create stream\nI0515 21:25:56.198515 602 log.go:172] (0xc000990000) (0xc000394000) Stream added, broadcasting: 5\nI0515 21:25:56.199312 602 log.go:172] (0xc000990000) Reply frame received for 5\nI0515 21:25:56.266956 602 log.go:172] (0xc000990000) Data frame received for 5\nI0515 21:25:56.266989 602 log.go:172] (0xc000394000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0515 21:25:56.267021 602 log.go:172] (0xc000990000) Data frame received for 3\nI0515 21:25:56.267059 602 log.go:172] (0xc0006edb80) (3) Data frame handling\nI0515 21:25:56.267081 602 log.go:172] (0xc0006edb80) (3) Data frame sent\nI0515 21:25:56.267111 602 log.go:172] (0xc000394000) (5) Data frame sent\nI0515 21:25:56.267161 602 log.go:172] (0xc000990000) Data frame received for 5\nI0515 21:25:56.267180 602 log.go:172] (0xc000394000) (5) Data frame handling\nI0515 21:25:56.267210 602 log.go:172] (0xc000990000) Data frame received for 3\nI0515 21:25:56.267231 602 log.go:172] (0xc0006edb80) (3) Data frame handling\nI0515 21:25:56.268764 602 log.go:172] (0xc000990000) Data frame received for 1\nI0515 21:25:56.268794 602 log.go:172] (0xc000aac000) (1) Data frame handling\nI0515 21:25:56.268861 602 log.go:172] (0xc000aac000) (1) Data frame sent\nI0515 21:25:56.268883 602 
log.go:172] (0xc000990000) (0xc000aac000) Stream removed, broadcasting: 1\nI0515 21:25:56.268903 602 log.go:172] (0xc000990000) Go away received\nI0515 21:25:56.269470 602 log.go:172] (0xc000990000) (0xc000aac000) Stream removed, broadcasting: 1\nI0515 21:25:56.269498 602 log.go:172] (0xc000990000) (0xc0006edb80) Stream removed, broadcasting: 3\nI0515 21:25:56.269514 602 log.go:172] (0xc000990000) (0xc000394000) Stream removed, broadcasting: 5\n" May 15 21:25:56.274: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 15 21:25:56.274: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 15 21:25:56.274: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2414 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 15 21:25:56.501: INFO: stderr: "I0515 21:25:56.414148 624 log.go:172] (0xc0009926e0) (0xc0008c0280) Create stream\nI0515 21:25:56.414210 624 log.go:172] (0xc0009926e0) (0xc0008c0280) Stream added, broadcasting: 1\nI0515 21:25:56.416941 624 log.go:172] (0xc0009926e0) Reply frame received for 1\nI0515 21:25:56.416973 624 log.go:172] (0xc0009926e0) (0xc0005ea6e0) Create stream\nI0515 21:25:56.416984 624 log.go:172] (0xc0009926e0) (0xc0005ea6e0) Stream added, broadcasting: 3\nI0515 21:25:56.418336 624 log.go:172] (0xc0009926e0) Reply frame received for 3\nI0515 21:25:56.418375 624 log.go:172] (0xc0009926e0) (0xc0008c0320) Create stream\nI0515 21:25:56.418385 624 log.go:172] (0xc0009926e0) (0xc0008c0320) Stream added, broadcasting: 5\nI0515 21:25:56.419442 624 log.go:172] (0xc0009926e0) Reply frame received for 5\nI0515 21:25:56.494651 624 log.go:172] (0xc0009926e0) Data frame received for 3\nI0515 21:25:56.494685 624 log.go:172] (0xc0005ea6e0) (3) Data frame handling\nI0515 21:25:56.494724 624 log.go:172] (0xc0009926e0) Data frame received for 5\nI0515 21:25:56.494746 624 log.go:172] (0xc0008c0320) (5) Data frame handling\nI0515 21:25:56.494760 624 log.go:172] (0xc0008c0320) (5) Data frame sent\nI0515 21:25:56.494773 624 log.go:172] (0xc0009926e0) Data frame received for 5\nI0515 21:25:56.494783 624 log.go:172] (0xc0008c0320) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0515 21:25:56.494819 624 log.go:172] (0xc0005ea6e0) (3) Data frame sent\nI0515 21:25:56.494853 624 log.go:172] (0xc0009926e0) Data frame received for 3\nI0515 21:25:56.494865 624 log.go:172] (0xc0005ea6e0) (3) Data frame handling\nI0515 21:25:56.496012 624 log.go:172] (0xc0009926e0) Data frame received for 1\nI0515 21:25:56.496085 624 log.go:172] (0xc0008c0280) (1) Data frame handling\nI0515 21:25:56.496117 624 log.go:172] (0xc0008c0280) (1) Data frame sent\nI0515 21:25:56.496139 624 log.go:172] (0xc0009926e0) (0xc0008c0280) Stream removed, broadcasting: 1\nI0515 21:25:56.496157 624 log.go:172] (0xc0009926e0) Go away received\nI0515 21:25:56.496657 624 log.go:172] (0xc0009926e0) (0xc0008c0280) Stream removed, broadcasting: 1\nI0515 21:25:56.496677 624 log.go:172] (0xc0009926e0) (0xc0005ea6e0) Stream removed, broadcasting: 3\nI0515 21:25:56.496700 624 log.go:172] (0xc0009926e0) (0xc0008c0320) Stream removed, broadcasting: 5\n" May 15 21:25:56.501: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 15 21:25:56.501: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 15 
21:25:56.501: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 15 21:26:16.517: INFO: Deleting all statefulset in ns statefulset-2414 May 15 21:26:16.521: INFO: Scaling statefulset ss to 0 May 15 21:26:16.530: INFO: Waiting for statefulset status.replicas updated to 0 May 15 21:26:16.532: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:26:16.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2414" for this suite. • [SLOW TEST:82.578 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":278,"completed":60,"skipped":1073,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:26:16.576: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 15 21:26:16.715: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8e0af805-5150-4fbf-a767-8eb4815e1310" in namespace "projected-2190" to be "success or failure" May 15 21:26:16.718: INFO: Pod "downwardapi-volume-8e0af805-5150-4fbf-a767-8eb4815e1310": Phase="Pending", Reason="", readiness=false. Elapsed: 2.448381ms May 15 21:26:18.787: INFO: Pod "downwardapi-volume-8e0af805-5150-4fbf-a767-8eb4815e1310": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071775001s May 15 21:26:20.791: INFO: Pod "downwardapi-volume-8e0af805-5150-4fbf-a767-8eb4815e1310": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.075573664s STEP: Saw pod success May 15 21:26:20.791: INFO: Pod "downwardapi-volume-8e0af805-5150-4fbf-a767-8eb4815e1310" satisfied condition "success or failure" May 15 21:26:20.794: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-8e0af805-5150-4fbf-a767-8eb4815e1310 container client-container: STEP: delete the pod May 15 21:26:21.032: INFO: Waiting for pod downwardapi-volume-8e0af805-5150-4fbf-a767-8eb4815e1310 to disappear May 15 21:26:21.074: INFO: Pod downwardapi-volume-8e0af805-5150-4fbf-a767-8eb4815e1310 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:26:21.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2190" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":61,"skipped":1080,"failed":0} SSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:26:21.159: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 May 15 21:26:21.365: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 15 21:26:21.374: INFO: Waiting for terminating namespaces to be deleted... 
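
A note on the Projected downwardAPI test that just passed above: it mounts the container's limits.memory through a projected downward API volume, and because the container declares no memory limit, the kubelet substitutes the node's allocatable memory as the default value, which is what the test asserts. Below is a sketch of such a volume built with client-go types; the volume, file, and container names are illustrative, not the test's own.

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        // A projected downward API volume exposing the container's memory
        // limit as a file inside the pod.
        vol := corev1.Volume{
            Name: "podinfo", // illustrative name
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{{
                        DownwardAPI: &corev1.DownwardAPIProjection{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                Path: "memory_limit",
                                ResourceFieldRef: &corev1.ResourceFieldSelector{
                                    // A volume-level resourceFieldRef must name the container.
                                    ContainerName: "client-container",
                                    Resource:      "limits.memory",
                                },
                            }},
                        },
                    }},
                },
            },
        }
        fmt.Println(vol.Name)
    }
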
May 15 21:26:21.376: INFO: Logging pods the kubelet thinks are on node jerma-worker before test May 15 21:26:21.387: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 15 21:26:21.387: INFO: Container kindnet-cni ready: true, restart count 0 May 15 21:26:21.387: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 15 21:26:21.387: INFO: Container kube-proxy ready: true, restart count 0 May 15 21:26:21.387: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test May 15 21:26:21.392: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container status recorded) May 15 21:26:21.392: INFO: Container kube-hunter ready: false, restart count 0 May 15 21:26:21.392: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 15 21:26:21.392: INFO: Container kindnet-cni ready: true, restart count 0 May 15 21:26:21.392: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container status recorded) May 15 21:26:21.392: INFO: Container kube-bench ready: false, restart count 0 May 15 21:26:21.392: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 15 21:26:21.392: INFO: Container kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: verifying the node has the label node jerma-worker STEP: verifying the node has the label node jerma-worker2 May 15 21:26:21.543: INFO: Pod kindnet-c5svj requesting resource cpu=100m on Node jerma-worker May 15 21:26:21.543: INFO: Pod kindnet-zk6sq requesting resource cpu=100m on Node jerma-worker2 May 15 21:26:21.543: INFO: Pod kube-proxy-44mlz requesting resource cpu=0m on Node jerma-worker May 15 21:26:21.543: INFO: Pod kube-proxy-75q42 requesting resource cpu=0m on Node jerma-worker2 STEP: Starting Pods to consume most of the cluster CPU. May 15 21:26:21.543: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker May 15 21:26:21.549: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker2 STEP: Creating another pod that requires an unavailable amount of CPU.
STEP: Considering event: Type = [Normal], Name = [filler-pod-5d63019e-3977-4c21-8879-c23b0e010056.160f50c095ac1713], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7996/filler-pod-5d63019e-3977-4c21-8879-c23b0e010056 to jerma-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-5d63019e-3977-4c21-8879-c23b0e010056.160f50c0e449a954], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-5d63019e-3977-4c21-8879-c23b0e010056.160f50c13f5b2675], Reason = [Created], Message = [Created container filler-pod-5d63019e-3977-4c21-8879-c23b0e010056] STEP: Considering event: Type = [Normal], Name = [filler-pod-5d63019e-3977-4c21-8879-c23b0e010056.160f50c15636e130], Reason = [Started], Message = [Started container filler-pod-5d63019e-3977-4c21-8879-c23b0e010056] STEP: Considering event: Type = [Normal], Name = [filler-pod-a302a4e1-0275-4470-97f1-c2a55231b0d8.160f50c09751b2bb], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7996/filler-pod-a302a4e1-0275-4470-97f1-c2a55231b0d8 to jerma-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-a302a4e1-0275-4470-97f1-c2a55231b0d8.160f50c125e4e419], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-a302a4e1-0275-4470-97f1-c2a55231b0d8.160f50c15d6f5bd5], Reason = [Created], Message = [Created container filler-pod-a302a4e1-0275-4470-97f1-c2a55231b0d8] STEP: Considering event: Type = [Normal], Name = [filler-pod-a302a4e1-0275-4470-97f1-c2a55231b0d8.160f50c16b0b4f03], Reason = [Started], Message = [Started container filler-pod-a302a4e1-0275-4470-97f1-c2a55231b0d8] STEP: Considering event: Type = [Warning], Name = [additional-pod.160f50c1fde03981], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node jerma-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node jerma-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:26:28.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7996" for this suite. 
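
The FailedScheduling event above is the expected result: each filler pod requests cpu=11130m, sized to soak up a node's remaining allocatable CPU, so the extra pod fits on neither worker ("2 Insufficient cpu") and the control-plane node is excluded by its master taint. A sketch of how such a CPU request is declared with client-go types; the pod name is a placeholder, while the image and the 11130m figure come from the log.

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        // Request sized to the node's remaining allocatable CPU, as in the
        // cpu=11130m filler pods above.
        cpu := resource.MustParse("11130m")
        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "filler-pod"}, // placeholder name
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:  "filler",
                    Image: "k8s.gcr.io/pause:3.1", // image named in the events above
                    Resources: corev1.ResourceRequirements{
                        Requests: corev1.ResourceList{corev1.ResourceCPU: cpu},
                        Limits:   corev1.ResourceList{corev1.ResourceCPU: cpu},
                    },
                }},
            },
        }
        fmt.Println(pod.Name, "requests", pod.Spec.Containers[0].Resources.Requests.Cpu())
    }

Scheduling is driven by requests, not limits: the scheduler sums the requests of the pods bound to each node and rejects any pod whose request exceeds what remains of node allocatable, which is exactly the "Insufficient cpu" outcome recorded above.
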
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:7.537 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":278,"completed":62,"skipped":1083,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:26:28.697: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 15 21:26:28.816: INFO: Pod name rollover-pod: Found 0 pods out of 1 May 15 21:26:33.823: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 15 21:26:33.823: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready May 15 21:26:35.828: INFO: Creating deployment "test-rollover-deployment" May 15 21:26:35.841: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations May 15 21:26:37.848: INFO: Check revision of new replica set for deployment "test-rollover-deployment" May 15 21:26:37.853: INFO: Ensure that both replica sets have 1 created replica May 15 21:26:37.860: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update May 15 21:26:37.866: INFO: Updating deployment test-rollover-deployment May 15 21:26:37.866: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller May 15 21:26:39.875: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 May 15 21:26:39.882: INFO: Make sure deployment "test-rollover-deployment" is complete May 15 21:26:39.888: INFO: all replica sets need to contain the pod-template-hash label May 15 21:26:39.888: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725174795, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725174795, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725174798, loc:(*time.Location)(0x78ee0c0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725174795, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 15 21:26:41.930: INFO: all replica sets need to contain the pod-template-hash label May 15 21:26:41.930: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725174795, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725174795, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725174800, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725174795, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 15 21:26:43.895: INFO: all replica sets need to contain the pod-template-hash label May 15 21:26:43.895: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725174795, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725174795, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725174800, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725174795, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 15 21:26:45.896: INFO: all replica sets need to contain the pod-template-hash label May 15 21:26:45.896: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725174795, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725174795, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725174800, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725174795, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 15 21:26:47.896: INFO: all replica sets need to contain the pod-template-hash label May 15 21:26:47.896: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725174795, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725174795, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725174800, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725174795, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 15 21:26:49.903: INFO: all replica sets need to contain the pod-template-hash label May 15 21:26:49.903: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725174795, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725174795, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725174800, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725174795, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 15 21:26:51.904: INFO: May 15 21:26:51.904: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 15 21:26:51.912: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-4049 /apis/apps/v1/namespaces/deployment-4049/deployments/test-rollover-deployment 830ca775-2dac-451a-90d7-a21fa7cd4e37 16469113 2 2020-05-15 21:26:35 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004426658 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-15 21:26:35 +0000 UTC,LastTransitionTime:2020-05-15 21:26:35 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-05-15 21:26:50 +0000 UTC,LastTransitionTime:2020-05-15 21:26:35 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 15 21:26:51.914: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff deployment-4049 /apis/apps/v1/namespaces/deployment-4049/replicasets/test-rollover-deployment-574d6dfbff 553fe68e-93c9-4b1d-bafd-aba09fa5596d 16469103 2 2020-05-15 21:26:37 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 830ca775-2dac-451a-90d7-a21fa7cd4e37 0xc004426ab7 0xc004426ab8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004426b28 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 15 21:26:51.914: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": May 15 21:26:51.914: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-4049 /apis/apps/v1/namespaces/deployment-4049/replicasets/test-rollover-controller b61b8a1e-ff40-47df-842c-587613e9914e 16469112 2 2020-05-15 21:26:28 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 830ca775-2dac-451a-90d7-a21fa7cd4e37 0xc0044269cf 0xc0044269e0}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine 
[] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc004426a48 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 15 21:26:51.914: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-4049 /apis/apps/v1/namespaces/deployment-4049/replicasets/test-rollover-deployment-f6c94f66c ed826fff-011a-4a17-a28d-3cfd58481852 16469052 2 2020-05-15 21:26:35 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 830ca775-2dac-451a-90d7-a21fa7cd4e37 0xc004426b90 0xc004426b91}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004426c08 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 15 21:26:51.917: INFO: Pod "test-rollover-deployment-574d6dfbff-szs86" is available: &Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-szs86 test-rollover-deployment-574d6dfbff- deployment-4049 /api/v1/namespaces/deployment-4049/pods/test-rollover-deployment-574d6dfbff-szs86 462dd26c-a381-4b9b-adfa-a566e5794b68 16469070 0 2020-05-15 21:26:37 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff 553fe68e-93c9-4b1d-bafd-aba09fa5596d 0xc0043e7b47 0xc0043e7b48}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h8477,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h8477,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h8477,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:26:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:26:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:26:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:26:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.22,StartTime:2020-05-15 21:26:38 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-15 21:26:40 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://c30cf9d87a95b351aabf152f3425bbb3a75bae007a37298b02b5eda7c38ea203,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.22,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:26:51.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4049" for this suite. • [SLOW TEST:23.227 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":278,"completed":63,"skipped":1115,"failed":0} SS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:26:51.924: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 15 21:26:52.001: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c4672794-425f-4410-94dd-c2c6bbfc2458" in namespace "projected-1320" to be "success or failure" May 15 21:26:52.020: INFO: Pod "downwardapi-volume-c4672794-425f-4410-94dd-c2c6bbfc2458": Phase="Pending", Reason="", readiness=false. Elapsed: 18.720868ms May 15 21:26:54.024: INFO: Pod "downwardapi-volume-c4672794-425f-4410-94dd-c2c6bbfc2458": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022703173s May 15 21:26:56.027: INFO: Pod "downwardapi-volume-c4672794-425f-4410-94dd-c2c6bbfc2458": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.026370342s STEP: Saw pod success May 15 21:26:56.027: INFO: Pod "downwardapi-volume-c4672794-425f-4410-94dd-c2c6bbfc2458" satisfied condition "success or failure" May 15 21:26:56.030: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-c4672794-425f-4410-94dd-c2c6bbfc2458 container client-container: STEP: delete the pod May 15 21:26:56.198: INFO: Waiting for pod downwardapi-volume-c4672794-425f-4410-94dd-c2c6bbfc2458 to disappear May 15 21:26:56.224: INFO: Pod downwardapi-volume-c4672794-425f-4410-94dd-c2c6bbfc2458 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:26:56.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1320" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":64,"skipped":1117,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:26:56.231: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:27:02.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5751" for this suite. 
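For reference, a minimal sketch (Go, using the k8s.io/api types this suite is built on) of the kind of pod the read-only-root-filesystem spec above creates. The pod name, image, and command here are illustrative assumptions, not values taken from this run; the essential part is SecurityContext.ReadOnlyRootFilesystem, which makes the shell's write to / fail:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func boolPtr(b bool) *bool { return &b }

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-readonly-fs"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "busybox",
				Image: "docker.io/library/busybox:1.29", // illustrative image
				// The write to /file is expected to fail on a read-only root fs.
				Command: []string{"/bin/sh", "-c", "echo test > /file; sleep 240"},
				SecurityContext: &corev1.SecurityContext{
					ReadOnlyRootFilesystem: boolPtr(true),
				},
			}},
		},
	}
	fmt.Println(pod.Name)
}
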
• [SLOW TEST:6.176 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187 should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":65,"skipped":1168,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:27:02.407: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-0a3bca21-c631-4c99-812f-27933a5ba8a7 STEP: Creating a pod to test consume configMaps May 15 21:27:02.485: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-fefaa8fe-e3a7-4146-91f6-ba44a9fbc11e" in namespace "projected-8088" to be "success or failure" May 15 21:27:02.513: INFO: Pod "pod-projected-configmaps-fefaa8fe-e3a7-4146-91f6-ba44a9fbc11e": Phase="Pending", Reason="", readiness=false. Elapsed: 28.449853ms May 15 21:27:04.525: INFO: Pod "pod-projected-configmaps-fefaa8fe-e3a7-4146-91f6-ba44a9fbc11e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040072351s May 15 21:27:06.529: INFO: Pod "pod-projected-configmaps-fefaa8fe-e3a7-4146-91f6-ba44a9fbc11e": Phase="Running", Reason="", readiness=true. Elapsed: 4.044352766s May 15 21:27:08.534: INFO: Pod "pod-projected-configmaps-fefaa8fe-e3a7-4146-91f6-ba44a9fbc11e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.048869311s STEP: Saw pod success May 15 21:27:08.534: INFO: Pod "pod-projected-configmaps-fefaa8fe-e3a7-4146-91f6-ba44a9fbc11e" satisfied condition "success or failure" May 15 21:27:08.538: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-fefaa8fe-e3a7-4146-91f6-ba44a9fbc11e container projected-configmap-volume-test: STEP: delete the pod May 15 21:27:08.572: INFO: Waiting for pod pod-projected-configmaps-fefaa8fe-e3a7-4146-91f6-ba44a9fbc11e to disappear May 15 21:27:08.595: INFO: Pod pod-projected-configmaps-fefaa8fe-e3a7-4146-91f6-ba44a9fbc11e no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:27:08.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8088" for this suite. 
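The projected-ConfigMap spec above mounts a ConfigMap through a projected volume, remaps a key to a nested path, and runs the container as a non-root UID. A compact sketch of such a pod; the ConfigMap name, key, paths, UID, and test image are assumptions for illustration:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64Ptr(i int64) *int64 { return &i }

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-example"},
		Spec: corev1.PodSpec{
			// Run the whole pod as a non-root UID.
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: int64Ptr(1000)},
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume-map"},
								// Remap key "data-1" to a nested path inside the mount.
								Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "projected-configmap-volume-test",
				Image:        "gcr.io/kubernetes-e2e-test-images/mounttest:1.0", // assumed test image
				Args:         []string{"--file_content=/etc/projected-configmap-volume/path/to/data-2"},
				VolumeMounts: []corev1.VolumeMount{{Name: "projected-configmap-volume", MountPath: "/etc/projected-configmap-volume", ReadOnly: true}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	fmt.Println(pod.Name)
}
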
• [SLOW TEST:6.196 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":278,"completed":66,"skipped":1184,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:27:08.604: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation May 15 21:27:08.740: INFO: >>> kubeConfig: /root/.kube/config May 15 21:27:11.650: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:27:21.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5263" for this suite. 
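The CRD-publishing spec above registers CRDs in two different API groups and then checks that both show up in the apiserver's OpenAPI document. A sketch of one such CRD, built with the apiextensions v1 Go types; the group and kind names are made up for illustration, and the point is that each served version carries an OpenAPI v3 schema that the apiserver then publishes for tools like kubectl explain:

package main

import (
	"fmt"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	crd := apiextensionsv1.CustomResourceDefinition{
		// CRD names must be <plural>.<group>.
		ObjectMeta: metav1.ObjectMeta{Name: "foos.group-a.example.com"},
		Spec: apiextensionsv1.CustomResourceDefinitionSpec{
			Group: "group-a.example.com", // the second CRD would use a different group
			Scope: apiextensionsv1.NamespaceScoped,
			Names: apiextensionsv1.CustomResourceDefinitionNames{
				Plural: "foos", Singular: "foo", Kind: "Foo", ListKind: "FooList",
			},
			Versions: []apiextensionsv1.CustomResourceDefinitionVersion{{
				Name: "v1", Served: true, Storage: true,
				// The schema attached here is what gets published as OpenAPI.
				Schema: &apiextensionsv1.CustomResourceValidation{
					OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{Type: "object"},
				},
			}},
		},
	}
	fmt.Println(crd.Name)
}
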
• [SLOW TEST:12.535 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":278,"completed":67,"skipped":1196,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:27:21.139: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-e14bd998-295a-4a88-932c-ab6fa19d6e07 STEP: Creating a pod to test consume secrets May 15 21:27:21.492: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b0f45cf6-3779-47d8-9587-a7677dd7f502" in namespace "projected-6839" to be "success or failure" May 15 21:27:21.495: INFO: Pod "pod-projected-secrets-b0f45cf6-3779-47d8-9587-a7677dd7f502": Phase="Pending", Reason="", readiness=false. Elapsed: 3.227128ms May 15 21:27:23.549: INFO: Pod "pod-projected-secrets-b0f45cf6-3779-47d8-9587-a7677dd7f502": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057010321s May 15 21:27:25.553: INFO: Pod "pod-projected-secrets-b0f45cf6-3779-47d8-9587-a7677dd7f502": Phase="Running", Reason="", readiness=true. Elapsed: 4.061257742s May 15 21:27:27.558: INFO: Pod "pod-projected-secrets-b0f45cf6-3779-47d8-9587-a7677dd7f502": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.065907992s STEP: Saw pod success May 15 21:27:27.558: INFO: Pod "pod-projected-secrets-b0f45cf6-3779-47d8-9587-a7677dd7f502" satisfied condition "success or failure" May 15 21:27:27.561: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-b0f45cf6-3779-47d8-9587-a7677dd7f502 container projected-secret-volume-test: STEP: delete the pod May 15 21:27:27.580: INFO: Waiting for pod pod-projected-secrets-b0f45cf6-3779-47d8-9587-a7677dd7f502 to disappear May 15 21:27:27.606: INFO: Pod pod-projected-secrets-b0f45cf6-3779-47d8-9587-a7677dd7f502 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:27:27.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6839" for this suite. 
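The projected-secret spec above sets both a volume defaultMode and a pod-level fsGroup while running as a non-root UID. A sketch of the relevant wiring (secret name, UID/GID values, mode, and image are illustrative assumptions); note that the API stores the mode as a plain int32, so the DefaultMode:*420 seen in the earlier Deployment dumps is simply 0644 in decimal:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64Ptr(i int64) *int64 { return &i }

func main() {
	mode := int32(0440) // group-readable, so the fsGroup below matters
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-example"},
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser: int64Ptr(1000),
				FSGroup:   int64Ptr(1001), // projected files are group-owned by this GID
			},
			Volumes: []corev1.Volume{{
				Name: "projected-secret-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						DefaultMode: &mode,
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test"},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "projected-secret-volume-test",
				Image:        "gcr.io/kubernetes-e2e-test-images/mounttest:1.0", // assumed test image
				Args:         []string{"--file_perm=/etc/projected-secret-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{Name: "projected-secret-volume", MountPath: "/etc/projected-secret-volume", ReadOnly: true}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	fmt.Println(pod.Name)
}
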
• [SLOW TEST:6.495 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":68,"skipped":1206,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:27:27.634: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-61624014-27be-4d2a-b37e-f981f0ea706b STEP: Creating a pod to test consume configMaps May 15 21:27:27.704: INFO: Waiting up to 5m0s for pod "pod-configmaps-4634e73d-f655-4453-870a-95ec4b14f565" in namespace "configmap-7856" to be "success or failure" May 15 21:27:27.706: INFO: Pod "pod-configmaps-4634e73d-f655-4453-870a-95ec4b14f565": Phase="Pending", Reason="", readiness=false. Elapsed: 2.155729ms May 15 21:27:29.710: INFO: Pod "pod-configmaps-4634e73d-f655-4453-870a-95ec4b14f565": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006258523s May 15 21:27:31.714: INFO: Pod "pod-configmaps-4634e73d-f655-4453-870a-95ec4b14f565": Phase="Running", Reason="", readiness=true. Elapsed: 4.010477131s May 15 21:27:33.719: INFO: Pod "pod-configmaps-4634e73d-f655-4453-870a-95ec4b14f565": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015031853s STEP: Saw pod success May 15 21:27:33.719: INFO: Pod "pod-configmaps-4634e73d-f655-4453-870a-95ec4b14f565" satisfied condition "success or failure" May 15 21:27:33.721: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-4634e73d-f655-4453-870a-95ec4b14f565 container configmap-volume-test: STEP: delete the pod May 15 21:27:33.790: INFO: Waiting for pod pod-configmaps-4634e73d-f655-4453-870a-95ec4b14f565 to disappear May 15 21:27:33.793: INFO: Pod pod-configmaps-4634e73d-f655-4453-870a-95ec4b14f565 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:27:33.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7856" for this suite. 
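The ConfigMap spec above is the same idea as the projected variant earlier, except it uses a plain configMap volume source directly; the only structural difference is the VolumeSource. A trimmed sketch, with names, UID, and paths again illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func int64Ptr(i int64) *int64 { return &i }

func main() {
	// Direct ConfigMap volume (no Projected wrapper), with a key remapped.
	vol := corev1.Volume{
		Name: "configmap-volume",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume-map"},
				Items:                []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
			},
		},
	}
	// The pod-level security context supplies the non-root UID.
	sc := corev1.PodSecurityContext{RunAsUser: int64Ptr(1000)}
	fmt.Println(vol.Name, *sc.RunAsUser)
}
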
• [SLOW TEST:6.167 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":69,"skipped":1215,"failed":0} [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:27:33.801: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 15 21:27:33.845: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9959' May 15 21:27:34.171: INFO: stderr: "" May 15 21:27:34.171: INFO: stdout: "replicationcontroller/agnhost-master created\n" May 15 21:27:34.171: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9959' May 15 21:27:34.551: INFO: stderr: "" May 15 21:27:34.551: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. May 15 21:27:35.590: INFO: Selector matched 1 pods for map[app:agnhost] May 15 21:27:35.590: INFO: Found 0 / 1 May 15 21:27:36.554: INFO: Selector matched 1 pods for map[app:agnhost] May 15 21:27:36.554: INFO: Found 0 / 1 May 15 21:27:37.555: INFO: Selector matched 1 pods for map[app:agnhost] May 15 21:27:37.555: INFO: Found 0 / 1 May 15 21:27:38.555: INFO: Selector matched 1 pods for map[app:agnhost] May 15 21:27:38.555: INFO: Found 1 / 1 May 15 21:27:38.555: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 15 21:27:38.558: INFO: Selector matched 1 pods for map[app:agnhost] May 15 21:27:38.558: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
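Having found the pod, the spec now shells out to kubectl describe for the pod, rc, service, node, and namespace (the commands are logged verbatim below) and asserts that the expected fields appear in the output. What those invocations boil down to, roughly; the binary path and kubeconfig mirror the log lines, and the assertion step is omitted:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Run kubectl with an explicit kubeconfig and namespace, capturing
	// combined stdout/stderr, much as the framework's kubectl helper does.
	cmd := exec.Command("/usr/local/bin/kubectl",
		"--kubeconfig=/root/.kube/config",
		"describe", "pod", "agnhost-master-n8jmb",
		"--namespace=kubectl-9959")
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Println("kubectl failed:", err)
	}
	fmt.Print(string(out))
}
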
May 15 21:27:38.558: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-n8jmb --namespace=kubectl-9959' May 15 21:27:38.672: INFO: stderr: "" May 15 21:27:38.672: INFO: stdout: "Name: agnhost-master-n8jmb\nNamespace: kubectl-9959\nPriority: 0\nNode: jerma-worker/172.17.0.10\nStart Time: Fri, 15 May 2020 21:27:34 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.26\nIPs:\n IP: 10.244.1.26\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://bb2665f8d6211ab685965ccbeec87586b9a084a238012530a44d77e0feb29655\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Image ID: gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Fri, 15 May 2020 21:27:36 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-7khhd (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-7khhd:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-7khhd\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled default-scheduler Successfully assigned kubectl-9959/agnhost-master-n8jmb to jerma-worker\n Normal Pulled 3s kubelet, jerma-worker Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n Normal Created 2s kubelet, jerma-worker Created container agnhost-master\n Normal Started 2s kubelet, jerma-worker Started container agnhost-master\n" May 15 21:27:38.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-9959' May 15 21:27:38.779: INFO: stderr: "" May 15 21:27:38.779: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-9959\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s replication-controller Created pod: agnhost-master-n8jmb\n" May 15 21:27:38.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-9959' May 15 21:27:38.910: INFO: stderr: "" May 15 21:27:38.911: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-9959\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.97.238.41\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.1.26:6379\nSession Affinity: None\nEvents: \n" May 15 21:27:38.914: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-control-plane' May 15 21:27:39.045: INFO: stderr: "" May 15 21:27:39.045: INFO: stdout: "Name: jerma-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n 
beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=jerma-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:25:55 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: jerma-control-plane\n AcquireTime: \n RenewTime: Fri, 15 May 2020 21:27:31 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Fri, 15 May 2020 21:26:45 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Fri, 15 May 2020 21:26:45 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Fri, 15 May 2020 21:26:45 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Fri, 15 May 2020 21:26:45 +0000 Sun, 15 Mar 2020 18:26:27 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.9\n Hostname: jerma-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 3bcfb16fe77247d3af07bed975350d5c\n System UUID: 947a2db5-5527-4203-8af5-13d97ffe8a80\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2-31-gaa877d78\n Kubelet Version: v1.17.2\n Kube-Proxy Version: v1.17.2\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-6955765f44-rll5s 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 61d\n kube-system coredns-6955765f44-svxk5 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 61d\n kube-system etcd-jerma-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 61d\n kube-system kindnet-bjddj 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 61d\n kube-system kube-apiserver-jerma-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 61d\n kube-system kube-controller-manager-jerma-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 61d\n kube-system kube-proxy-mm9zd 0 (0%) 0 (0%) 0 (0%) 0 (0%) 61d\n kube-system kube-scheduler-jerma-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 61d\n local-path-storage local-path-provisioner-85445b74d4-7mg5w 0 (0%) 0 (0%) 0 (0%) 0 (0%) 61d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" May 15 21:27:39.045: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-9959' May 15 21:27:39.256: INFO: stderr: "" May 15 21:27:39.256: INFO: stdout: "Name: kubectl-9959\nLabels: e2e-framework=kubectl\n e2e-run=ae7102aa-42e6-4bc2-9215-47ea6105d699\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo 
LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:27:39.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9959" for this suite. • [SLOW TEST:5.546 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1047 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":278,"completed":70,"skipped":1215,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:27:39.347: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 15 21:27:39.460: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9f052019-91ee-420a-a55f-068a5a6becfd" in namespace "downward-api-6410" to be "success or failure" May 15 21:27:39.466: INFO: Pod "downwardapi-volume-9f052019-91ee-420a-a55f-068a5a6becfd": Phase="Pending", Reason="", readiness=false. Elapsed: 5.684631ms May 15 21:27:41.526: INFO: Pod "downwardapi-volume-9f052019-91ee-420a-a55f-068a5a6becfd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065155054s May 15 21:27:43.538: INFO: Pod "downwardapi-volume-9f052019-91ee-420a-a55f-068a5a6becfd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.077509152s STEP: Saw pod success May 15 21:27:43.538: INFO: Pod "downwardapi-volume-9f052019-91ee-420a-a55f-068a5a6becfd" satisfied condition "success or failure" May 15 21:27:43.541: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-9f052019-91ee-420a-a55f-068a5a6becfd container client-container: STEP: delete the pod May 15 21:27:43.587: INFO: Waiting for pod downwardapi-volume-9f052019-91ee-420a-a55f-068a5a6becfd to disappear May 15 21:27:43.628: INFO: Pod downwardapi-volume-9f052019-91ee-420a-a55f-068a5a6becfd no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:27:43.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6410" for this suite. 
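The downward-API spec above exposes the container's own memory limit as a file in a downward API volume; the file content comes from a resourceFieldRef rather than from pod metadata. A sketch of the wiring (the pod name, 64Mi limit, and test image are illustrative assumptions):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "memory_limit",
							// The file receives the named container's memory limit.
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.memory",
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "client-container",
				Image: "gcr.io/kubernetes-e2e-test-images/mounttest:1.0", // assumed test image
				Args:  []string{"--file_content=/etc/podinfo/memory_limit"},
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{corev1.ResourceMemory: resource.MustParse("64Mi")},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	fmt.Println(pod.Name)
}
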
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":71,"skipped":1251,"failed":0} SSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:27:43.642: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-589e4f55-34d3-4ae7-8fe3-6471644e0fc2 STEP: Creating a pod to test consume secrets May 15 21:27:43.725: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ca970e5f-3e45-410e-9691-05b326f4bbd9" in namespace "projected-5135" to be "success or failure" May 15 21:27:43.730: INFO: Pod "pod-projected-secrets-ca970e5f-3e45-410e-9691-05b326f4bbd9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.28439ms May 15 21:27:45.736: INFO: Pod "pod-projected-secrets-ca970e5f-3e45-410e-9691-05b326f4bbd9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010624335s May 15 21:27:47.740: INFO: Pod "pod-projected-secrets-ca970e5f-3e45-410e-9691-05b326f4bbd9": Phase="Running", Reason="", readiness=true. Elapsed: 4.01441479s May 15 21:27:49.743: INFO: Pod "pod-projected-secrets-ca970e5f-3e45-410e-9691-05b326f4bbd9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.017351482s STEP: Saw pod success May 15 21:27:49.743: INFO: Pod "pod-projected-secrets-ca970e5f-3e45-410e-9691-05b326f4bbd9" satisfied condition "success or failure" May 15 21:27:49.745: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-ca970e5f-3e45-410e-9691-05b326f4bbd9 container projected-secret-volume-test: STEP: delete the pod May 15 21:27:49.783: INFO: Waiting for pod pod-projected-secrets-ca970e5f-3e45-410e-9691-05b326f4bbd9 to disappear May 15 21:27:49.805: INFO: Pod pod-projected-secrets-ca970e5f-3e45-410e-9691-05b326f4bbd9 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:27:49.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5135" for this suite. 
• [SLOW TEST:6.171 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":72,"skipped":1256,"failed":0} SS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:27:49.813: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4842.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-4842.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4842.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4842.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-4842.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-4842.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-4842.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-4842.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4842.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4842.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-4842.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4842.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-4842.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-4842.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-4842.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-4842.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-4842.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-4842.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 15 21:27:56.164: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4842.svc.cluster.local from pod dns-4842/dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec: the server could not find the requested resource (get pods dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec) May 15 21:27:56.168: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4842.svc.cluster.local from pod dns-4842/dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec: the server could not find the requested resource (get pods dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec) May 15 21:27:56.171: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4842.svc.cluster.local from pod dns-4842/dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec: the server could not find the requested resource (get pods dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec) May 15 21:27:56.174: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4842.svc.cluster.local from pod dns-4842/dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec: the server could not find the requested resource (get pods dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec) May 15 21:27:56.182: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4842.svc.cluster.local from pod dns-4842/dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec: the server could not find the requested resource (get pods dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec) May 15 21:27:56.185: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4842.svc.cluster.local from pod dns-4842/dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec: the server could not find the requested resource (get pods dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec) May 15 21:27:56.187: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4842.svc.cluster.local from pod dns-4842/dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec: the server could not find the requested resource (get pods dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec) May 15 21:27:56.190: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4842.svc.cluster.local from pod dns-4842/dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec: the server could not find the requested resource (get pods dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec) May 15 21:27:56.196: INFO: Lookups using dns-4842/dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4842.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4842.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4842.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4842.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4842.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4842.svc.cluster.local jessie_udp@dns-test-service-2.dns-4842.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4842.svc.cluster.local] May 15 21:28:01.199: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4842.svc.cluster.local from pod dns-4842/dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec: the server could not find the requested resource 
(get pods dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec) May 15 21:28:01.202: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4842.svc.cluster.local from pod dns-4842/dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec: the server could not find the requested resource (get pods dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec) May 15 21:28:01.204: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4842.svc.cluster.local from pod dns-4842/dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec: the server could not find the requested resource (get pods dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec) May 15 21:28:01.206: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4842.svc.cluster.local from pod dns-4842/dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec: the server could not find the requested resource (get pods dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec) May 15 21:28:01.214: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4842.svc.cluster.local from pod dns-4842/dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec: the server could not find the requested resource (get pods dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec) May 15 21:28:01.217: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4842.svc.cluster.local from pod dns-4842/dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec: the server could not find the requested resource (get pods dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec) May 15 21:28:01.219: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4842.svc.cluster.local from pod dns-4842/dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec: the server could not find the requested resource (get pods dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec) May 15 21:28:01.221: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4842.svc.cluster.local from pod dns-4842/dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec: the server could not find the requested resource (get pods dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec) May 15 21:28:01.225: INFO: Lookups using dns-4842/dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4842.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4842.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4842.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4842.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4842.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4842.svc.cluster.local jessie_udp@dns-test-service-2.dns-4842.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4842.svc.cluster.local] May 15 21:28:06.200: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4842.svc.cluster.local from pod dns-4842/dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec: the server could not find the requested resource (get pods dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec) May 15 21:28:06.204: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4842.svc.cluster.local from pod dns-4842/dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec: the server could not find the requested resource (get pods dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec) May 15 21:28:06.207: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4842.svc.cluster.local from pod dns-4842/dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec: the server could not find the requested resource (get pods dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec) May 15 21:28:06.210: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4842.svc.cluster.local from 
pod dns-4842/dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec: the server could not find the requested resource (get pods dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec) May 15 21:28:06.219: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4842.svc.cluster.local from pod dns-4842/dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec: the server could not find the requested resource (get pods dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec) May 15 21:28:06.222: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4842.svc.cluster.local from pod dns-4842/dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec: the server could not find the requested resource (get pods dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec) May 15 21:28:06.225: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4842.svc.cluster.local from pod dns-4842/dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec: the server could not find the requested resource (get pods dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec) May 15 21:28:06.227: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4842.svc.cluster.local from pod dns-4842/dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec: the server could not find the requested resource (get pods dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec) May 15 21:28:06.233: INFO: Lookups using dns-4842/dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4842.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4842.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4842.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4842.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4842.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4842.svc.cluster.local jessie_udp@dns-test-service-2.dns-4842.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4842.svc.cluster.local] May 15 21:28:11.201: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4842.svc.cluster.local from pod dns-4842/dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec: the server could not find the requested resource (get pods dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec) May 15 21:28:11.205: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4842.svc.cluster.local from pod dns-4842/dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec: the server could not find the requested resource (get pods dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec) May 15 21:28:11.207: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4842.svc.cluster.local from pod dns-4842/dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec: the server could not find the requested resource (get pods dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec) May 15 21:28:11.211: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4842.svc.cluster.local from pod dns-4842/dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec: the server could not find the requested resource (get pods dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec) May 15 21:28:11.219: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4842.svc.cluster.local from pod dns-4842/dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec: the server could not find the requested resource (get pods dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec) May 15 21:28:11.221: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4842.svc.cluster.local from pod dns-4842/dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec: the server could not find the requested resource (get pods 
dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec) May 15 21:28:11.224: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4842.svc.cluster.local from pod dns-4842/dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec: the server could not find the requested resource (get pods dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec) May 15 21:28:11.226: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4842.svc.cluster.local from pod dns-4842/dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec: the server could not find the requested resource (get pods dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec) May 15 21:28:11.230: INFO: Lookups using dns-4842/dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4842.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4842.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4842.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4842.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4842.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4842.svc.cluster.local jessie_udp@dns-test-service-2.dns-4842.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4842.svc.cluster.local] May 15 21:28:16.200: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4842.svc.cluster.local from pod dns-4842/dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec: the server could not find the requested resource (get pods dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec) May 15 21:28:16.203: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4842.svc.cluster.local from pod dns-4842/dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec: the server could not find the requested resource (get pods dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec) May 15 21:28:16.207: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4842.svc.cluster.local from pod dns-4842/dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec: the server could not find the requested resource (get pods dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec) May 15 21:28:16.209: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4842.svc.cluster.local from pod dns-4842/dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec: the server could not find the requested resource (get pods dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec) May 15 21:28:16.217: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4842.svc.cluster.local from pod dns-4842/dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec: the server could not find the requested resource (get pods dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec) May 15 21:28:16.219: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4842.svc.cluster.local from pod dns-4842/dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec: the server could not find the requested resource (get pods dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec) May 15 21:28:16.222: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4842.svc.cluster.local from pod dns-4842/dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec: the server could not find the requested resource (get pods dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec) May 15 21:28:16.225: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4842.svc.cluster.local from pod dns-4842/dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec: the server could not find the requested resource (get pods dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec) May 15 21:28:16.231: INFO: Lookups using dns-4842/dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec failed for: 
[wheezy_udp@dns-querier-2.dns-test-service-2.dns-4842.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4842.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4842.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4842.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4842.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4842.svc.cluster.local jessie_udp@dns-test-service-2.dns-4842.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4842.svc.cluster.local] May 15 21:28:21.201: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4842.svc.cluster.local from pod dns-4842/dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec: the server could not find the requested resource (get pods dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec) May 15 21:28:21.205: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4842.svc.cluster.local from pod dns-4842/dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec: the server could not find the requested resource (get pods dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec) May 15 21:28:21.208: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4842.svc.cluster.local from pod dns-4842/dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec: the server could not find the requested resource (get pods dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec) May 15 21:28:21.212: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4842.svc.cluster.local from pod dns-4842/dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec: the server could not find the requested resource (get pods dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec) May 15 21:28:21.220: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4842.svc.cluster.local from pod dns-4842/dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec: the server could not find the requested resource (get pods dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec) May 15 21:28:21.223: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4842.svc.cluster.local from pod dns-4842/dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec: the server could not find the requested resource (get pods dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec) May 15 21:28:21.225: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4842.svc.cluster.local from pod dns-4842/dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec: the server could not find the requested resource (get pods dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec) May 15 21:28:21.228: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4842.svc.cluster.local from pod dns-4842/dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec: the server could not find the requested resource (get pods dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec) May 15 21:28:21.233: INFO: Lookups using dns-4842/dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4842.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4842.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4842.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4842.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4842.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4842.svc.cluster.local jessie_udp@dns-test-service-2.dns-4842.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4842.svc.cluster.local] May 15 21:28:26.231: INFO: DNS probes using dns-4842/dns-test-cf1d5e9f-28d4-481f-90e7-533a12c41dec succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:28:26.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4842" for this suite. • [SLOW TEST:37.263 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":278,"completed":73,"skipped":1258,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:28:27.077: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 15 21:28:28.068: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 15 21:28:30.077: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725174908, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725174908, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725174908, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725174907, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 15 21:28:33.107: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook 
[Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:28:33.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9448" for this suite. STEP: Destroying namespace "webhook-9448-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.217 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":278,"completed":74,"skipped":1268,"failed":0} SSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:28:33.295: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-46b18657-29fe-4afc-b68e-c0fac445d13d STEP: Creating a pod to test consume secrets May 15 21:28:33.391: INFO: Waiting up to 5m0s for pod "pod-secrets-51642b72-5bdf-4c46-8d62-ed93732f8627" in namespace "secrets-2694" to be "success or failure" May 15 21:28:33.408: INFO: Pod "pod-secrets-51642b72-5bdf-4c46-8d62-ed93732f8627": Phase="Pending", Reason="", readiness=false. Elapsed: 16.471741ms May 15 21:28:35.412: INFO: Pod "pod-secrets-51642b72-5bdf-4c46-8d62-ed93732f8627": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021233686s May 15 21:28:37.462: INFO: Pod "pod-secrets-51642b72-5bdf-4c46-8d62-ed93732f8627": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.070767897s STEP: Saw pod success May 15 21:28:37.462: INFO: Pod "pod-secrets-51642b72-5bdf-4c46-8d62-ed93732f8627" satisfied condition "success or failure" May 15 21:28:37.464: INFO: Trying to get logs from node jerma-worker pod pod-secrets-51642b72-5bdf-4c46-8d62-ed93732f8627 container secret-volume-test: STEP: delete the pod May 15 21:28:37.689: INFO: Waiting for pod pod-secrets-51642b72-5bdf-4c46-8d62-ed93732f8627 to disappear May 15 21:28:37.749: INFO: Pod pod-secrets-51642b72-5bdf-4c46-8d62-ed93732f8627 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:28:37.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2694" for this suite. 
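------------------------------
For readers following along, the [sig-storage] Secrets test that just finished boils down to a pod spec that mounts a Secret volume with a key-to-path mapping and an explicit per-item file mode (the "Item Mode set" in the test name is the KeyToPath.Mode field). The Go sketch below is a minimal illustration against the k8s.io/api types the suite uses, not the framework's own helper: the object names, the agnhost "mounttest" arguments, and the 0400 mode are assumptions chosen to match the shape of the test, while the real suite generates UUID-suffixed names.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// secretMappedPod returns a pod of the shape the test exercises: a Secret
// mounted as a volume, with one key remapped to a new path and given an
// explicit 0400 file mode via KeyToPath.Mode.
func secretMappedPod() *corev1.Pod {
	mode := int32(0400)
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-example"}, // hypothetical name
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName: "secret-test-example", // hypothetical; the suite uses a UUID-suffixed name
						// Map the key "data-1" to a new file name with mode 0400.
						Items: []corev1.KeyToPath{{
							Key:  "data-1",
							Path: "new-path-data-1",
							Mode: &mode,
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "secret-volume-test",
				Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.8",
				// agnhost's mounttest subcommand prints the mounted file's
				// content and mode so the test can assert on pod logs.
				Args: []string{
					"mounttest",
					"--file_content=/etc/secret-volume/new-path-data-1",
					"--file_mode=/etc/secret-volume/new-path-data-1",
				},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-volume",
					MountPath: "/etc/secret-volume",
				}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
}

func main() {
	fmt.Println(secretMappedPod().Name)
}
------------------------------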
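------------------------------
Similarly, the AdmissionWebhook test a little earlier first updated a MutatingWebhookConfiguration's rules to drop the create operation, then patched them to add it back, checking each time whether a new configMap got mutated. A minimal sketch of that patch step, assuming client-go at the suite's v1.17 vintage (the typed Patch call did not yet take a context argument) and a hypothetical configuration name; the real test drives this through the e2e framework rather than a standalone program.

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from the same kubeconfig the suite uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// JSON-patch the first rule of the first webhook so its operations
	// list includes CREATE again. The configuration name is illustrative;
	// the test generates its own.
	patch := []byte(`[{"op":"replace","path":"/webhooks/0/rules/0/operations","value":["CREATE"]}]`)
	if _, err := client.AdmissionregistrationV1().
		MutatingWebhookConfigurations().
		Patch("e2e-test-mutating-webhook", types.JSONPatchType, patch); err != nil {
		panic(err)
	}
	fmt.Println("mutating webhook rules patched")
}

Because the patch path /webhooks/0/rules/0/operations targets the first rule of the first webhook, removing and re-adding CREATE there is what toggles whether configMap creations are mutated, which is exactly what the two "Creating a configMap" steps above verify.
------------------------------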
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":75,"skipped":1271,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:28:37.756: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8324 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8324;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8324 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8324;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8324.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8324.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8324.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8324.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8324.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-8324.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8324.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-8324.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8324.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-8324.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8324.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-8324.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8324.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 171.234.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.234.171_udp@PTR;check="$$(dig +tcp +noall +answer +search 171.234.105.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.105.234.171_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8324 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8324;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8324 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8324;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8324.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8324.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8324.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8324.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8324.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-8324.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8324.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-8324.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8324.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-8324.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8324.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-8324.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8324.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 171.234.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.234.171_udp@PTR;check="$$(dig +tcp +noall +answer +search 171.234.105.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.105.234.171_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 15 21:28:45.969: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:28:45.972: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:28:45.975: INFO: Unable to read wheezy_udp@dns-test-service.dns-8324 from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:28:45.978: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8324 from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:28:45.981: INFO: Unable to read wheezy_udp@dns-test-service.dns-8324.svc from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:28:45.984: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8324.svc from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:28:45.987: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8324.svc from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:28:45.989: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8324.svc from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:28:46.009: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:28:46.012: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:28:46.015: INFO: Unable to read jessie_udp@dns-test-service.dns-8324 from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:28:46.017: INFO: Unable to read jessie_tcp@dns-test-service.dns-8324 from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:28:46.020: INFO: Unable to read jessie_udp@dns-test-service.dns-8324.svc from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:28:46.023: INFO: Unable to read 
jessie_tcp@dns-test-service.dns-8324.svc from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:28:46.026: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8324.svc from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:28:46.029: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8324.svc from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:28:46.044: INFO: Lookups using dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8324 wheezy_tcp@dns-test-service.dns-8324 wheezy_udp@dns-test-service.dns-8324.svc wheezy_tcp@dns-test-service.dns-8324.svc wheezy_udp@_http._tcp.dns-test-service.dns-8324.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8324.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8324 jessie_tcp@dns-test-service.dns-8324 jessie_udp@dns-test-service.dns-8324.svc jessie_tcp@dns-test-service.dns-8324.svc jessie_udp@_http._tcp.dns-test-service.dns-8324.svc jessie_tcp@_http._tcp.dns-test-service.dns-8324.svc] May 15 21:28:51.049: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:28:51.053: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:28:51.056: INFO: Unable to read wheezy_udp@dns-test-service.dns-8324 from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:28:51.059: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8324 from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:28:51.063: INFO: Unable to read wheezy_udp@dns-test-service.dns-8324.svc from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:28:51.065: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8324.svc from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:28:51.067: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8324.svc from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:28:51.070: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8324.svc from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:28:51.086: INFO: Unable to read 
jessie_udp@dns-test-service from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:28:51.088: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:28:51.090: INFO: Unable to read jessie_udp@dns-test-service.dns-8324 from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:28:51.092: INFO: Unable to read jessie_tcp@dns-test-service.dns-8324 from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:28:51.094: INFO: Unable to read jessie_udp@dns-test-service.dns-8324.svc from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:28:51.095: INFO: Unable to read jessie_tcp@dns-test-service.dns-8324.svc from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:28:51.098: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8324.svc from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:28:51.100: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8324.svc from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:28:51.112: INFO: Lookups using dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8324 wheezy_tcp@dns-test-service.dns-8324 wheezy_udp@dns-test-service.dns-8324.svc wheezy_tcp@dns-test-service.dns-8324.svc wheezy_udp@_http._tcp.dns-test-service.dns-8324.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8324.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8324 jessie_tcp@dns-test-service.dns-8324 jessie_udp@dns-test-service.dns-8324.svc jessie_tcp@dns-test-service.dns-8324.svc jessie_udp@_http._tcp.dns-test-service.dns-8324.svc jessie_tcp@_http._tcp.dns-test-service.dns-8324.svc] May 15 21:28:56.050: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:28:56.054: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:28:56.057: INFO: Unable to read wheezy_udp@dns-test-service.dns-8324 from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:28:56.060: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8324 from pod 
dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:28:56.063: INFO: Unable to read wheezy_udp@dns-test-service.dns-8324.svc from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:28:56.066: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8324.svc from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:28:56.069: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8324.svc from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:28:56.072: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8324.svc from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:28:56.092: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:28:56.095: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:28:56.098: INFO: Unable to read jessie_udp@dns-test-service.dns-8324 from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:28:56.101: INFO: Unable to read jessie_tcp@dns-test-service.dns-8324 from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:28:56.104: INFO: Unable to read jessie_udp@dns-test-service.dns-8324.svc from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:28:56.107: INFO: Unable to read jessie_tcp@dns-test-service.dns-8324.svc from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:28:56.110: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8324.svc from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:28:56.112: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8324.svc from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:28:56.129: INFO: Lookups using dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8324 wheezy_tcp@dns-test-service.dns-8324 wheezy_udp@dns-test-service.dns-8324.svc wheezy_tcp@dns-test-service.dns-8324.svc 
wheezy_udp@_http._tcp.dns-test-service.dns-8324.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8324.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8324 jessie_tcp@dns-test-service.dns-8324 jessie_udp@dns-test-service.dns-8324.svc jessie_tcp@dns-test-service.dns-8324.svc jessie_udp@_http._tcp.dns-test-service.dns-8324.svc jessie_tcp@_http._tcp.dns-test-service.dns-8324.svc] May 15 21:29:01.050: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:29:01.052: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:29:01.055: INFO: Unable to read wheezy_udp@dns-test-service.dns-8324 from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:29:01.058: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8324 from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:29:01.062: INFO: Unable to read wheezy_udp@dns-test-service.dns-8324.svc from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:29:01.065: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8324.svc from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:29:01.068: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8324.svc from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:29:01.070: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8324.svc from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:29:01.089: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:29:01.091: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:29:01.093: INFO: Unable to read jessie_udp@dns-test-service.dns-8324 from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:29:01.096: INFO: Unable to read jessie_tcp@dns-test-service.dns-8324 from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:29:01.098: INFO: Unable to read jessie_udp@dns-test-service.dns-8324.svc from pod 
dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:29:01.102: INFO: Unable to read jessie_tcp@dns-test-service.dns-8324.svc from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:29:01.104: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8324.svc from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:29:01.107: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8324.svc from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:29:01.123: INFO: Lookups using dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8324 wheezy_tcp@dns-test-service.dns-8324 wheezy_udp@dns-test-service.dns-8324.svc wheezy_tcp@dns-test-service.dns-8324.svc wheezy_udp@_http._tcp.dns-test-service.dns-8324.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8324.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8324 jessie_tcp@dns-test-service.dns-8324 jessie_udp@dns-test-service.dns-8324.svc jessie_tcp@dns-test-service.dns-8324.svc jessie_udp@_http._tcp.dns-test-service.dns-8324.svc jessie_tcp@_http._tcp.dns-test-service.dns-8324.svc] May 15 21:29:06.096: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:29:06.100: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:29:06.104: INFO: Unable to read wheezy_udp@dns-test-service.dns-8324 from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:29:06.108: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8324 from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:29:06.111: INFO: Unable to read wheezy_udp@dns-test-service.dns-8324.svc from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:29:06.113: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8324.svc from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:29:06.115: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8324.svc from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:29:06.118: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8324.svc from pod 
dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:29:06.140: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:29:06.142: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:29:06.145: INFO: Unable to read jessie_udp@dns-test-service.dns-8324 from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:29:06.148: INFO: Unable to read jessie_tcp@dns-test-service.dns-8324 from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:29:06.151: INFO: Unable to read jessie_udp@dns-test-service.dns-8324.svc from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:29:06.154: INFO: Unable to read jessie_tcp@dns-test-service.dns-8324.svc from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:29:06.156: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8324.svc from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:29:06.159: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8324.svc from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:29:06.177: INFO: Lookups using dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8324 wheezy_tcp@dns-test-service.dns-8324 wheezy_udp@dns-test-service.dns-8324.svc wheezy_tcp@dns-test-service.dns-8324.svc wheezy_udp@_http._tcp.dns-test-service.dns-8324.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8324.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8324 jessie_tcp@dns-test-service.dns-8324 jessie_udp@dns-test-service.dns-8324.svc jessie_tcp@dns-test-service.dns-8324.svc jessie_udp@_http._tcp.dns-test-service.dns-8324.svc jessie_tcp@_http._tcp.dns-test-service.dns-8324.svc] May 15 21:29:11.050: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:29:11.053: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:29:11.056: INFO: Unable to read wheezy_udp@dns-test-service.dns-8324 from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the 
server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:29:11.059: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8324 from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:29:11.061: INFO: Unable to read wheezy_udp@dns-test-service.dns-8324.svc from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:29:11.064: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8324.svc from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:29:11.067: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8324.svc from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:29:11.071: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8324.svc from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:29:11.092: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:29:11.095: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:29:11.098: INFO: Unable to read jessie_udp@dns-test-service.dns-8324 from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:29:11.101: INFO: Unable to read jessie_tcp@dns-test-service.dns-8324 from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:29:11.104: INFO: Unable to read jessie_udp@dns-test-service.dns-8324.svc from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:29:11.107: INFO: Unable to read jessie_tcp@dns-test-service.dns-8324.svc from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:29:11.110: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8324.svc from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:29:11.113: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8324.svc from pod dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c: the server could not find the requested resource (get pods dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c) May 15 21:29:11.130: INFO: Lookups using dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c failed for: [wheezy_udp@dns-test-service 
wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8324 wheezy_tcp@dns-test-service.dns-8324 wheezy_udp@dns-test-service.dns-8324.svc wheezy_tcp@dns-test-service.dns-8324.svc wheezy_udp@_http._tcp.dns-test-service.dns-8324.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8324.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8324 jessie_tcp@dns-test-service.dns-8324 jessie_udp@dns-test-service.dns-8324.svc jessie_tcp@dns-test-service.dns-8324.svc jessie_udp@_http._tcp.dns-test-service.dns-8324.svc jessie_tcp@_http._tcp.dns-test-service.dns-8324.svc] May 15 21:29:16.189: INFO: DNS probes using dns-8324/dns-test-e6403b11-74f0-46d1-a3ab-bc89a57dba4c succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:29:17.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8324" for this suite. • [SLOW TEST:39.380 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":278,"completed":76,"skipped":1286,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:29:17.138: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1357 STEP: creating a pod May 15 21:29:17.283: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-5248 -- logs-generator --log-lines-total 100 --run-duration 20s' May 15 21:29:17.389: INFO: stderr: "" May 15 21:29:17.389: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Waiting for log generator to start. May 15 21:29:17.389: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] May 15 21:29:17.389: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-5248" to be "running and ready, or succeeded" May 15 21:29:17.431: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 41.434039ms May 15 21:29:19.435: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.045637054s May 15 21:29:21.451: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.06182796s May 15 21:29:21.451: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" May 15 21:29:21.451: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator] STEP: checking for matching strings May 15 21:29:21.451: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5248' May 15 21:29:21.573: INFO: stderr: "" May 15 21:29:21.573: INFO: stdout: "I0515 21:29:20.839819 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/default/pods/867 555\nI0515 21:29:21.039946 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/default/pods/d5b 343\nI0515 21:29:21.240002 1 logs_generator.go:76] 2 POST /api/v1/namespaces/default/pods/z4q 263\nI0515 21:29:21.440162 1 logs_generator.go:76] 3 GET /api/v1/namespaces/ns/pods/qp8l 351\n" May 15 21:29:23.573: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5248' May 15 21:29:23.679: INFO: stderr: "" May 15 21:29:23.679: INFO: stdout: "I0515 21:29:20.839819 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/default/pods/867 555\nI0515 21:29:21.039946 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/default/pods/d5b 343\nI0515 21:29:21.240002 1 logs_generator.go:76] 2 POST /api/v1/namespaces/default/pods/z4q 263\nI0515 21:29:21.440162 1 logs_generator.go:76] 3 GET /api/v1/namespaces/ns/pods/qp8l 351\nI0515 21:29:21.639998 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/default/pods/bgh 311\nI0515 21:29:21.840047 1 logs_generator.go:76] 5 GET /api/v1/namespaces/ns/pods/nlj 234\nI0515 21:29:22.039999 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/default/pods/wgq 283\nI0515 21:29:22.239986 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/default/pods/hvrp 474\nI0515 21:29:22.439992 1 logs_generator.go:76] 8 POST /api/v1/namespaces/default/pods/7q5b 369\nI0515 21:29:22.639991 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/ns/pods/55nd 283\nI0515 21:29:22.840010 1 logs_generator.go:76] 10 GET /api/v1/namespaces/default/pods/8bv 487\nI0515 21:29:23.039979 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/default/pods/2qx 534\nI0515 21:29:23.240059 1 logs_generator.go:76] 12 PUT /api/v1/namespaces/default/pods/vwc2 598\nI0515 21:29:23.440007 1 logs_generator.go:76] 13 POST /api/v1/namespaces/default/pods/pxc 275\nI0515 21:29:23.639963 1 logs_generator.go:76] 14 PUT /api/v1/namespaces/ns/pods/69f 286\n" May 15 21:29:25.679: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5248' May 15 21:29:25.794: INFO: stderr: "" May 15 21:29:25.794: INFO: stdout: "I0515 21:29:20.839819 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/default/pods/867 555\nI0515 21:29:21.039946 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/default/pods/d5b 343\nI0515 21:29:21.240002 1 logs_generator.go:76] 2 POST /api/v1/namespaces/default/pods/z4q 263\nI0515 21:29:21.440162 1 logs_generator.go:76] 3 GET /api/v1/namespaces/ns/pods/qp8l 351\nI0515 21:29:21.639998 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/default/pods/bgh 311\nI0515 21:29:21.840047 1 logs_generator.go:76] 5 GET /api/v1/namespaces/ns/pods/nlj 234\nI0515 21:29:22.039999 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/default/pods/wgq 283\nI0515 21:29:22.239986 1 logs_generator.go:76] 7 PUT 
/api/v1/namespaces/default/pods/hvrp 474\nI0515 21:29:22.439992 1 logs_generator.go:76] 8 POST /api/v1/namespaces/default/pods/7q5b 369\nI0515 21:29:22.639991 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/ns/pods/55nd 283\nI0515 21:29:22.840010 1 logs_generator.go:76] 10 GET /api/v1/namespaces/default/pods/8bv 487\nI0515 21:29:23.039979 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/default/pods/2qx 534\nI0515 21:29:23.240059 1 logs_generator.go:76] 12 PUT /api/v1/namespaces/default/pods/vwc2 598\nI0515 21:29:23.440007 1 logs_generator.go:76] 13 POST /api/v1/namespaces/default/pods/pxc 275\nI0515 21:29:23.639963 1 logs_generator.go:76] 14 PUT /api/v1/namespaces/ns/pods/69f 286\nI0515 21:29:23.840012 1 logs_generator.go:76] 15 POST /api/v1/namespaces/default/pods/zl7k 509\nI0515 21:29:24.040045 1 logs_generator.go:76] 16 PUT /api/v1/namespaces/kube-system/pods/8qnr 551\nI0515 21:29:24.240001 1 logs_generator.go:76] 17 GET /api/v1/namespaces/ns/pods/fk9 299\nI0515 21:29:24.439976 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/kube-system/pods/jl8n 288\nI0515 21:29:24.640023 1 logs_generator.go:76] 19 GET /api/v1/namespaces/default/pods/9g2r 366\nI0515 21:29:24.839985 1 logs_generator.go:76] 20 POST /api/v1/namespaces/kube-system/pods/hv5 498\nI0515 21:29:25.039930 1 logs_generator.go:76] 21 GET /api/v1/namespaces/default/pods/tzl 473\nI0515 21:29:25.239986 1 logs_generator.go:76] 22 GET /api/v1/namespaces/kube-system/pods/9jk2 529\nI0515 21:29:25.440044 1 logs_generator.go:76] 23 GET /api/v1/namespaces/default/pods/5k9 290\nI0515 21:29:25.639999 1 logs_generator.go:76] 24 PUT /api/v1/namespaces/ns/pods/5xbl 598\n" STEP: limiting log lines May 15 21:29:25.795: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5248 --tail=1' May 15 21:29:25.898: INFO: stderr: "" May 15 21:29:25.898: INFO: stdout: "I0515 21:29:25.839968 1 logs_generator.go:76] 25 PUT /api/v1/namespaces/ns/pods/2lk 588\n" May 15 21:29:25.898: INFO: got output "I0515 21:29:25.839968 1 logs_generator.go:76] 25 PUT /api/v1/namespaces/ns/pods/2lk 588\n" STEP: limiting log bytes May 15 21:29:25.899: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5248 --limit-bytes=1' May 15 21:29:26.046: INFO: stderr: "" May 15 21:29:26.046: INFO: stdout: "I" May 15 21:29:26.046: INFO: got output "I" STEP: exposing timestamps May 15 21:29:26.046: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5248 --tail=1 --timestamps' May 15 21:29:26.164: INFO: stderr: "" May 15 21:29:26.164: INFO: stdout: "2020-05-15T21:29:26.04018359Z I0515 21:29:26.039999 1 logs_generator.go:76] 26 GET /api/v1/namespaces/kube-system/pods/9xzg 536\n" May 15 21:29:26.164: INFO: got output "2020-05-15T21:29:26.04018359Z I0515 21:29:26.039999 1 logs_generator.go:76] 26 GET /api/v1/namespaces/kube-system/pods/9xzg 536\n" STEP: restricting to a time range May 15 21:29:28.664: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5248 --since=1s' May 15 21:29:28.776: INFO: stderr: "" May 15 21:29:28.776: INFO: stdout: "I0515 21:29:27.839986 1 logs_generator.go:76] 35 GET /api/v1/namespaces/kube-system/pods/8dk 329\nI0515 21:29:28.040046 1 logs_generator.go:76] 36 PUT /api/v1/namespaces/kube-system/pods/kw9 232\nI0515 21:29:28.240050 1 logs_generator.go:76] 37 POST 
/api/v1/namespaces/ns/pods/qbx 474\nI0515 21:29:28.440032 1 logs_generator.go:76] 38 PUT /api/v1/namespaces/ns/pods/rn8s 260\nI0515 21:29:28.639983 1 logs_generator.go:76] 39 POST /api/v1/namespaces/default/pods/svh 304\n" May 15 21:29:28.777: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5248 --since=24h' May 15 21:29:28.905: INFO: stderr: "" May 15 21:29:28.905: INFO: stdout: "I0515 21:29:20.839819 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/default/pods/867 555\nI0515 21:29:21.039946 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/default/pods/d5b 343\nI0515 21:29:21.240002 1 logs_generator.go:76] 2 POST /api/v1/namespaces/default/pods/z4q 263\nI0515 21:29:21.440162 1 logs_generator.go:76] 3 GET /api/v1/namespaces/ns/pods/qp8l 351\nI0515 21:29:21.639998 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/default/pods/bgh 311\nI0515 21:29:21.840047 1 logs_generator.go:76] 5 GET /api/v1/namespaces/ns/pods/nlj 234\nI0515 21:29:22.039999 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/default/pods/wgq 283\nI0515 21:29:22.239986 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/default/pods/hvrp 474\nI0515 21:29:22.439992 1 logs_generator.go:76] 8 POST /api/v1/namespaces/default/pods/7q5b 369\nI0515 21:29:22.639991 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/ns/pods/55nd 283\nI0515 21:29:22.840010 1 logs_generator.go:76] 10 GET /api/v1/namespaces/default/pods/8bv 487\nI0515 21:29:23.039979 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/default/pods/2qx 534\nI0515 21:29:23.240059 1 logs_generator.go:76] 12 PUT /api/v1/namespaces/default/pods/vwc2 598\nI0515 21:29:23.440007 1 logs_generator.go:76] 13 POST /api/v1/namespaces/default/pods/pxc 275\nI0515 21:29:23.639963 1 logs_generator.go:76] 14 PUT /api/v1/namespaces/ns/pods/69f 286\nI0515 21:29:23.840012 1 logs_generator.go:76] 15 POST /api/v1/namespaces/default/pods/zl7k 509\nI0515 21:29:24.040045 1 logs_generator.go:76] 16 PUT /api/v1/namespaces/kube-system/pods/8qnr 551\nI0515 21:29:24.240001 1 logs_generator.go:76] 17 GET /api/v1/namespaces/ns/pods/fk9 299\nI0515 21:29:24.439976 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/kube-system/pods/jl8n 288\nI0515 21:29:24.640023 1 logs_generator.go:76] 19 GET /api/v1/namespaces/default/pods/9g2r 366\nI0515 21:29:24.839985 1 logs_generator.go:76] 20 POST /api/v1/namespaces/kube-system/pods/hv5 498\nI0515 21:29:25.039930 1 logs_generator.go:76] 21 GET /api/v1/namespaces/default/pods/tzl 473\nI0515 21:29:25.239986 1 logs_generator.go:76] 22 GET /api/v1/namespaces/kube-system/pods/9jk2 529\nI0515 21:29:25.440044 1 logs_generator.go:76] 23 GET /api/v1/namespaces/default/pods/5k9 290\nI0515 21:29:25.639999 1 logs_generator.go:76] 24 PUT /api/v1/namespaces/ns/pods/5xbl 598\nI0515 21:29:25.839968 1 logs_generator.go:76] 25 PUT /api/v1/namespaces/ns/pods/2lk 588\nI0515 21:29:26.039999 1 logs_generator.go:76] 26 GET /api/v1/namespaces/kube-system/pods/9xzg 536\nI0515 21:29:26.239973 1 logs_generator.go:76] 27 POST /api/v1/namespaces/kube-system/pods/9t6 475\nI0515 21:29:26.439957 1 logs_generator.go:76] 28 POST /api/v1/namespaces/default/pods/6w52 322\nI0515 21:29:26.639986 1 logs_generator.go:76] 29 PUT /api/v1/namespaces/default/pods/l2gp 474\nI0515 21:29:26.839995 1 logs_generator.go:76] 30 POST /api/v1/namespaces/ns/pods/2ph 548\nI0515 21:29:27.040024 1 logs_generator.go:76] 31 POST /api/v1/namespaces/ns/pods/r6x 320\nI0515 21:29:27.240000 1 logs_generator.go:76] 32 GET /api/v1/namespaces/default/pods/qkr 
499\nI0515 21:29:27.439999 1 logs_generator.go:76] 33 GET /api/v1/namespaces/default/pods/r6f 484\nI0515 21:29:27.640007 1 logs_generator.go:76] 34 POST /api/v1/namespaces/ns/pods/lmgw 442\nI0515 21:29:27.839986 1 logs_generator.go:76] 35 GET /api/v1/namespaces/kube-system/pods/8dk 329\nI0515 21:29:28.040046 1 logs_generator.go:76] 36 PUT /api/v1/namespaces/kube-system/pods/kw9 232\nI0515 21:29:28.240050 1 logs_generator.go:76] 37 POST /api/v1/namespaces/ns/pods/qbx 474\nI0515 21:29:28.440032 1 logs_generator.go:76] 38 PUT /api/v1/namespaces/ns/pods/rn8s 260\nI0515 21:29:28.639983 1 logs_generator.go:76] 39 POST /api/v1/namespaces/default/pods/svh 304\nI0515 21:29:28.840025 1 logs_generator.go:76] 40 POST /api/v1/namespaces/ns/pods/h6h 369\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1363 May 15 21:29:28.906: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-5248' May 15 21:29:39.252: INFO: stderr: "" May 15 21:29:39.252: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:29:39.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5248" for this suite. • [SLOW TEST:22.135 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1353 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":278,"completed":77,"skipped":1321,"failed":0} S ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:29:39.274: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:29:39.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-116" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance]","total":278,"completed":78,"skipped":1322,"failed":0} ------------------------------ [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:29:39.395: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1626 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 15 21:29:39.450: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-2543' May 15 21:29:39.551: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 15 21:29:39.551: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the deployment e2e-test-httpd-deployment was created STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created [AfterEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1631 May 15 21:29:43.603: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-2543' May 15 21:29:43.710: INFO: stderr: "" May 15 21:29:43.710: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:29:43.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2543" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance]","total":278,"completed":79,"skipped":1322,"failed":0} SSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:29:43.716: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-75e25c79-0a74-4e19-8298-a77fbe18435f STEP: Creating a pod to test consume secrets May 15 21:29:43.779: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2be7cd3e-bd89-4fa7-a1c6-4864826e1243" in namespace "projected-8656" to be "success or failure" May 15 21:29:43.880: INFO: Pod "pod-projected-secrets-2be7cd3e-bd89-4fa7-a1c6-4864826e1243": Phase="Pending", Reason="", readiness=false. Elapsed: 100.835735ms May 15 21:29:46.000: INFO: Pod "pod-projected-secrets-2be7cd3e-bd89-4fa7-a1c6-4864826e1243": Phase="Pending", Reason="", readiness=false. Elapsed: 2.221232393s May 15 21:29:48.004: INFO: Pod "pod-projected-secrets-2be7cd3e-bd89-4fa7-a1c6-4864826e1243": Phase="Running", Reason="", readiness=true. Elapsed: 4.224965169s May 15 21:29:50.008: INFO: Pod "pod-projected-secrets-2be7cd3e-bd89-4fa7-a1c6-4864826e1243": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.229326215s STEP: Saw pod success May 15 21:29:50.008: INFO: Pod "pod-projected-secrets-2be7cd3e-bd89-4fa7-a1c6-4864826e1243" satisfied condition "success or failure" May 15 21:29:50.012: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-2be7cd3e-bd89-4fa7-a1c6-4864826e1243 container projected-secret-volume-test: STEP: delete the pod May 15 21:29:50.028: INFO: Waiting for pod pod-projected-secrets-2be7cd3e-bd89-4fa7-a1c6-4864826e1243 to disappear May 15 21:29:50.046: INFO: Pod pod-projected-secrets-2be7cd3e-bd89-4fa7-a1c6-4864826e1243 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:29:50.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8656" for this suite. 
• [SLOW TEST:6.338 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":80,"skipped":1326,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:29:50.054: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on node default medium May 15 21:29:50.160: INFO: Waiting up to 5m0s for pod "pod-e30bb24b-b897-4c9d-8d0e-59444d4e5a0c" in namespace "emptydir-889" to be "success or failure" May 15 21:29:50.164: INFO: Pod "pod-e30bb24b-b897-4c9d-8d0e-59444d4e5a0c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.931083ms May 15 21:29:52.192: INFO: Pod "pod-e30bb24b-b897-4c9d-8d0e-59444d4e5a0c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031948345s May 15 21:29:54.196: INFO: Pod "pod-e30bb24b-b897-4c9d-8d0e-59444d4e5a0c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036325495s STEP: Saw pod success May 15 21:29:54.196: INFO: Pod "pod-e30bb24b-b897-4c9d-8d0e-59444d4e5a0c" satisfied condition "success or failure" May 15 21:29:54.200: INFO: Trying to get logs from node jerma-worker2 pod pod-e30bb24b-b897-4c9d-8d0e-59444d4e5a0c container test-container: STEP: delete the pod May 15 21:29:54.256: INFO: Waiting for pod pod-e30bb24b-b897-4c9d-8d0e-59444d4e5a0c to disappear May 15 21:29:54.326: INFO: Pod pod-e30bb24b-b897-4c9d-8d0e-59444d4e5a0c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:29:54.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-889" for this suite. 
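The emptydir test above verifies permission bits on a volume backed by the node's default medium (disk; no medium is specified). A sketch that prints the mode of such a mount; the pod name and mount path are placeholders:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "stat -c '%a' /test-volume"]
    volumeMounts:
    - name: scratch
      mountPath: /test-volume
  volumes:
  - name: scratch
    emptyDir: {}          # default medium; medium: Memory would give tmpfs
EOF
kubectl logs emptydir-mode-demo   # prints the octal mode once the pod completes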
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":81,"skipped":1336,"failed":0} SSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:29:54.342: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-6b7d673f-b89b-4349-b0ec-3e7df9aa0c79 STEP: Creating a pod to test consume configMaps May 15 21:29:54.526: INFO: Waiting up to 5m0s for pod "pod-configmaps-71aa4849-6e3f-4770-b615-a30684893569" in namespace "configmap-6382" to be "success or failure" May 15 21:29:54.536: INFO: Pod "pod-configmaps-71aa4849-6e3f-4770-b615-a30684893569": Phase="Pending", Reason="", readiness=false. Elapsed: 9.77518ms May 15 21:29:56.540: INFO: Pod "pod-configmaps-71aa4849-6e3f-4770-b615-a30684893569": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014231188s May 15 21:29:58.544: INFO: Pod "pod-configmaps-71aa4849-6e3f-4770-b615-a30684893569": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017965195s STEP: Saw pod success May 15 21:29:58.544: INFO: Pod "pod-configmaps-71aa4849-6e3f-4770-b615-a30684893569" satisfied condition "success or failure" May 15 21:29:58.547: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-71aa4849-6e3f-4770-b615-a30684893569 container configmap-volume-test: STEP: delete the pod May 15 21:29:58.582: INFO: Waiting for pod pod-configmaps-71aa4849-6e3f-4770-b615-a30684893569 to disappear May 15 21:29:58.629: INFO: Pod pod-configmaps-71aa4849-6e3f-4770-b615-a30684893569 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:29:58.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6382" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":82,"skipped":1344,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:29:58.635: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating api versions May 15 21:29:58.705: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' May 15 21:29:58.935: INFO: stderr: "" May 15 21:29:58.935: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:29:58.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8962" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":278,"completed":83,"skipped":1373,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:29:58.941: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 15 21:30:27.065: INFO: Container started at 2020-05-15 21:30:02 +0000 UTC, pod became ready at 2020-05-15 21:30:26 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:30:27.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3103" for this suite. • [SLOW TEST:28.129 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":278,"completed":84,"skipped":1417,"failed":0} [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:30:27.071: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs May 15 21:30:27.161: INFO: Waiting up to 5m0s for pod "pod-9af3a5a2-f110-442a-b033-b186b97de83f" in namespace "emptydir-1158" to be "success or failure" May 15 21:30:27.178: INFO: Pod "pod-9af3a5a2-f110-442a-b033-b186b97de83f": Phase="Pending", Reason="", readiness=false. Elapsed: 16.471201ms May 15 21:30:29.182: INFO: Pod "pod-9af3a5a2-f110-442a-b033-b186b97de83f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.020990139s May 15 21:30:31.187: INFO: Pod "pod-9af3a5a2-f110-442a-b033-b186b97de83f": Phase="Running", Reason="", readiness=true. Elapsed: 4.02588313s May 15 21:30:33.191: INFO: Pod "pod-9af3a5a2-f110-442a-b033-b186b97de83f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.030249135s STEP: Saw pod success May 15 21:30:33.191: INFO: Pod "pod-9af3a5a2-f110-442a-b033-b186b97de83f" satisfied condition "success or failure" May 15 21:30:33.195: INFO: Trying to get logs from node jerma-worker pod pod-9af3a5a2-f110-442a-b033-b186b97de83f container test-container: STEP: delete the pod May 15 21:30:33.215: INFO: Waiting for pod pod-9af3a5a2-f110-442a-b033-b186b97de83f to disappear May 15 21:30:33.219: INFO: Pod pod-9af3a5a2-f110-442a-b033-b186b97de83f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:30:33.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1158" for this suite. • [SLOW TEST:6.155 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":85,"skipped":1417,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:30:33.226: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 15 21:30:33.630: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 15 21:30:35.724: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725175033, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725175033, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725175033, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725175033, 
loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 15 21:30:38.771: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook May 15 21:30:38.792: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:30:38.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6253" for this suite. STEP: Destroying namespace "webhook-6253-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.692 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":278,"completed":86,"skipped":1433,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:30:38.919: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1525 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 15 21:30:38.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-7412' May 15 21:30:39.065: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 15 21:30:39.065: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created STEP: confirm that you can get logs from an rc May 15 21:30:39.139: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-5vbh6] May 15 21:30:39.139: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-5vbh6" in namespace "kubectl-7412" to be "running and ready" May 15 21:30:39.148: INFO: Pod "e2e-test-httpd-rc-5vbh6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.819843ms May 15 21:30:41.152: INFO: Pod "e2e-test-httpd-rc-5vbh6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013075033s May 15 21:30:43.156: INFO: Pod "e2e-test-httpd-rc-5vbh6": Phase="Running", Reason="", readiness=true. Elapsed: 4.017155358s May 15 21:30:43.156: INFO: Pod "e2e-test-httpd-rc-5vbh6" satisfied condition "running and ready" May 15 21:30:43.156: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-5vbh6] May 15 21:30:43.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-7412' May 15 21:30:43.281: INFO: stderr: "" May 15 21:30:43.281: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.1.37. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.1.37. Set the 'ServerName' directive globally to suppress this message\n[Fri May 15 21:30:41.719209 2020] [mpm_event:notice] [pid 1:tid 140476609973096] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Fri May 15 21:30:41.719273 2020] [core:notice] [pid 1:tid 140476609973096] AH00094: Command line: 'httpd -D FOREGROUND'\n" [AfterEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1530 May 15 21:30:43.281: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-7412' May 15 21:30:43.387: INFO: stderr: "" May 15 21:30:43.387: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:30:43.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7412" for this suite. 
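Both this Kubectl run rc test and the earlier Kubectl run deployment test invoke generators that kubectl itself reports as deprecated (see the stderr lines above). The replacements the warnings point to, sketched with placeholder names:

# Instead of: kubectl run ... --generator=run/v1              (ReplicationController)
# and:        kubectl run ... --generator=deployment/apps.v1  (Deployment)
kubectl run my-pod --image=docker.io/library/httpd:2.4.38-alpine             # bare pod (run-pod/v1)
kubectl create deployment my-deployment --image=docker.io/library/httpd:2.4.38-alpine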
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance]","total":278,"completed":87,"skipped":1466,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:30:43.394: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0515 21:30:44.552671 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 15 21:30:44.552: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:30:44.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9357" for this suite. •{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":278,"completed":88,"skipped":1499,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:30:44.559: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:31:02.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-711" for this suite. • [SLOW TEST:17.488 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":278,"completed":89,"skipped":1505,"failed":0} SSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:31:02.047: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-d52e348d-6585-4151-831c-3785cb821073 STEP: Creating a pod to test consume secrets May 15 21:31:02.396: INFO: Waiting up to 5m0s for pod "pod-secrets-1c88ce79-0656-4967-9fb4-d55f4db2e6e8" in namespace "secrets-7159" to be "success or failure" May 15 21:31:02.522: INFO: Pod "pod-secrets-1c88ce79-0656-4967-9fb4-d55f4db2e6e8": Phase="Pending", Reason="", readiness=false. Elapsed: 126.433532ms May 15 21:31:04.667: INFO: Pod "pod-secrets-1c88ce79-0656-4967-9fb4-d55f4db2e6e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.270766167s May 15 21:31:06.671: INFO: Pod "pod-secrets-1c88ce79-0656-4967-9fb4-d55f4db2e6e8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.274831403s STEP: Saw pod success May 15 21:31:06.671: INFO: Pod "pod-secrets-1c88ce79-0656-4967-9fb4-d55f4db2e6e8" satisfied condition "success or failure" May 15 21:31:06.674: INFO: Trying to get logs from node jerma-worker pod pod-secrets-1c88ce79-0656-4967-9fb4-d55f4db2e6e8 container secret-env-test: STEP: delete the pod May 15 21:31:06.887: INFO: Waiting for pod pod-secrets-1c88ce79-0656-4967-9fb4-d55f4db2e6e8 to disappear May 15 21:31:06.995: INFO: Pod pod-secrets-1c88ce79-0656-4967-9fb4-d55f4db2e6e8 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:31:06.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7159" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":278,"completed":90,"skipped":1508,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:31:07.005: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-79d559e6-c837-469c-8410-aaf2b22fd687 STEP: Creating secret with name s-test-opt-upd-2ca74b0a-c5c8-4f34-82c5-6443b5fb4f35 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-79d559e6-c837-469c-8410-aaf2b22fd687 STEP: Updating secret s-test-opt-upd-2ca74b0a-c5c8-4f34-82c5-6443b5fb4f35 STEP: Creating secret with name s-test-opt-create-5de330c2-55f2-48e8-a715-a26a21d44e7e STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:32:47.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-637" for this suite. 
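The optional-updates test above hinges on secret volume sources marked optional: the pod starts even while a referenced secret is missing, and the kubelet later syncs creations, updates, and deletions into the mounted files. A sketch of the relevant stanza; all names are placeholders:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: optional-secret-demo
spec:
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: maybe-secret
      mountPath: /etc/secret
  volumes:
  - name: maybe-secret
    secret:
      secretName: may-not-exist-yet
      optional: true     # pod starts even if the secret is absent
EOF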
• [SLOW TEST:100.691 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":91,"skipped":1516,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:32:47.697: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 15 21:32:47.784: INFO: Waiting up to 5m0s for pod "downwardapi-volume-58d38c89-ec2e-4112-86ce-d7b4adeb53d7" in namespace "downward-api-3074" to be "success or failure" May 15 21:32:47.791: INFO: Pod "downwardapi-volume-58d38c89-ec2e-4112-86ce-d7b4adeb53d7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.783744ms May 15 21:32:50.141: INFO: Pod "downwardapi-volume-58d38c89-ec2e-4112-86ce-d7b4adeb53d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.357054301s May 15 21:32:52.145: INFO: Pod "downwardapi-volume-58d38c89-ec2e-4112-86ce-d7b4adeb53d7": Phase="Running", Reason="", readiness=true. Elapsed: 4.360465712s May 15 21:32:54.147: INFO: Pod "downwardapi-volume-58d38c89-ec2e-4112-86ce-d7b4adeb53d7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.363007822s STEP: Saw pod success May 15 21:32:54.147: INFO: Pod "downwardapi-volume-58d38c89-ec2e-4112-86ce-d7b4adeb53d7" satisfied condition "success or failure" May 15 21:32:54.149: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-58d38c89-ec2e-4112-86ce-d7b4adeb53d7 container client-container: STEP: delete the pod May 15 21:32:54.485: INFO: Waiting for pod downwardapi-volume-58d38c89-ec2e-4112-86ce-d7b4adeb53d7 to disappear May 15 21:32:54.498: INFO: Pod downwardapi-volume-58d38c89-ec2e-4112-86ce-d7b4adeb53d7 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:32:54.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3074" for this suite. 
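The downward-API volume test above publishes the container's own CPU request into a file through a resourceFieldRef item; for volume items the containerName is required. Sketch with placeholder names and an assumed 250m request:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: downward-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
EOF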
• [SLOW TEST:6.808 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":92,"skipped":1569,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:32:54.506: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 15 21:32:55.639: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 15 21:32:57.647: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725175175, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725175175, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725175175, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725175175, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 15 21:32:59.651: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725175175, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725175175, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725175175, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725175175, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 15 21:33:02.684: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:33:03.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5341" for this suite. STEP: Destroying namespace "webhook-5341-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.835 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":278,"completed":93,"skipped":1583,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:33:03.342: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:33:10.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7129" for this suite. • [SLOW TEST:7.193 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":278,"completed":94,"skipped":1595,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:33:10.535: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 15 21:33:10.619: INFO: Waiting up to 5m0s for pod "downward-api-82ab32b7-b119-4196-ae6e-30f2c6e75cf4" in namespace "downward-api-2517" to be "success or failure" May 15 21:33:10.622: INFO: Pod "downward-api-82ab32b7-b119-4196-ae6e-30f2c6e75cf4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.166803ms May 15 21:33:12.626: INFO: Pod "downward-api-82ab32b7-b119-4196-ae6e-30f2c6e75cf4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007033356s May 15 21:33:14.630: INFO: Pod "downward-api-82ab32b7-b119-4196-ae6e-30f2c6e75cf4": Phase="Running", Reason="", readiness=true. Elapsed: 4.011608632s May 15 21:33:16.634: INFO: Pod "downward-api-82ab32b7-b119-4196-ae6e-30f2c6e75cf4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015544299s STEP: Saw pod success May 15 21:33:16.634: INFO: Pod "downward-api-82ab32b7-b119-4196-ae6e-30f2c6e75cf4" satisfied condition "success or failure" May 15 21:33:16.637: INFO: Trying to get logs from node jerma-worker pod downward-api-82ab32b7-b119-4196-ae6e-30f2c6e75cf4 container dapi-container: STEP: delete the pod May 15 21:33:16.754: INFO: Waiting for pod downward-api-82ab32b7-b119-4196-ae6e-30f2c6e75cf4 to disappear May 15 21:33:16.774: INFO: Pod downward-api-82ab32b7-b119-4196-ae6e-30f2c6e75cf4 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:33:16.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2517" for this suite. 
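The env-var flavor of the downward API, exercised above, uses valueFrom.resourceFieldRef directly on an env entry; containerName may be omitted when the reference is to the same container. Sketch with placeholder names and an assumed 500m limit:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: downward-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env | grep CPU_LIMIT"]
    resources:
      limits:
        cpu: 500m
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
EOF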
• [SLOW TEST:6.259 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":278,"completed":95,"skipped":1609,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:33:16.794: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:33:21.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-32" for this suite. 
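
The adoption step above hinges on the ReplicationController's selector matching the pre-existing pod's 'name' label, so the controller takes ownership of the orphan (writing itself into the pod's ownerReferences) instead of creating a fresh replica. A hedged sketch of the matching pair; the label value mirrors the log, while the image is illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	labels := map[string]string{"name": "pod-adoption"}

	// Orphan pod created first, carrying the 'name' label.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption", Labels: labels},
		Spec: corev1.PodSpec{Containers: []corev1.Container{{
			Name:  "pod-adoption",
			Image: "httpd", // illustrative
		}}},
	}

	// RC whose selector matches the pod; on creation it adopts the
	// orphan rather than spinning up a second replica.
	rc := corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: int32Ptr(1),
			Selector: labels,
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec:       pod.Spec,
			},
		},
	}

	for _, obj := range []interface{}{pod, rc} {
		b, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(b))
	}
}
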
• [SLOW TEST:5.136 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":278,"completed":96,"skipped":1633,"failed":0} S ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:33:21.931: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-downwardapi-lcm2 STEP: Creating a pod to test atomic-volume-subpath May 15 21:33:22.036: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-lcm2" in namespace "subpath-7068" to be "success or failure" May 15 21:33:22.100: INFO: Pod "pod-subpath-test-downwardapi-lcm2": Phase="Pending", Reason="", readiness=false. Elapsed: 64.201666ms May 15 21:33:24.104: INFO: Pod "pod-subpath-test-downwardapi-lcm2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06869246s May 15 21:33:26.108: INFO: Pod "pod-subpath-test-downwardapi-lcm2": Phase="Running", Reason="", readiness=true. Elapsed: 4.072304491s May 15 21:33:28.116: INFO: Pod "pod-subpath-test-downwardapi-lcm2": Phase="Running", Reason="", readiness=true. Elapsed: 6.080839208s May 15 21:33:30.123: INFO: Pod "pod-subpath-test-downwardapi-lcm2": Phase="Running", Reason="", readiness=true. Elapsed: 8.087157744s May 15 21:33:32.135: INFO: Pod "pod-subpath-test-downwardapi-lcm2": Phase="Running", Reason="", readiness=true. Elapsed: 10.099277963s May 15 21:33:34.139: INFO: Pod "pod-subpath-test-downwardapi-lcm2": Phase="Running", Reason="", readiness=true. Elapsed: 12.103442683s May 15 21:33:36.143: INFO: Pod "pod-subpath-test-downwardapi-lcm2": Phase="Running", Reason="", readiness=true. Elapsed: 14.107805861s May 15 21:33:38.148: INFO: Pod "pod-subpath-test-downwardapi-lcm2": Phase="Running", Reason="", readiness=true. Elapsed: 16.112315407s May 15 21:33:40.152: INFO: Pod "pod-subpath-test-downwardapi-lcm2": Phase="Running", Reason="", readiness=true. Elapsed: 18.116296087s May 15 21:33:42.155: INFO: Pod "pod-subpath-test-downwardapi-lcm2": Phase="Running", Reason="", readiness=true. Elapsed: 20.119512003s May 15 21:33:44.159: INFO: Pod "pod-subpath-test-downwardapi-lcm2": Phase="Running", Reason="", readiness=true. Elapsed: 22.123457584s May 15 21:33:46.191: INFO: Pod "pod-subpath-test-downwardapi-lcm2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.15582655s STEP: Saw pod success May 15 21:33:46.191: INFO: Pod "pod-subpath-test-downwardapi-lcm2" satisfied condition "success or failure" May 15 21:33:46.214: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-downwardapi-lcm2 container test-container-subpath-downwardapi-lcm2: STEP: delete the pod May 15 21:33:46.317: INFO: Waiting for pod pod-subpath-test-downwardapi-lcm2 to disappear May 15 21:33:46.322: INFO: Pod pod-subpath-test-downwardapi-lcm2 no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-lcm2 May 15 21:33:46.322: INFO: Deleting pod "pod-subpath-test-downwardapi-lcm2" in namespace "subpath-7068" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:33:46.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7068" for this suite. • [SLOW TEST:24.400 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":278,"completed":97,"skipped":1634,"failed":0} SSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:33:46.331: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:33:50.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-1459" for this suite. 
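
The "should not conflict" check above boils down to mounting a secret-backed and a configMap-backed volume in the same pod: both go through the kubelet's atomic-writer wrapper, and the test asserts the two wrappers do not clobber each other. A minimal sketch of such a pod; volume names, mount paths, and the image are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "wrapper-volumes-demo"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{
				{Name: "secret-vol", VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{SecretName: "wrapped-secret"},
				}},
				{Name: "configmap-vol", VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "wrapped-configmap"},
					},
				}},
			},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox", // illustrative
				VolumeMounts: []corev1.VolumeMount{
					{Name: "secret-vol", MountPath: "/etc/secret-vol", ReadOnly: true},
					{Name: "configmap-vol", MountPath: "/etc/configmap-vol", ReadOnly: true},
				},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
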
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":278,"completed":98,"skipped":1640,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:33:50.588: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:33:54.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2870" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":99,"skipped":1648,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:33:54.829: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 
May 15 21:33:54.902: INFO: Created pod &Pod{ObjectMeta:{dns-6396 dns-6396 /api/v1/namespaces/dns-6396/pods/dns-6396 7b101f78-1b48-4d9c-9fc3-5fb3eea16d90 16471534 0 2020-05-15 21:33:54 +0000 UTC map[] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pvrnw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pvrnw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pvrnw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: Verifying customized DNS suffix list is configured on pod... 
May 15 21:33:58.910: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-6396 PodName:dns-6396 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 15 21:33:58.910: INFO: >>> kubeConfig: /root/.kube/config I0515 21:33:58.941407 6 log.go:172] (0xc002432370) (0xc001981f40) Create stream I0515 21:33:58.941438 6 log.go:172] (0xc002432370) (0xc001981f40) Stream added, broadcasting: 1 I0515 21:33:58.943551 6 log.go:172] (0xc002432370) Reply frame received for 1 I0515 21:33:58.943588 6 log.go:172] (0xc002432370) (0xc0016a1220) Create stream I0515 21:33:58.943596 6 log.go:172] (0xc002432370) (0xc0016a1220) Stream added, broadcasting: 3 I0515 21:33:58.944439 6 log.go:172] (0xc002432370) Reply frame received for 3 I0515 21:33:58.944460 6 log.go:172] (0xc002432370) (0xc0016a15e0) Create stream I0515 21:33:58.944469 6 log.go:172] (0xc002432370) (0xc0016a15e0) Stream added, broadcasting: 5 I0515 21:33:58.945405 6 log.go:172] (0xc002432370) Reply frame received for 5 I0515 21:33:59.034275 6 log.go:172] (0xc002432370) Data frame received for 3 I0515 21:33:59.034304 6 log.go:172] (0xc0016a1220) (3) Data frame handling I0515 21:33:59.034331 6 log.go:172] (0xc0016a1220) (3) Data frame sent I0515 21:33:59.035261 6 log.go:172] (0xc002432370) Data frame received for 3 I0515 21:33:59.035286 6 log.go:172] (0xc0016a1220) (3) Data frame handling I0515 21:33:59.035352 6 log.go:172] (0xc002432370) Data frame received for 5 I0515 21:33:59.035376 6 log.go:172] (0xc0016a15e0) (5) Data frame handling I0515 21:33:59.037038 6 log.go:172] (0xc002432370) Data frame received for 1 I0515 21:33:59.037055 6 log.go:172] (0xc001981f40) (1) Data frame handling I0515 21:33:59.037062 6 log.go:172] (0xc001981f40) (1) Data frame sent I0515 21:33:59.037070 6 log.go:172] (0xc002432370) (0xc001981f40) Stream removed, broadcasting: 1 I0515 21:33:59.037323 6 log.go:172] (0xc002432370) Go away received I0515 21:33:59.037388 6 log.go:172] (0xc002432370) (0xc001981f40) Stream removed, broadcasting: 1 I0515 21:33:59.037430 6 log.go:172] (0xc002432370) (0xc0016a1220) Stream removed, broadcasting: 3 I0515 21:33:59.037448 6 log.go:172] (0xc002432370) (0xc0016a15e0) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... 
May 15 21:33:59.037: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-6396 PodName:dns-6396 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 15 21:33:59.037: INFO: >>> kubeConfig: /root/.kube/config I0515 21:33:59.069552 6 log.go:172] (0xc0024329a0) (0xc001324280) Create stream I0515 21:33:59.069588 6 log.go:172] (0xc0024329a0) (0xc001324280) Stream added, broadcasting: 1 I0515 21:33:59.072032 6 log.go:172] (0xc0024329a0) Reply frame received for 1 I0515 21:33:59.072066 6 log.go:172] (0xc0024329a0) (0xc0013243c0) Create stream I0515 21:33:59.072083 6 log.go:172] (0xc0024329a0) (0xc0013243c0) Stream added, broadcasting: 3 I0515 21:33:59.072908 6 log.go:172] (0xc0024329a0) Reply frame received for 3 I0515 21:33:59.072939 6 log.go:172] (0xc0024329a0) (0xc0016a1900) Create stream I0515 21:33:59.072950 6 log.go:172] (0xc0024329a0) (0xc0016a1900) Stream added, broadcasting: 5 I0515 21:33:59.074073 6 log.go:172] (0xc0024329a0) Reply frame received for 5 I0515 21:33:59.145959 6 log.go:172] (0xc0024329a0) Data frame received for 3 I0515 21:33:59.146008 6 log.go:172] (0xc0013243c0) (3) Data frame handling I0515 21:33:59.146055 6 log.go:172] (0xc0013243c0) (3) Data frame sent I0515 21:33:59.147288 6 log.go:172] (0xc0024329a0) Data frame received for 5 I0515 21:33:59.147312 6 log.go:172] (0xc0016a1900) (5) Data frame handling I0515 21:33:59.147472 6 log.go:172] (0xc0024329a0) Data frame received for 3 I0515 21:33:59.147488 6 log.go:172] (0xc0013243c0) (3) Data frame handling I0515 21:33:59.149732 6 log.go:172] (0xc0024329a0) Data frame received for 1 I0515 21:33:59.149750 6 log.go:172] (0xc001324280) (1) Data frame handling I0515 21:33:59.149762 6 log.go:172] (0xc001324280) (1) Data frame sent I0515 21:33:59.149780 6 log.go:172] (0xc0024329a0) (0xc001324280) Stream removed, broadcasting: 1 I0515 21:33:59.149859 6 log.go:172] (0xc0024329a0) (0xc001324280) Stream removed, broadcasting: 1 I0515 21:33:59.149868 6 log.go:172] (0xc0024329a0) (0xc0013243c0) Stream removed, broadcasting: 3 I0515 21:33:59.149915 6 log.go:172] (0xc0024329a0) Go away received I0515 21:33:59.150035 6 log.go:172] (0xc0024329a0) (0xc0016a1900) Stream removed, broadcasting: 5 May 15 21:33:59.150: INFO: Deleting pod dns-6396... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:33:59.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6396" for this suite. 
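
The pod dump above is verbose, but the fields this DNS test actually exercises are dnsPolicy: None plus an explicit dnsConfig (nameserver 1.1.1.1 and search domain resolv.conf.local, as shown in the dump). Reduced to just those parts, the spec looks roughly like this Go sketch built from the k8s.io/api types:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "dns-6396"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "agnhost",
				Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.8",
				Args:  []string{"pause"},
			}},
			// DNSNone tells the kubelet to ignore cluster and node resolver
			// settings and use only what dnsConfig provides.
			DNSPolicy: corev1.DNSNone,
			DNSConfig: &corev1.PodDNSConfig{
				Nameservers: []string{"1.1.1.1"},
				Searches:    []string{"resolv.conf.local"},
			},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}

The two agnhost exec calls that follow (dns-suffix and dns-server-list) simply read the resulting /etc/resolv.conf inside the container and echo back the search list and nameservers for the test to compare.
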
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":278,"completed":100,"skipped":1702,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:33:59.237: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 15 21:34:00.394: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 15 21:34:02.403: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725175240, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725175240, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725175240, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725175240, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 15 21:34:05.442: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:34:05.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-898" for this suite. STEP: Destroying namespace "webhook-898-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.384 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":278,"completed":101,"skipped":1704,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:34:05.622: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0515 21:34:19.399913 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 15 21:34:19.399: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:34:19.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3624" for this suite. 
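
What the garbage collector test above verifies: pods created by simpletest-rc-to-be-deleted get a second ownerReference pointing at simpletest-rc-to-stay, so deleting the first RC with foreground propagation must leave those pods alive, since they still have a valid owner. The ownership metadata involved, sketched with placeholder UIDs:

package main

import (
	"encoding/json"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
)

func boolPtr(b bool) *bool { return &b }

func main() {
	// Two owners on one pod: the controller being deleted and a second,
	// still-valid owner. UIDs are placeholders.
	owners := []metav1.OwnerReference{
		{
			APIVersion:         "v1",
			Kind:               "ReplicationController",
			Name:               "simpletest-rc-to-be-deleted",
			UID:                types.UID("00000000-0000-0000-0000-000000000001"),
			Controller:         boolPtr(true),
			BlockOwnerDeletion: boolPtr(true),
		},
		{
			APIVersion: "v1",
			Kind:       "ReplicationController",
			Name:       "simpletest-rc-to-stay",
			UID:        types.UID("00000000-0000-0000-0000-000000000002"),
		},
	}

	// Foreground propagation blocks the owner's removal on its blocking
	// dependents; dependents that still have another owner stay put.
	propagation := metav1.DeletePropagationForeground
	opts := metav1.DeleteOptions{PropagationPolicy: &propagation}

	for _, v := range []interface{}{owners, opts} {
		b, _ := json.MarshalIndent(v, "", "  ")
		fmt.Println(string(b))
	}
}
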
• [SLOW TEST:14.053 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":278,"completed":102,"skipped":1715,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:34:19.676: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the initial replication controller May 15 21:34:19.836: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7381' May 15 21:34:22.743: INFO: stderr: "" May 15 21:34:22.744: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 15 21:34:22.744: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7381' May 15 21:34:22.884: INFO: stderr: "" May 15 21:34:22.884: INFO: stdout: "update-demo-nautilus-dncth update-demo-nautilus-nbptb " May 15 21:34:22.884: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dncth -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7381' May 15 21:34:22.983: INFO: stderr: "" May 15 21:34:22.983: INFO: stdout: "" May 15 21:34:22.984: INFO: update-demo-nautilus-dncth is created but not running May 15 21:34:27.984: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7381' May 15 21:34:28.282: INFO: stderr: "" May 15 21:34:28.282: INFO: stdout: "update-demo-nautilus-dncth update-demo-nautilus-nbptb " May 15 21:34:28.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dncth -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7381' May 15 21:34:28.370: INFO: stderr: "" May 15 21:34:28.370: INFO: stdout: "true" May 15 21:34:28.370: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dncth -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7381' May 15 21:34:28.462: INFO: stderr: "" May 15 21:34:28.462: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 15 21:34:28.462: INFO: validating pod update-demo-nautilus-dncth May 15 21:34:28.518: INFO: got data: { "image": "nautilus.jpg" } May 15 21:34:28.518: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 15 21:34:28.518: INFO: update-demo-nautilus-dncth is verified up and running May 15 21:34:28.518: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nbptb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7381' May 15 21:34:28.649: INFO: stderr: "" May 15 21:34:28.649: INFO: stdout: "" May 15 21:34:28.649: INFO: update-demo-nautilus-nbptb is created but not running May 15 21:34:33.649: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7381' May 15 21:34:33.748: INFO: stderr: "" May 15 21:34:33.748: INFO: stdout: "update-demo-nautilus-dncth update-demo-nautilus-nbptb " May 15 21:34:33.748: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dncth -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7381' May 15 21:34:33.835: INFO: stderr: "" May 15 21:34:33.835: INFO: stdout: "true" May 15 21:34:33.835: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dncth -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7381' May 15 21:34:33.933: INFO: stderr: "" May 15 21:34:33.933: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 15 21:34:33.933: INFO: validating pod update-demo-nautilus-dncth May 15 21:34:33.936: INFO: got data: { "image": "nautilus.jpg" } May 15 21:34:33.936: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 15 21:34:33.936: INFO: update-demo-nautilus-dncth is verified up and running May 15 21:34:33.936: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nbptb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7381' May 15 21:34:34.017: INFO: stderr: "" May 15 21:34:34.017: INFO: stdout: "true" May 15 21:34:34.017: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nbptb -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7381' May 15 21:34:34.099: INFO: stderr: "" May 15 21:34:34.099: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 15 21:34:34.099: INFO: validating pod update-demo-nautilus-nbptb May 15 21:34:34.134: INFO: got data: { "image": "nautilus.jpg" } May 15 21:34:34.134: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 15 21:34:34.134: INFO: update-demo-nautilus-nbptb is verified up and running STEP: rolling-update to new replication controller May 15 21:34:34.137: INFO: scanned /root for discovery docs: May 15 21:34:34.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-7381' May 15 21:34:56.802: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" May 15 21:34:56.802: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. May 15 21:34:56.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7381' May 15 21:34:56.921: INFO: stderr: "" May 15 21:34:56.921: INFO: stdout: "update-demo-kitten-hmvlq update-demo-kitten-jsm5b update-demo-nautilus-dncth " STEP: Replicas for name=update-demo: expected=2 actual=3 May 15 21:35:01.921: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7381' May 15 21:35:02.014: INFO: stderr: "" May 15 21:35:02.014: INFO: stdout: "update-demo-kitten-hmvlq update-demo-kitten-jsm5b " May 15 21:35:02.014: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-hmvlq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7381' May 15 21:35:02.100: INFO: stderr: "" May 15 21:35:02.100: INFO: stdout: "true" May 15 21:35:02.100: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-hmvlq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7381' May 15 21:35:02.202: INFO: stderr: "" May 15 21:35:02.202: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" May 15 21:35:02.202: INFO: validating pod update-demo-kitten-hmvlq May 15 21:35:02.213: INFO: got data: { "image": "kitten.jpg" } May 15 21:35:02.213: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . 
May 15 21:35:02.213: INFO: update-demo-kitten-hmvlq is verified up and running May 15 21:35:02.213: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-jsm5b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7381' May 15 21:35:02.314: INFO: stderr: "" May 15 21:35:02.314: INFO: stdout: "true" May 15 21:35:02.314: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-jsm5b -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7381' May 15 21:35:02.406: INFO: stderr: "" May 15 21:35:02.406: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" May 15 21:35:02.406: INFO: validating pod update-demo-kitten-jsm5b May 15 21:35:02.416: INFO: got data: { "image": "kitten.jpg" } May 15 21:35:02.416: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . May 15 21:35:02.416: INFO: update-demo-kitten-jsm5b is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:35:02.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7381" for this suite. • [SLOW TEST:42.748 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance]","total":278,"completed":103,"skipped":1736,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:35:02.424: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's command May 15 21:35:02.505: INFO: Waiting up to 5m0s for pod "var-expansion-1efc7760-55b8-4cdd-bba6-bc6a266d0a18" in namespace "var-expansion-6949" to be "success or failure" May 15 21:35:02.515: INFO: Pod "var-expansion-1efc7760-55b8-4cdd-bba6-bc6a266d0a18": Phase="Pending", Reason="", readiness=false. Elapsed: 10.507914ms May 15 21:35:04.588: INFO: Pod "var-expansion-1efc7760-55b8-4cdd-bba6-bc6a266d0a18": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.082940753s May 15 21:35:06.591: INFO: Pod "var-expansion-1efc7760-55b8-4cdd-bba6-bc6a266d0a18": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.086433757s STEP: Saw pod success May 15 21:35:06.591: INFO: Pod "var-expansion-1efc7760-55b8-4cdd-bba6-bc6a266d0a18" satisfied condition "success or failure" May 15 21:35:06.594: INFO: Trying to get logs from node jerma-worker pod var-expansion-1efc7760-55b8-4cdd-bba6-bc6a266d0a18 container dapi-container: STEP: delete the pod May 15 21:35:06.673: INFO: Waiting for pod var-expansion-1efc7760-55b8-4cdd-bba6-bc6a266d0a18 to disappear May 15 21:35:06.735: INFO: Pod var-expansion-1efc7760-55b8-4cdd-bba6-bc6a266d0a18 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:35:06.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6949" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":278,"completed":104,"skipped":1752,"failed":0} SSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:35:06.816: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 15 21:35:06.963: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:35:08.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-4655" for this suite. 
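
The "creating/deleting custom resource definition objects works" check is a simple round-trip on a CustomResourceDefinition. A minimal sketch of a structural CRD of the sort this test registers; the group, kind, and names are illustrative, not the randomized ones the suite generates:

package main

import (
	"encoding/json"
	"fmt"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	crd := apiextensionsv1.CustomResourceDefinition{
		// The object name must be <plural>.<group>.
		ObjectMeta: metav1.ObjectMeta{Name: "testcrds.example.com"},
		Spec: apiextensionsv1.CustomResourceDefinitionSpec{
			Group: "example.com",
			Scope: apiextensionsv1.NamespaceScoped,
			Names: apiextensionsv1.CustomResourceDefinitionNames{
				Plural:   "testcrds",
				Singular: "testcrd",
				Kind:     "TestCrd",
				ListKind: "TestCrdList",
			},
			Versions: []apiextensionsv1.CustomResourceDefinitionVersion{{
				Name:    "v1",
				Served:  true,
				Storage: true,
				Schema: &apiextensionsv1.CustomResourceValidation{
					OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{Type: "object"},
				},
			}},
		},
	}
	b, _ := json.MarshalIndent(crd, "", "  ")
	fmt.Println(string(b))
}
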
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":278,"completed":105,"skipped":1756,"failed":0} SSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:35:08.201: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-3116/configmap-test-a097b490-42d0-4e79-9fd5-4511e02aa462 STEP: Creating a pod to test consume configMaps May 15 21:35:08.348: INFO: Waiting up to 5m0s for pod "pod-configmaps-fb5f132b-6234-4960-ae2f-d9a4a35341d9" in namespace "configmap-3116" to be "success or failure" May 15 21:35:08.366: INFO: Pod "pod-configmaps-fb5f132b-6234-4960-ae2f-d9a4a35341d9": Phase="Pending", Reason="", readiness=false. Elapsed: 18.117261ms May 15 21:35:10.443: INFO: Pod "pod-configmaps-fb5f132b-6234-4960-ae2f-d9a4a35341d9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094451938s May 15 21:35:12.446: INFO: Pod "pod-configmaps-fb5f132b-6234-4960-ae2f-d9a4a35341d9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.097974545s STEP: Saw pod success May 15 21:35:12.446: INFO: Pod "pod-configmaps-fb5f132b-6234-4960-ae2f-d9a4a35341d9" satisfied condition "success or failure" May 15 21:35:12.449: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-fb5f132b-6234-4960-ae2f-d9a4a35341d9 container env-test: STEP: delete the pod May 15 21:35:12.496: INFO: Waiting for pod pod-configmaps-fb5f132b-6234-4960-ae2f-d9a4a35341d9 to disappear May 15 21:35:12.509: INFO: Pod pod-configmaps-fb5f132b-6234-4960-ae2f-d9a4a35341d9 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:35:12.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3116" for this suite. 
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":106,"skipped":1762,"failed":0} S ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:35:12.519: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod May 15 21:35:17.123: INFO: Successfully updated pod "annotationupdateee9f0b3c-9c63-4c0e-84db-3e7d84195588" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:35:19.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5061" for this suite. • [SLOW TEST:6.641 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":107,"skipped":1763,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:35:19.161: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 15 21:35:19.779: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 15 21:35:21.789: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63725175319, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725175319, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725175319, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725175319, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 15 21:35:24.832: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 15 21:35:24.836: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4138-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:35:26.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5693" for this suite. STEP: Destroying namespace "webhook-5693-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.016 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":278,"completed":108,"skipped":1788,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:35:26.178: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-c5ac9e57-8b7a-4344-99a1-43d05b5e635a STEP: Creating a pod to test consume secrets May 15 21:35:26.323: INFO: Waiting up to 5m0s for pod "pod-secrets-26f985b3-9c83-47ce-9731-f131224c7d85" in 
namespace "secrets-7628" to be "success or failure" May 15 21:35:26.339: INFO: Pod "pod-secrets-26f985b3-9c83-47ce-9731-f131224c7d85": Phase="Pending", Reason="", readiness=false. Elapsed: 15.74276ms May 15 21:35:28.343: INFO: Pod "pod-secrets-26f985b3-9c83-47ce-9731-f131224c7d85": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019855725s May 15 21:35:30.347: INFO: Pod "pod-secrets-26f985b3-9c83-47ce-9731-f131224c7d85": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024049545s STEP: Saw pod success May 15 21:35:30.347: INFO: Pod "pod-secrets-26f985b3-9c83-47ce-9731-f131224c7d85" satisfied condition "success or failure" May 15 21:35:30.350: INFO: Trying to get logs from node jerma-worker pod pod-secrets-26f985b3-9c83-47ce-9731-f131224c7d85 container secret-volume-test: STEP: delete the pod May 15 21:35:30.491: INFO: Waiting for pod pod-secrets-26f985b3-9c83-47ce-9731-f131224c7d85 to disappear May 15 21:35:30.540: INFO: Pod pod-secrets-26f985b3-9c83-47ce-9731-f131224c7d85 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:35:30.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7628" for this suite. STEP: Destroying namespace "secret-namespace-8368" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":278,"completed":109,"skipped":1801,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:35:30.554: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 15 21:35:30.669: INFO: Waiting up to 5m0s for pod "downwardapi-volume-35dd59b8-f6c1-4add-9475-13cd0ba03851" in namespace "projected-9391" to be "success or failure" May 15 21:35:30.689: INFO: Pod "downwardapi-volume-35dd59b8-f6c1-4add-9475-13cd0ba03851": Phase="Pending", Reason="", readiness=false. Elapsed: 20.067213ms May 15 21:35:32.977: INFO: Pod "downwardapi-volume-35dd59b8-f6c1-4add-9475-13cd0ba03851": Phase="Pending", Reason="", readiness=false. Elapsed: 2.307849135s May 15 21:35:34.980: INFO: Pod "downwardapi-volume-35dd59b8-f6c1-4add-9475-13cd0ba03851": Phase="Running", Reason="", readiness=true. Elapsed: 4.310910408s May 15 21:35:36.983: INFO: Pod "downwardapi-volume-35dd59b8-f6c1-4add-9475-13cd0ba03851": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.314365661s STEP: Saw pod success May 15 21:35:36.984: INFO: Pod "downwardapi-volume-35dd59b8-f6c1-4add-9475-13cd0ba03851" satisfied condition "success or failure" May 15 21:35:36.986: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-35dd59b8-f6c1-4add-9475-13cd0ba03851 container client-container: STEP: delete the pod May 15 21:35:37.039: INFO: Waiting for pod downwardapi-volume-35dd59b8-f6c1-4add-9475-13cd0ba03851 to disappear May 15 21:35:37.055: INFO: Pod downwardapi-volume-35dd59b8-f6c1-4add-9475-13cd0ba03851 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:35:37.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9391" for this suite. • [SLOW TEST:6.508 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":110,"skipped":1814,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:35:37.062: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-63e4d97f-99b3-40fa-a56e-1cd25200737c in namespace container-probe-1293 May 15 21:35:41.339: INFO: Started pod liveness-63e4d97f-99b3-40fa-a56e-1cd25200737c in namespace container-probe-1293 STEP: checking the pod's current state and verifying that restartCount is present May 15 21:35:41.342: INFO: Initial restart count of pod liveness-63e4d97f-99b3-40fa-a56e-1cd25200737c is 0 May 15 21:35:55.373: INFO: Restart count of pod container-probe-1293/liveness-63e4d97f-99b3-40fa-a56e-1cd25200737c is now 1 (14.030826671s elapsed) May 15 21:36:15.422: INFO: Restart count of pod container-probe-1293/liveness-63e4d97f-99b3-40fa-a56e-1cd25200737c is now 2 (34.079428039s elapsed) May 15 21:36:35.482: INFO: Restart count of pod container-probe-1293/liveness-63e4d97f-99b3-40fa-a56e-1cd25200737c is now 3 (54.139935391s elapsed) May 15 21:36:55.602: INFO: Restart count of pod container-probe-1293/liveness-63e4d97f-99b3-40fa-a56e-1cd25200737c is now 4 (1m14.259824707s elapsed) May 15 21:37:55.815: INFO: Restart count of pod container-probe-1293/liveness-63e4d97f-99b3-40fa-a56e-1cd25200737c is now 5 (2m14.472550463s elapsed) STEP: deleting the pod [AfterEach] 
[k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:37:55.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1293" for this suite. • [SLOW TEST:138.774 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":278,"completed":111,"skipped":1829,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:37:55.837: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override arguments May 15 21:37:55.915: INFO: Waiting up to 5m0s for pod "client-containers-476c5060-b1b9-4483-8173-d6067906adef" in namespace "containers-1138" to be "success or failure" May 15 21:37:55.944: INFO: Pod "client-containers-476c5060-b1b9-4483-8173-d6067906adef": Phase="Pending", Reason="", readiness=false. Elapsed: 28.634832ms May 15 21:37:57.948: INFO: Pod "client-containers-476c5060-b1b9-4483-8173-d6067906adef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033070597s May 15 21:37:59.952: INFO: Pod "client-containers-476c5060-b1b9-4483-8173-d6067906adef": Phase="Running", Reason="", readiness=true. Elapsed: 4.036515217s May 15 21:38:01.955: INFO: Pod "client-containers-476c5060-b1b9-4483-8173-d6067906adef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.040066539s STEP: Saw pod success May 15 21:38:01.956: INFO: Pod "client-containers-476c5060-b1b9-4483-8173-d6067906adef" satisfied condition "success or failure" May 15 21:38:01.958: INFO: Trying to get logs from node jerma-worker2 pod client-containers-476c5060-b1b9-4483-8173-d6067906adef container test-container: STEP: delete the pod May 15 21:38:01.999: INFO: Waiting for pod client-containers-476c5060-b1b9-4483-8173-d6067906adef to disappear May 15 21:38:02.004: INFO: Pod client-containers-476c5060-b1b9-4483-8173-d6067906adef no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:38:02.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-1138" for this suite. 
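[Annotation] The override-arguments pod built above amounts to roughly the following manifest: the container's args field replaces the image's CMD (the docker "cmd") while the ENTRYPOINT is kept. This is a minimal sketch; the pod name, container name, and image are illustrative, not the exact spec the suite generates.

apiVersion: v1
kind: Pod
metadata:
  name: client-containers-demo     # illustrative; the suite appends a uid
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29            # any image with a default CMD works
    # args replaces the image CMD; with no command set, the ENTRYPOINT is kept
    args: ["echo", "overridden", "arguments"]

The test then reads the container's log (as in the "Trying to get logs" line above) and checks that the overridden arguments, not the image defaults, were executed.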
• [SLOW TEST:6.175 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":278,"completed":112,"skipped":1895,"failed":0} S ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:38:02.012: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:38:34.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8089" for this suite. 
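[Annotation] The three containers named in the STEPs above (terminate-cmd-rpa, -rpof, -rpn) differ only in restart policy: Always, OnFailure, Never. A minimal sketch of the OnFailure variant, with illustrative names, image, and command:

apiVersion: v1
kind: Pod
metadata:
  name: terminate-cmd-rpof-demo
spec:
  restartPolicy: OnFailure
  containers:
  - name: terminate-cmd-rpof
    image: busybox:1.29
    # a non-zero exit under OnFailure makes the kubelet restart the container,
    # so status.containerStatuses[0].restartCount rises; the test asserts the
    # expected RestartCount, Phase, Ready condition, and State for each policy
    command: ["sh", "-c", "exit 1"]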
• [SLOW TEST:32.902 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":278,"completed":113,"skipped":1896,"failed":0} SSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:38:34.915: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 15 21:38:35.010: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1326e300-2ef2-4a41-92b1-e1359c88c249" in namespace "projected-8699" to be "success or failure" May 15 21:38:35.012: INFO: Pod "downwardapi-volume-1326e300-2ef2-4a41-92b1-e1359c88c249": Phase="Pending", Reason="", readiness=false. Elapsed: 2.145892ms May 15 21:38:37.177: INFO: Pod "downwardapi-volume-1326e300-2ef2-4a41-92b1-e1359c88c249": Phase="Pending", Reason="", readiness=false. Elapsed: 2.166553774s May 15 21:38:39.180: INFO: Pod "downwardapi-volume-1326e300-2ef2-4a41-92b1-e1359c88c249": Phase="Running", Reason="", readiness=true. Elapsed: 4.169734595s May 15 21:38:41.184: INFO: Pod "downwardapi-volume-1326e300-2ef2-4a41-92b1-e1359c88c249": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.174298575s STEP: Saw pod success May 15 21:38:41.184: INFO: Pod "downwardapi-volume-1326e300-2ef2-4a41-92b1-e1359c88c249" satisfied condition "success or failure" May 15 21:38:41.187: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-1326e300-2ef2-4a41-92b1-e1359c88c249 container client-container: STEP: delete the pod May 15 21:38:41.205: INFO: Waiting for pod downwardapi-volume-1326e300-2ef2-4a41-92b1-e1359c88c249 to disappear May 15 21:38:41.208: INFO: Pod downwardapi-volume-1326e300-2ef2-4a41-92b1-e1359c88c249 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:38:41.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8699" for this suite. 
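[Annotation] The "downward API volume plugin" pod above exposes the container's own CPU request back to itself as a file, via a projected volume. A minimal sketch; names, image, and the request value are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: projected-downwardapi-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
              divisor: 1m          # file contains "250" for a 250m request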
• [SLOW TEST:6.301 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":114,"skipped":1901,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:38:41.216: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 15 21:38:41.276: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-73b806f6-143a-42c0-bff7-9cfb100580bd" in namespace "security-context-test-3462" to be "success or failure" May 15 21:38:41.362: INFO: Pod "alpine-nnp-false-73b806f6-143a-42c0-bff7-9cfb100580bd": Phase="Pending", Reason="", readiness=false. Elapsed: 86.11273ms May 15 21:38:43.366: INFO: Pod "alpine-nnp-false-73b806f6-143a-42c0-bff7-9cfb100580bd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.090145747s May 15 21:38:45.369: INFO: Pod "alpine-nnp-false-73b806f6-143a-42c0-bff7-9cfb100580bd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.093575848s May 15 21:38:45.369: INFO: Pod "alpine-nnp-false-73b806f6-143a-42c0-bff7-9cfb100580bd" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:38:45.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-3462" for this suite. 
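[Annotation] The alpine-nnp-false pod above sets allowPrivilegeEscalation: false, which maps to the kernel's no_new_privs flag, so setuid binaries cannot raise the effective uid. A rough sketch with illustrative names and command:

apiVersion: v1
kind: Pod
metadata:
  name: no-privilege-escalation-demo
spec:
  restartPolicy: Never
  containers:
  - name: alpine-nnp-false
    image: alpine:3.11             # illustrative tag
    # with escalation disallowed, the process keeps running as uid 1000
    command: ["sh", "-c", "id -u"]
    securityContext:
      runAsUser: 1000
      allowPrivilegeEscalation: false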
•{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":115,"skipped":1923,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:38:45.382: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating pod May 15 21:38:49.662: INFO: Pod pod-hostip-0913a3a6-6449-44fc-8876-a998819c8156 has hostIP: 172.17.0.10 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:38:49.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8406" for this suite. •{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":278,"completed":116,"skipped":1954,"failed":0} ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:38:49.670: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 15 21:38:49.847: INFO: Create a RollingUpdate DaemonSet May 15 21:38:49.849: INFO: Check that daemon pods launch on every node of the cluster May 15 21:38:49.862: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 21:38:49.867: INFO: Number of nodes with available pods: 0 May 15 21:38:49.867: INFO: Node jerma-worker is running more than one daemon pod May 15 21:38:50.871: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 21:38:50.873: INFO: Number of nodes with available pods: 0 May 15 21:38:50.873: INFO: Node jerma-worker is running more than one daemon pod May 15 21:38:51.871: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 21:38:51.874: 
INFO: Number of nodes with available pods: 0 May 15 21:38:51.874: INFO: Node jerma-worker is running more than one daemon pod May 15 21:38:52.944: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 21:38:52.948: INFO: Number of nodes with available pods: 0 May 15 21:38:52.948: INFO: Node jerma-worker is running more than one daemon pod May 15 21:38:53.871: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 21:38:53.874: INFO: Number of nodes with available pods: 1 May 15 21:38:53.874: INFO: Node jerma-worker is running more than one daemon pod May 15 21:38:54.874: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 21:38:54.877: INFO: Number of nodes with available pods: 2 May 15 21:38:54.877: INFO: Number of running nodes: 2, number of available pods: 2 May 15 21:38:54.877: INFO: Update the DaemonSet to trigger a rollout May 15 21:38:54.881: INFO: Updating DaemonSet daemon-set May 15 21:38:59.970: INFO: Roll back the DaemonSet before rollout is complete May 15 21:38:59.975: INFO: Updating DaemonSet daemon-set May 15 21:38:59.975: INFO: Make sure DaemonSet rollback is complete May 15 21:38:59.988: INFO: Wrong image for pod: daemon-set-8jd7b. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 15 21:38:59.988: INFO: Pod daemon-set-8jd7b is not available May 15 21:38:59.994: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 21:39:00.999: INFO: Wrong image for pod: daemon-set-8jd7b. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 15 21:39:00.999: INFO: Pod daemon-set-8jd7b is not available May 15 21:39:01.002: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 21:39:02.039: INFO: Wrong image for pod: daemon-set-8jd7b. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
May 15 21:39:02.039: INFO: Pod daemon-set-8jd7b is not available May 15 21:39:02.044: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 21:39:02.998: INFO: Pod daemon-set-n7g67 is not available May 15 21:39:03.000: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1542, will wait for the garbage collector to delete the pods May 15 21:39:03.061: INFO: Deleting DaemonSet.extensions daemon-set took: 4.88831ms May 15 21:39:03.361: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.214329ms May 15 21:39:09.263: INFO: Number of nodes with available pods: 0 May 15 21:39:09.264: INFO: Number of running nodes: 0, number of available pods: 0 May 15 21:39:09.266: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1542/daemonsets","resourceVersion":"16473390"},"items":null} May 15 21:39:09.268: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1542/pods","resourceVersion":"16473390"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:39:09.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1542" for this suite. 
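[Annotation] The DaemonSet exercised above uses the RollingUpdate strategy; the rollout is triggered by switching the image to the bogus foo:non-existent seen in the "Wrong image for pod" lines, then rolled back before completing. A minimal sketch of the starting manifest (the label key and pod details are illustrative; the images come from the log):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set   # illustrative label key
  updateStrategy:
    type: RollingUpdate            # enables the rollout/rollback sequence above
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      containers:
      - name: app
        # the image the log expects after rollback; the aborted rollout had
        # switched this to foo:non-existent
        image: docker.io/library/httpd:2.4.38-alpine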
• [SLOW TEST:19.614 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":278,"completed":117,"skipped":1954,"failed":0} SSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:39:09.284: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:39:13.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7466" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":278,"completed":118,"skipped":1957,"failed":0} ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:39:13.370: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:39:24.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3568" for this suite. 
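[Annotation] The quota above tracks ReplicaSets with an object-count entry: creating a ReplicaSet raises status.used and deleting it releases the usage, which is exactly what the STEPs assert. A minimal sketch with an illustrative name and limit:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: test-quota                 # illustrative
spec:
  hard:
    # object-count quota syntax: count/<resource>.<group>
    count/replicasets.apps: "5"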
• [SLOW TEST:11.247 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":278,"completed":119,"skipped":1957,"failed":0} SSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:39:24.617: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod May 15 21:39:24.714: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:39:33.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-9735" for this suite. 
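[Annotation] "Invoke init containers on a RestartAlways pod" means the init containers run to completion, in order, before the regular container starts. A minimal sketch; names, image, and commands are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: busybox:1.29
    command: ["true"]              # each must exit 0 before the next starts
  - name: init2
    image: busybox:1.29
    command: ["true"]
  containers:
  - name: run1
    image: busybox:1.29
    command: ["sh", "-c", "sleep 3600"]   # starts only after both inits succeed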
• [SLOW TEST:8.778 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":278,"completed":120,"skipped":1964,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:39:33.396: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:178 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:39:33.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7676" for this suite. •{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":278,"completed":121,"skipped":1988,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:39:33.539: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-3494, will wait for the garbage collector to delete the pods May 15 21:39:39.903: INFO: Deleting Job.batch foo took: 7.393956ms May 15 21:39:40.204: INFO: Terminating Job.batch foo pods took: 300.246427ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:40:19.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-3494" for this suite. 
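[Annotation] The Job deleted above is a plain parallel Job; "Ensuring active pods == parallelism" checks that two pods are running before the delete, after which the garbage collector removes them, matching the "will wait for the garbage collector" line. A minimal sketch with an illustrative pod spec:

apiVersion: batch/v1
kind: Job
metadata:
  name: foo                        # the log deletes Job.batch "foo"
spec:
  parallelism: 2
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: c
        image: busybox:1.29
        command: ["sh", "-c", "sleep 3600"]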
• [SLOW TEST:46.011 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":278,"completed":122,"skipped":2006,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:40:19.550: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-sjnf STEP: Creating a pod to test atomic-volume-subpath May 15 21:40:19.740: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-sjnf" in namespace "subpath-6303" to be "success or failure" May 15 21:40:19.743: INFO: Pod "pod-subpath-test-configmap-sjnf": Phase="Pending", Reason="", readiness=false. Elapsed: 3.766962ms May 15 21:40:21.748: INFO: Pod "pod-subpath-test-configmap-sjnf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008144863s May 15 21:40:23.751: INFO: Pod "pod-subpath-test-configmap-sjnf": Phase="Running", Reason="", readiness=true. Elapsed: 4.011811686s May 15 21:40:25.755: INFO: Pod "pod-subpath-test-configmap-sjnf": Phase="Running", Reason="", readiness=true. Elapsed: 6.015819018s May 15 21:40:27.759: INFO: Pod "pod-subpath-test-configmap-sjnf": Phase="Running", Reason="", readiness=true. Elapsed: 8.019888924s May 15 21:40:29.764: INFO: Pod "pod-subpath-test-configmap-sjnf": Phase="Running", Reason="", readiness=true. Elapsed: 10.024511385s May 15 21:40:31.767: INFO: Pod "pod-subpath-test-configmap-sjnf": Phase="Running", Reason="", readiness=true. Elapsed: 12.027372889s May 15 21:40:33.771: INFO: Pod "pod-subpath-test-configmap-sjnf": Phase="Running", Reason="", readiness=true. Elapsed: 14.031204438s May 15 21:40:35.776: INFO: Pod "pod-subpath-test-configmap-sjnf": Phase="Running", Reason="", readiness=true. Elapsed: 16.036019934s May 15 21:40:37.779: INFO: Pod "pod-subpath-test-configmap-sjnf": Phase="Running", Reason="", readiness=true. Elapsed: 18.039849487s May 15 21:40:39.784: INFO: Pod "pod-subpath-test-configmap-sjnf": Phase="Running", Reason="", readiness=true. Elapsed: 20.044562776s May 15 21:40:41.788: INFO: Pod "pod-subpath-test-configmap-sjnf": Phase="Running", Reason="", readiness=true. Elapsed: 22.048459947s May 15 21:40:43.792: INFO: Pod "pod-subpath-test-configmap-sjnf": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.052648138s STEP: Saw pod success May 15 21:40:43.792: INFO: Pod "pod-subpath-test-configmap-sjnf" satisfied condition "success or failure" May 15 21:40:43.795: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-configmap-sjnf container test-container-subpath-configmap-sjnf: STEP: delete the pod May 15 21:40:44.060: INFO: Waiting for pod pod-subpath-test-configmap-sjnf to disappear May 15 21:40:44.116: INFO: Pod pod-subpath-test-configmap-sjnf no longer exists STEP: Deleting pod pod-subpath-test-configmap-sjnf May 15 21:40:44.116: INFO: Deleting pod "pod-subpath-test-configmap-sjnf" in namespace "subpath-6303" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:40:44.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-6303" for this suite. • [SLOW TEST:24.577 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":278,"completed":123,"skipped":2029,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:40:44.128: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:40:44.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-1852" for this suite. 
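[Annotation] The discovery checks above walk /apis, then /apis/apiextensions.k8s.io, then /apis/apiextensions.k8s.io/v1, and look for the customresourcedefinitions resource. Registering a CRD of one's own makes its group appear in the same documents; a minimal sketch with an illustrative group and kind:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com           # must be <plural>.<group>
spec:
  group: example.com               # shows up under /apis once registered
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object               # v1 CRDs require a structural schema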
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":278,"completed":124,"skipped":2032,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:40:44.275: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-08d5aeef-44ed-4087-83e0-122700e3ee16 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-08d5aeef-44ed-4087-83e0-122700e3ee16 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:40:50.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1770" for this suite. • [SLOW TEST:6.208 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":125,"skipped":2049,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:40:50.484: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-4f418003-339f-455a-8d29-2b59b2d78ba0 STEP: Creating configMap with name cm-test-opt-upd-389beeaa-a750-47fd-b499-7da4728d2d87 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-4f418003-339f-455a-8d29-2b59b2d78ba0 STEP: Updating configmap cm-test-opt-upd-389beeaa-a750-47fd-b499-7da4728d2d87 STEP: Creating configMap with name cm-test-opt-create-28842369-7066-4010-aa22-d5a8b6d09b65 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:41:00.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "configmap-4123" for this suite. • [SLOW TEST:10.249 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":126,"skipped":2067,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:41:00.734: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 15 21:41:00.852: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"8aa4df07-d62e-4484-b036-6d38ff157fc1", Controller:(*bool)(0xc002ab6cf2), BlockOwnerDeletion:(*bool)(0xc002ab6cf3)}} May 15 21:41:00.864: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"30a23c83-dadd-405c-a1cb-476654ea0800", Controller:(*bool)(0xc000f638ba), BlockOwnerDeletion:(*bool)(0xc000f638bb)}} May 15 21:41:00.909: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"a6f128dc-e059-49ae-958a-eb93d2aa4394", Controller:(*bool)(0xc002ab704a), BlockOwnerDeletion:(*bool)(0xc002ab704b)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:41:05.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2692" for this suite. 
• [SLOW TEST:5.204 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":278,"completed":127,"skipped":2091,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:41:05.939: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods May 15 21:41:12.506: INFO: Successfully updated pod "adopt-release-phz5b" STEP: Checking that the Job readopts the Pod May 15 21:41:12.506: INFO: Waiting up to 15m0s for pod "adopt-release-phz5b" in namespace "job-6207" to be "adopted" May 15 21:41:12.534: INFO: Pod "adopt-release-phz5b": Phase="Running", Reason="", readiness=true. Elapsed: 27.602938ms May 15 21:41:14.537: INFO: Pod "adopt-release-phz5b": Phase="Running", Reason="", readiness=true. Elapsed: 2.030901656s May 15 21:41:14.537: INFO: Pod "adopt-release-phz5b" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod May 15 21:41:15.043: INFO: Successfully updated pod "adopt-release-phz5b" STEP: Checking that the Job releases the Pod May 15 21:41:15.043: INFO: Waiting up to 15m0s for pod "adopt-release-phz5b" in namespace "job-6207" to be "released" May 15 21:41:15.111: INFO: Pod "adopt-release-phz5b": Phase="Running", Reason="", readiness=true. Elapsed: 68.168875ms May 15 21:41:17.116: INFO: Pod "adopt-release-phz5b": Phase="Running", Reason="", readiness=true. Elapsed: 2.072621584s May 15 21:41:17.116: INFO: Pod "adopt-release-phz5b" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:41:17.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-6207" for this suite. 
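[Annotation] Adoption and release in the Job test above are label-driven: while a pod's labels match the Job's selector, the controller stamps a controller ownerReference on it; "Removing the labels" makes it release the pod again. Roughly, an adopted pod looks like this (names and uid are illustrative, and real Jobs match on generated controller-uid labels):

apiVersion: v1
kind: Pod
metadata:
  name: adopt-release-demo         # illustrative
  labels:
    job-name: adopt-release        # while this matches the Job's selector,
                                   # the Job controller (re)adopts the pod
  ownerReferences:                 # stamped on adoption, removed on release
  - apiVersion: batch/v1
    kind: Job
    name: adopt-release
    uid: 11111111-2222-3333-4444-555555555555   # illustrative uid
    controller: true
    blockOwnerDeletion: true
spec:
  containers:
  - name: c
    image: busybox:1.29
    command: ["sh", "-c", "sleep 3600"]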
• [SLOW TEST:11.223 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":278,"completed":128,"skipped":2104,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:41:17.162: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0515 21:41:58.116156 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 15 21:41:58.116: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:41:58.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6861" for this suite. 
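[Annotation] "Delete options say so" above means the DELETE call for the replication controller carried propagationPolicy: Orphan, so the garbage collector clears the pods' ownerReferences instead of deleting them; the 30-second wait confirms nothing is mistakenly removed. The request body is roughly the following sketch (kubectl's equivalent is --cascade=orphan):

# body of the DELETE request for the ReplicationController
apiVersion: v1
kind: DeleteOptions
propagationPolicy: Orphan          # dependents survive; ownerReferences cleared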
• [SLOW TEST:40.963 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":278,"completed":129,"skipped":2115,"failed":0} SSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:41:58.124: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 15 21:41:58.186: INFO: Creating ReplicaSet my-hostname-basic-71d8ac3f-1d4c-45dc-a444-a557e623c910 May 15 21:41:58.196: INFO: Pod name my-hostname-basic-71d8ac3f-1d4c-45dc-a444-a557e623c910: Found 0 pods out of 1 May 15 21:42:03.235: INFO: Pod name my-hostname-basic-71d8ac3f-1d4c-45dc-a444-a557e623c910: Found 1 pods out of 1 May 15 21:42:03.235: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-71d8ac3f-1d4c-45dc-a444-a557e623c910" is running May 15 21:42:03.244: INFO: Pod "my-hostname-basic-71d8ac3f-1d4c-45dc-a444-a557e623c910-fc8xg" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-15 21:41:58 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-15 21:42:02 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-15 21:42:02 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-15 21:41:58 +0000 UTC Reason: Message:}]) May 15 21:42:03.244: INFO: Trying to dial the pod May 15 21:42:08.660: INFO: Controller my-hostname-basic-71d8ac3f-1d4c-45dc-a444-a557e623c910: Got expected result from replica 1 [my-hostname-basic-71d8ac3f-1d4c-45dc-a444-a557e623c910-fc8xg]: "my-hostname-basic-71d8ac3f-1d4c-45dc-a444-a557e623c910-fc8xg", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:42:08.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-3605" for this suite. 
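[Annotation] The my-hostname-basic ReplicaSet above runs one replica of an image that serves its own hostname over HTTP, which is why dialing the pod returns the pod name in the "Got expected result from replica 1" line. A minimal sketch; the agnhost image tag is illustrative:

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic          # the suite appends a uid to this
spec:
  replicas: 1
  selector:
    matchLabels:
      name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8   # illustrative tag
        args: ["serve-hostname"]   # replies with the pod's hostname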
• [SLOW TEST:10.863 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":130,"skipped":2119,"failed":0} S ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:42:08.988: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap that has name configmap-test-emptyKey-2ec7ddef-d638-4758-a3f4-8a25aa0b5770 [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:42:09.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6954" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":278,"completed":131,"skipped":2120,"failed":0} SSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:42:09.293: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 May 15 21:42:09.424: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 15 21:42:09.490: INFO: Waiting for terminating namespaces to be deleted... 
May 15 21:42:09.563: INFO: Logging pods the kubelet thinks is on node jerma-worker before test May 15 21:42:09.580: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 15 21:42:09.580: INFO: Container kindnet-cni ready: true, restart count 0 May 15 21:42:09.580: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 15 21:42:09.580: INFO: Container kube-proxy ready: true, restart count 0 May 15 21:42:09.580: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test May 15 21:42:09.613: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 15 21:42:09.613: INFO: Container kindnet-cni ready: true, restart count 0 May 15 21:42:09.613: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) May 15 21:42:09.613: INFO: Container kube-bench ready: false, restart count 0 May 15 21:42:09.613: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 15 21:42:09.613: INFO: Container kube-proxy ready: true, restart count 0 May 15 21:42:09.613: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) May 15 21:42:09.613: INFO: Container kube-hunter ready: false, restart count 0 May 15 21:42:09.613: INFO: my-hostname-basic-71d8ac3f-1d4c-45dc-a444-a557e623c910-fc8xg from replicaset-3605 started at 2020-05-15 21:41:58 +0000 UTC (1 container statuses recorded) May 15 21:42:09.613: INFO: Container my-hostname-basic-71d8ac3f-1d4c-45dc-a444-a557e623c910 ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-f344448e-1837-40fa-b54a-b1a4e6ab84b2 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-f344448e-1837-40fa-b54a-b1a4e6ab84b2 off the node jerma-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-f344448e-1837-40fa-b54a-b1a4e6ab84b2 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:47:17.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3744" for this suite. 
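[Annotation] The conflict above comes from hostPort bookkeeping: pod4 binds hostPort 54322 on 0.0.0.0, which covers every address on the node, so pod5's request for the same port on 127.0.0.1 of the same node cannot be satisfied and pod5 stays unscheduled. Sketches of the two pods (label and ports from the log; image and containerPort are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: pod4
spec:
  nodeSelector:
    kubernetes.io/e2e-f344448e-1837-40fa-b54a-b1a4e6ab84b2: "95"
  containers:
  - name: agnhost
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
    ports:
    - containerPort: 54322
      hostPort: 54322
      hostIP: "0.0.0.0"            # binds on all node addresses
---
apiVersion: v1
kind: Pod
metadata:
  name: pod5
spec:
  nodeSelector:
    kubernetes.io/e2e-f344448e-1837-40fa-b54a-b1a4e6ab84b2: "95"
  containers:
  - name: agnhost
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
    ports:
    - containerPort: 54322
      hostPort: 54322              # same port, overlapping address: unschedulable
      hostIP: "127.0.0.1"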
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:308.535 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":278,"completed":132,"skipped":2127,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:47:17.829: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override all May 15 21:47:17.891: INFO: Waiting up to 5m0s for pod "client-containers-f2efaae7-e33c-49cf-904b-07b87d1ab037" in namespace "containers-9741" to be "success or failure" May 15 21:47:17.900: INFO: Pod "client-containers-f2efaae7-e33c-49cf-904b-07b87d1ab037": Phase="Pending", Reason="", readiness=false. Elapsed: 9.207548ms May 15 21:47:19.960: INFO: Pod "client-containers-f2efaae7-e33c-49cf-904b-07b87d1ab037": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069553701s May 15 21:47:21.963: INFO: Pod "client-containers-f2efaae7-e33c-49cf-904b-07b87d1ab037": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.072805777s STEP: Saw pod success May 15 21:47:21.963: INFO: Pod "client-containers-f2efaae7-e33c-49cf-904b-07b87d1ab037" satisfied condition "success or failure" May 15 21:47:21.966: INFO: Trying to get logs from node jerma-worker pod client-containers-f2efaae7-e33c-49cf-904b-07b87d1ab037 container test-container: STEP: delete the pod May 15 21:47:22.106: INFO: Waiting for pod client-containers-f2efaae7-e33c-49cf-904b-07b87d1ab037 to disappear May 15 21:47:22.250: INFO: Pod client-containers-f2efaae7-e33c-49cf-904b-07b87d1ab037 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:47:22.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-9741" for this suite. 
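[Annotation] "Override all" above means both fields are set: command replaces the image's ENTRYPOINT and args replaces its CMD. A minimal sketch with illustrative names and image:

apiVersion: v1
kind: Pod
metadata:
  name: command-args-override-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["echo"]              # replaces the ENTRYPOINT (docker "entrypoint")
    args: ["override", "all"]      # replaces the CMD (docker "cmd")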
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":278,"completed":133,"skipped":2149,"failed":0} SSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:47:22.256: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC May 15 21:47:22.291: INFO: namespace kubectl-5021 May 15 21:47:22.291: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5021' May 15 21:47:26.804: INFO: stderr: "" May 15 21:47:26.804: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. May 15 21:47:27.809: INFO: Selector matched 1 pods for map[app:agnhost] May 15 21:47:27.809: INFO: Found 0 / 1 May 15 21:47:28.951: INFO: Selector matched 1 pods for map[app:agnhost] May 15 21:47:28.951: INFO: Found 0 / 1 May 15 21:47:29.846: INFO: Selector matched 1 pods for map[app:agnhost] May 15 21:47:29.846: INFO: Found 0 / 1 May 15 21:47:30.809: INFO: Selector matched 1 pods for map[app:agnhost] May 15 21:47:30.809: INFO: Found 1 / 1 May 15 21:47:30.809: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 15 21:47:30.813: INFO: Selector matched 1 pods for map[app:agnhost] May 15 21:47:30.813: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 15 21:47:30.813: INFO: wait on agnhost-master startup in kubectl-5021 May 15 21:47:30.813: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-6l287 agnhost-master --namespace=kubectl-5021' May 15 21:47:30.924: INFO: stderr: "" May 15 21:47:30.924: INFO: stdout: "Paused\n" STEP: exposing RC May 15 21:47:30.924: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-5021' May 15 21:47:31.136: INFO: stderr: "" May 15 21:47:31.136: INFO: stdout: "service/rm2 exposed\n" May 15 21:47:31.143: INFO: Service rm2 in namespace kubectl-5021 found. STEP: exposing service May 15 21:47:33.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-5021' May 15 21:47:33.310: INFO: stderr: "" May 15 21:47:33.310: INFO: stdout: "service/rm3 exposed\n" May 15 21:47:33.313: INFO: Service rm3 in namespace kubectl-5021 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:47:35.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5021" for this suite. 
• [SLOW TEST:13.072 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1188 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":278,"completed":134,"skipped":2155,"failed":0} SS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:47:35.328: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium May 15 21:47:35.424: INFO: Waiting up to 5m0s for pod "pod-4bfa47c9-514e-4102-b09a-6c099164ca91" in namespace "emptydir-9530" to be "success or failure" May 15 21:47:35.439: INFO: Pod "pod-4bfa47c9-514e-4102-b09a-6c099164ca91": Phase="Pending", Reason="", readiness=false. Elapsed: 15.33175ms May 15 21:47:37.442: INFO: Pod "pod-4bfa47c9-514e-4102-b09a-6c099164ca91": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018343996s May 15 21:47:39.451: INFO: Pod "pod-4bfa47c9-514e-4102-b09a-6c099164ca91": Phase="Running", Reason="", readiness=true. Elapsed: 4.027647213s May 15 21:47:41.470: INFO: Pod "pod-4bfa47c9-514e-4102-b09a-6c099164ca91": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.045824639s STEP: Saw pod success May 15 21:47:41.470: INFO: Pod "pod-4bfa47c9-514e-4102-b09a-6c099164ca91" satisfied condition "success or failure" May 15 21:47:41.580: INFO: Trying to get logs from node jerma-worker2 pod pod-4bfa47c9-514e-4102-b09a-6c099164ca91 container test-container: STEP: delete the pod May 15 21:47:41.626: INFO: Waiting for pod pod-4bfa47c9-514e-4102-b09a-6c099164ca91 to disappear May 15 21:47:41.641: INFO: Pod pod-4bfa47c9-514e-4102-b09a-6c099164ca91 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:47:41.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9530" for this suite. 
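For reference, a minimal Go sketch of the fixture shape this (root,0644,default) case exercises (names, image, and shell command are illustrative assumptions; the suite uses its own mounttest image): an emptyDir volume on the node-default medium, mounted into a container that creates a file as root with mode 0644 and reads its permissions back.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-0644"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Leaving Medium unset selects the node-default medium (node disk).
					EmptyDir: &corev1.EmptyDirVolumeSource{},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox:1.29",
				Command: []string{"/bin/sh", "-c",
					"touch /mnt/test/f && chmod 0644 /mnt/test/f && ls -l /mnt/test/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/mnt/test"}},
			}},
		},
	}
	fmt.Printf("pod %s writes a root-owned 0644 file into an emptyDir\n", pod.Name)
}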
• [SLOW TEST:6.319 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":135,"skipped":2157,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:47:41.648: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium May 15 21:47:41.727: INFO: Waiting up to 5m0s for pod "pod-d20b6cd5-a39b-4e1e-a580-dbe05fc01ea0" in namespace "emptydir-7346" to be "success or failure" May 15 21:47:41.737: INFO: Pod "pod-d20b6cd5-a39b-4e1e-a580-dbe05fc01ea0": Phase="Pending", Reason="", readiness=false. Elapsed: 9.851143ms May 15 21:47:43.873: INFO: Pod "pod-d20b6cd5-a39b-4e1e-a580-dbe05fc01ea0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.146271642s May 15 21:47:45.876: INFO: Pod "pod-d20b6cd5-a39b-4e1e-a580-dbe05fc01ea0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.149061576s STEP: Saw pod success May 15 21:47:45.876: INFO: Pod "pod-d20b6cd5-a39b-4e1e-a580-dbe05fc01ea0" satisfied condition "success or failure" May 15 21:47:45.878: INFO: Trying to get logs from node jerma-worker pod pod-d20b6cd5-a39b-4e1e-a580-dbe05fc01ea0 container test-container: STEP: delete the pod May 15 21:47:46.009: INFO: Waiting for pod pod-d20b6cd5-a39b-4e1e-a580-dbe05fc01ea0 to disappear May 15 21:47:46.084: INFO: Pod pod-d20b6cd5-a39b-4e1e-a580-dbe05fc01ea0 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:47:46.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7346" for this suite. 
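The (non-root,0777,default) variant that follows differs from the root case above only in the container's security context; a minimal Go sketch of that delta (the UID and image are illustrative assumptions):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	uid := int64(1001) // any non-root UID
	c := corev1.Container{
		Name:  "test-container",
		Image: "busybox:1.29",
		Command: []string{"/bin/sh", "-c",
			"touch /mnt/test/f && chmod 0777 /mnt/test/f && ls -l /mnt/test/f"},
		SecurityContext: &corev1.SecurityContext{RunAsUser: &uid}, // run as non-root
	}
	fmt.Println(c.Name, "runs as uid", *c.SecurityContext.RunAsUser)
}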
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":136,"skipped":2215,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:47:46.092: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 15 21:47:46.248: INFO: Waiting up to 5m0s for pod "downward-api-10be1e49-e901-4f2b-a217-6fb548a36001" in namespace "downward-api-6722" to be "success or failure" May 15 21:47:46.276: INFO: Pod "downward-api-10be1e49-e901-4f2b-a217-6fb548a36001": Phase="Pending", Reason="", readiness=false. Elapsed: 27.883898ms May 15 21:47:48.279: INFO: Pod "downward-api-10be1e49-e901-4f2b-a217-6fb548a36001": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031038116s May 15 21:47:50.283: INFO: Pod "downward-api-10be1e49-e901-4f2b-a217-6fb548a36001": Phase="Running", Reason="", readiness=true. Elapsed: 4.034915568s May 15 21:47:52.298: INFO: Pod "downward-api-10be1e49-e901-4f2b-a217-6fb548a36001": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.050220743s STEP: Saw pod success May 15 21:47:52.298: INFO: Pod "downward-api-10be1e49-e901-4f2b-a217-6fb548a36001" satisfied condition "success or failure" May 15 21:47:52.301: INFO: Trying to get logs from node jerma-worker2 pod downward-api-10be1e49-e901-4f2b-a217-6fb548a36001 container dapi-container: STEP: delete the pod May 15 21:47:52.339: INFO: Waiting for pod downward-api-10be1e49-e901-4f2b-a217-6fb548a36001 to disappear May 15 21:47:52.349: INFO: Pod downward-api-10be1e49-e901-4f2b-a217-6fb548a36001 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:47:52.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6722" for this suite. 
• [SLOW TEST:6.265 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":278,"completed":137,"skipped":2238,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:47:52.357: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 15 21:47:52.473: INFO: Creating deployment "webserver-deployment" May 15 21:47:52.507: INFO: Waiting for observed generation 1 May 15 21:47:54.759: INFO: Waiting for all required pods to come up May 15 21:47:54.763: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running May 15 21:48:04.796: INFO: Waiting for deployment "webserver-deployment" to complete May 15 21:48:04.817: INFO: Updating deployment "webserver-deployment" with a non-existent image May 15 21:48:04.823: INFO: Updating deployment webserver-deployment May 15 21:48:04.823: INFO: Waiting for observed generation 2 May 15 21:48:06.915: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 May 15 21:48:06.917: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 May 15 21:48:06.919: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas May 15 21:48:06.996: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 May 15 21:48:06.996: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 May 15 21:48:06.999: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas May 15 21:48:07.003: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas May 15 21:48:07.003: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 May 15 21:48:07.007: INFO: Updating deployment webserver-deployment May 15 21:48:07.007: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas May 15 21:48:07.188: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 May 15 21:48:07.312: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 15 21:48:07.604: INFO: Deployment "webserver-deployment": 
&Deployment{ObjectMeta:{webserver-deployment deployment-7426 /apis/apps/v1/namespaces/deployment-7426/deployments/webserver-deployment b4a68568-5ae0-4a92-8e0c-6a2f418a5e8e 16475951 3 2020-05-15 21:47:52 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00260e2b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-05-15 21:48:05 +0000 UTC,LastTransitionTime:2020-05-15 21:47:52 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-05-15 21:48:07 +0000 UTC,LastTransitionTime:2020-05-15 21:48:07 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} May 15 21:48:07.660: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8 deployment-7426 /apis/apps/v1/namespaces/deployment-7426/replicasets/webserver-deployment-c7997dcc8 f9834a81-9ecd-45cf-a556-656af42a10b1 16475938 3 2020-05-15 21:48:04 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment b4a68568-5ae0-4a92-8e0c-6a2f418a5e8e 0xc00260e767 0xc00260e768}] [] []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00260e7d8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 15 21:48:07.660: INFO: All old ReplicaSets of Deployment "webserver-deployment": May 15 21:48:07.660: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587 deployment-7426 /apis/apps/v1/namespaces/deployment-7426/replicasets/webserver-deployment-595b5b9587 3b67ca22-5a18-4dab-9a51-d1cf373a4d91 16475989 3 2020-05-15 21:47:52 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment b4a68568-5ae0-4a92-8e0c-6a2f418a5e8e 0xc00260e697 0xc00260e698}] [] []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00260e708 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} May 15 21:48:07.897: INFO: Pod "webserver-deployment-595b5b9587-229xh" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-229xh webserver-deployment-595b5b9587- deployment-7426 /api/v1/namespaces/deployment-7426/pods/webserver-deployment-595b5b9587-229xh 2dde86ba-8fe9-403f-a2b8-358d2ac722fb 16475950 0 2020-05-15 21:48:07 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3b67ca22-5a18-4dab-9a51-d1cf373a4d91 0xc00260ecb7 0xc00260ecb8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-svwgm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-svwgm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-svwgm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:48:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 15 21:48:07.897: INFO: Pod "webserver-deployment-595b5b9587-557b5" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-557b5 webserver-deployment-595b5b9587- deployment-7426 /api/v1/namespaces/deployment-7426/pods/webserver-deployment-595b5b9587-557b5 d140498a-cc45-41db-ba34-b1787a1d9575 16475982 0 2020-05-15 21:48:07 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 3b67ca22-5a18-4dab-9a51-d1cf373a4d91 0xc00260ede7 0xc00260ede8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-svwgm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-svwgm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-svwgm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:48:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:48:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:48:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:48:07 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-15 21:48:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 15 21:48:07.897: INFO: Pod "webserver-deployment-595b5b9587-5vbkh" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-5vbkh webserver-deployment-595b5b9587- deployment-7426 /api/v1/namespaces/deployment-7426/pods/webserver-deployment-595b5b9587-5vbkh 3b0a6961-5f29-4c25-bac9-96c85cddaaa4 16475861 0 2020-05-15 21:47:52 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3b67ca22-5a18-4dab-9a51-d1cf373a4d91 0xc00260ef47 0xc00260ef48}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-svwgm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-svwgm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-svwgm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priorit
y:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:47:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:48:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:48:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:47:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.159,StartTime:2020-05-15 21:47:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-15 21:48:03 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://cb613361124bc74e0d04ff9397e87cad12b4c2d24b8b5e543149ae25c76c1d07,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.159,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 15 21:48:07.898: INFO: Pod "webserver-deployment-595b5b9587-68vpp" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-68vpp webserver-deployment-595b5b9587- deployment-7426 /api/v1/namespaces/deployment-7426/pods/webserver-deployment-595b5b9587-68vpp cfb9c03d-a532-4988-9b97-e08e068091e9 16475992 0 2020-05-15 21:48:07 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3b67ca22-5a18-4dab-9a51-d1cf373a4d91 0xc00260f0c7 0xc00260f0c8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-svwgm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-svwgm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-svwgm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:48:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 15 21:48:07.898: INFO: Pod "webserver-deployment-595b5b9587-7lrmt" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-7lrmt webserver-deployment-595b5b9587- deployment-7426 /api/v1/namespaces/deployment-7426/pods/webserver-deployment-595b5b9587-7lrmt c79aef53-a90b-49d9-8eb0-3d313195db9f 16475860 0 2020-05-15 21:47:52 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 3b67ca22-5a18-4dab-9a51-d1cf373a4d91 0xc00260f1e7 0xc00260f1e8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-svwgm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-svwgm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-svwgm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:47:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:48:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:48:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:47:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.83,StartTime:2020-05-15 21:47:52 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-15 21:48:01 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://2448db754453181d870c057f34934fc9ba0860c2e71ebfe48a196057cf1e79d3,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.83,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 15 21:48:07.898: INFO: Pod "webserver-deployment-595b5b9587-7w4hz" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-7w4hz webserver-deployment-595b5b9587- deployment-7426 /api/v1/namespaces/deployment-7426/pods/webserver-deployment-595b5b9587-7w4hz 2a8156d9-ef9f-4e1c-ad98-7752cbf13616 16475970 0 2020-05-15 21:48:07 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3b67ca22-5a18-4dab-9a51-d1cf373a4d91 0xc00260f367 0xc00260f368}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-svwgm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-svwgm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-svwgm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Val
ue:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:48:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 15 21:48:07.898: INFO: Pod "webserver-deployment-595b5b9587-888p8" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-888p8 webserver-deployment-595b5b9587- deployment-7426 /api/v1/namespaces/deployment-7426/pods/webserver-deployment-595b5b9587-888p8 b1c13b60-f577-4ebb-a95d-3e564c7f7f94 16475826 0 2020-05-15 21:47:52 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3b67ca22-5a18-4dab-9a51-d1cf373a4d91 0xc00260f487 0xc00260f488}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-svwgm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-svwgm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-svwgm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute
,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:47:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:48:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:48:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:47:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.157,StartTime:2020-05-15 21:47:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-15 21:47:59 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://73cee18f7f353b2d154a2af47ef0a17c4a309ede8febe94142da1692e2377a0a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.157,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 15 21:48:07.899: INFO: Pod "webserver-deployment-595b5b9587-dcxhm" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-dcxhm webserver-deployment-595b5b9587- deployment-7426 /api/v1/namespaces/deployment-7426/pods/webserver-deployment-595b5b9587-dcxhm a08237ef-9915-47e3-8778-5bd3ec6a9f14 16475802 0 2020-05-15 21:47:52 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3b67ca22-5a18-4dab-9a51-d1cf373a4d91 0xc00260f607 0xc00260f608}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-svwgm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-svwgm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-svwgm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:47:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:47:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:47:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:47:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.81,StartTime:2020-05-15 21:47:52 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-15 21:47:57 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://6cc1fd2f4b9063ef0d637b18bd163cec87089ac758f10c4c748203aa6a51c4f1,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.81,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 15 21:48:07.899: INFO: Pod "webserver-deployment-595b5b9587-dpb6m" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-dpb6m webserver-deployment-595b5b9587- deployment-7426 /api/v1/namespaces/deployment-7426/pods/webserver-deployment-595b5b9587-dpb6m 2db1ce18-f669-4ac2-af20-86222122d73e 16475984 0 2020-05-15 21:48:07 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3b67ca22-5a18-4dab-9a51-d1cf373a4d91 0xc00260f787 0xc00260f788}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-svwgm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-svwgm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-svwgm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Valu
e:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:48:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:48:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:48:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:48:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-15 21:48:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 15 21:48:07.899: INFO: Pod "webserver-deployment-595b5b9587-lhtxh" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-lhtxh webserver-deployment-595b5b9587- deployment-7426 /api/v1/namespaces/deployment-7426/pods/webserver-deployment-595b5b9587-lhtxh 7d12f35f-20b2-4966-9dce-3cbec7e77a87 16475991 0 2020-05-15 21:48:07 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3b67ca22-5a18-4dab-9a51-d1cf373a4d91 0xc00260f907 0xc00260f908}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-svwgm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-svwgm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-svwgm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:48:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 15 21:48:07.899: INFO: Pod "webserver-deployment-595b5b9587-nc6lk" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-nc6lk webserver-deployment-595b5b9587- deployment-7426 /api/v1/namespaces/deployment-7426/pods/webserver-deployment-595b5b9587-nc6lk 504f5476-7d62-4a89-a2ec-ce0682d91363 16475853 0 2020-05-15 21:47:52 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 3b67ca22-5a18-4dab-9a51-d1cf373a4d91 0xc00260fa27 0xc00260fa28}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-svwgm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-svwgm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-svwgm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:47:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:48:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:48:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:47:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.85,StartTime:2020-05-15 21:47:52 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-15 21:48:03 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://76d26b89d01df29c44e9b46e30547cfa73dad692c775d5edf89ee31517a84425,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.85,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 15 21:48:07.899: INFO: Pod "webserver-deployment-595b5b9587-rf5zr" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-rf5zr webserver-deployment-595b5b9587- deployment-7426 /api/v1/namespaces/deployment-7426/pods/webserver-deployment-595b5b9587-rf5zr fcc5fd79-9d87-4c6f-a524-4681bfcc3907 16475838 0 2020-05-15 21:47:52 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3b67ca22-5a18-4dab-9a51-d1cf373a4d91 0xc00260fba7 0xc00260fba8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-svwgm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-svwgm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-svwgm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,
Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:47:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:48:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:48:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:47:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.158,StartTime:2020-05-15 21:47:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-15 21:48:01 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://7c00c8ff7ff31245f68f904ed5e3892f6e0155f105fd975802dfc79dea8a9033,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.158,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 15 21:48:07.900: INFO: Pod "webserver-deployment-595b5b9587-scl5t" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-scl5t webserver-deployment-595b5b9587- deployment-7426 /api/v1/namespaces/deployment-7426/pods/webserver-deployment-595b5b9587-scl5t 73e82fd8-182b-4847-a316-f8992be61219 16475993 0 2020-05-15 21:48:07 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3b67ca22-5a18-4dab-9a51-d1cf373a4d91 0xc00260fd37 0xc00260fd38}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-svwgm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-svwgm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-svwgm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:48:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 15 21:48:07.900: INFO: Pod "webserver-deployment-595b5b9587-tmk7p" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-tmk7p webserver-deployment-595b5b9587- deployment-7426 /api/v1/namespaces/deployment-7426/pods/webserver-deployment-595b5b9587-tmk7p a4601be1-2e07-43a2-bea4-46fd695bd1a6 16475849 0 2020-05-15 21:47:52 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 3b67ca22-5a18-4dab-9a51-d1cf373a4d91 0xc00260fe77 0xc00260fe78}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-svwgm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-svwgm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-svwgm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:47:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:48:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:48:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:47:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.84,StartTime:2020-05-15 21:47:52 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-15 21:48:02 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://3b7ef15d467a70d117299c2db8db8cde211008155613680982dd60c44656f205,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.84,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 15 21:48:07.900: INFO: Pod "webserver-deployment-595b5b9587-vwlsw" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-vwlsw webserver-deployment-595b5b9587- deployment-7426 /api/v1/namespaces/deployment-7426/pods/webserver-deployment-595b5b9587-vwlsw 968bd2dc-7806-407c-8f5d-63ad3bdf27ca 16475964 0 2020-05-15 21:48:07 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3b67ca22-5a18-4dab-9a51-d1cf373a4d91 0xc00260fff7 0xc00260fff8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-svwgm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-svwgm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-svwgm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Valu
e:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:48:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 15 21:48:07.900: INFO: Pod "webserver-deployment-595b5b9587-wgzzs" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-wgzzs webserver-deployment-595b5b9587- deployment-7426 /api/v1/namespaces/deployment-7426/pods/webserver-deployment-595b5b9587-wgzzs e88f559b-4862-4dce-bcfc-e4ee0ef76eab 16475994 0 2020-05-15 21:48:07 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3b67ca22-5a18-4dab-9a51-d1cf373a4d91 0xc004528117 0xc004528118}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-svwgm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-svwgm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-svwgm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecu
te,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:48:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 15 21:48:07.900: INFO: Pod "webserver-deployment-595b5b9587-xq5vr" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-xq5vr webserver-deployment-595b5b9587- deployment-7426 /api/v1/namespaces/deployment-7426/pods/webserver-deployment-595b5b9587-xq5vr ffe00cd1-3176-4580-b6e2-b4dee67fe191 16475976 0 2020-05-15 21:48:07 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3b67ca22-5a18-4dab-9a51-d1cf373a4d91 0xc004528237 0xc004528238}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-svwgm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-svwgm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-svwgm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations
:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:48:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 15 21:48:07.900: INFO: Pod "webserver-deployment-595b5b9587-xzpxx" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-xzpxx webserver-deployment-595b5b9587- deployment-7426 /api/v1/namespaces/deployment-7426/pods/webserver-deployment-595b5b9587-xzpxx 0a30d6f0-b745-4ba2-95e2-6d570859ca6a 16475968 0 2020-05-15 21:48:07 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3b67ca22-5a18-4dab-9a51-d1cf373a4d91 0xc004528357 0xc004528358}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-svwgm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-svwgm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-svwgm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerNa
me:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:48:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 15 21:48:07.901: INFO: Pod "webserver-deployment-595b5b9587-z5k2j" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-z5k2j webserver-deployment-595b5b9587- deployment-7426 /api/v1/namespaces/deployment-7426/pods/webserver-deployment-595b5b9587-z5k2j 3d7db702-d66e-4a19-ba32-05dbddd0a3de 16475856 0 2020-05-15 21:47:52 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3b67ca22-5a18-4dab-9a51-d1cf373a4d91 0xc004528477 0xc004528478}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-svwgm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-svwgm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-svwgm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:ni
l,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:47:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:48:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:48:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:47:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.160,StartTime:2020-05-15 21:47:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-15 21:48:03 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://b42b2c33b39ba6a03a076acfc9aa652de253233fea5482e35de5dd0f79c3c007,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.160,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 15 21:48:07.901: INFO: Pod "webserver-deployment-595b5b9587-zkg78" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-zkg78 webserver-deployment-595b5b9587- deployment-7426 /api/v1/namespaces/deployment-7426/pods/webserver-deployment-595b5b9587-zkg78 637d7958-f853-4d78-b171-63809d569b02 16475995 0 2020-05-15 21:48:07 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 3b67ca22-5a18-4dab-9a51-d1cf373a4d91 0xc0045285f7 0xc0045285f8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-svwgm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-svwgm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-svwgm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:48:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 15 21:48:07.901: INFO: Pod "webserver-deployment-c7997dcc8-2824k" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-2824k webserver-deployment-c7997dcc8- deployment-7426 /api/v1/namespaces/deployment-7426/pods/webserver-deployment-c7997dcc8-2824k c936f405-08a4-477c-a59c-7104e611a877 16475966 0 2020-05-15 21:48:07 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet 
webserver-deployment-c7997dcc8 f9834a81-9ecd-45cf-a556-656af42a10b1 0xc004528717 0xc004528718}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-svwgm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-svwgm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-svwgm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:48:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 15 21:48:07.901: INFO: Pod "webserver-deployment-c7997dcc8-6m7s7" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-6m7s7 webserver-deployment-c7997dcc8- deployment-7426 /api/v1/namespaces/deployment-7426/pods/webserver-deployment-c7997dcc8-6m7s7 6a22bc1b-21b9-4c49-b0a4-a2b975dde8fe 16475901 0 2020-05-15 21:48:04 +0000 UTC 
map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 f9834a81-9ecd-45cf-a556-656af42a10b1 0xc004528847 0xc004528848}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-svwgm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-svwgm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-svwgm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:48:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:48:04 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:48:04 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-05-15 21:48:04 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-15 21:48:04 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 15 21:48:07.901: INFO: Pod "webserver-deployment-c7997dcc8-8dwxf" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-8dwxf webserver-deployment-c7997dcc8- deployment-7426 /api/v1/namespaces/deployment-7426/pods/webserver-deployment-c7997dcc8-8dwxf 952cc8e3-3d0f-4176-ba02-d0f27eb97ecd 16475996 0 2020-05-15 21:48:07 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 f9834a81-9ecd-45cf-a556-656af42a10b1 0xc0045289c7 0xc0045289c8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-svwgm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-svwgm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-svwgm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority
:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:48:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 15 21:48:07.902: INFO: Pod "webserver-deployment-c7997dcc8-cbzl4" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-cbzl4 webserver-deployment-c7997dcc8- deployment-7426 /api/v1/namespaces/deployment-7426/pods/webserver-deployment-c7997dcc8-cbzl4 dfd49dbb-4d08-4572-9b6c-1df4b533de84 16475946 0 2020-05-15 21:48:07 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 f9834a81-9ecd-45cf-a556-656af42a10b1 0xc004528af7 0xc004528af8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-svwgm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-svwgm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-svwgm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSecon
ds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:48:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 15 21:48:07.902: INFO: Pod "webserver-deployment-c7997dcc8-cxdrp" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-cxdrp webserver-deployment-c7997dcc8- deployment-7426 /api/v1/namespaces/deployment-7426/pods/webserver-deployment-c7997dcc8-cxdrp 7a17ed26-3e0e-433b-af4e-a44bc5bbb99f 16475924 0 2020-05-15 21:48:05 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 f9834a81-9ecd-45cf-a556-656af42a10b1 0xc004528c27 0xc004528c28}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-svwgm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-svwgm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-svwgm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unr
eachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:48:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:48:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:48:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:48:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-15 21:48:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 15 21:48:07.902: INFO: Pod "webserver-deployment-c7997dcc8-g94fv" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-g94fv webserver-deployment-c7997dcc8- deployment-7426 /api/v1/namespaces/deployment-7426/pods/webserver-deployment-c7997dcc8-g94fv d847f039-7fb5-4435-867f-7b507fe3df23 16475971 0 2020-05-15 21:48:07 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 f9834a81-9ecd-45cf-a556-656af42a10b1 0xc004528da7 0xc004528da8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-svwgm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-svwgm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-svwgm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:48:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 15 21:48:07.902: INFO: Pod "webserver-deployment-c7997dcc8-hhwgc" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-hhwgc webserver-deployment-c7997dcc8- deployment-7426 /api/v1/namespaces/deployment-7426/pods/webserver-deployment-c7997dcc8-hhwgc b79d09cf-9fda-4b6c-995a-6c92fc0eb2e0 16475987 0 2020-05-15 21:48:07 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 
f9834a81-9ecd-45cf-a556-656af42a10b1 0xc004528ef7 0xc004528ef8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-svwgm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-svwgm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-svwgm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:48:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 15 21:48:07.902: INFO: Pod "webserver-deployment-c7997dcc8-jqcf6" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-jqcf6 webserver-deployment-c7997dcc8- deployment-7426 /api/v1/namespaces/deployment-7426/pods/webserver-deployment-c7997dcc8-jqcf6 fc6787a6-f10b-489d-a78b-e7dd332ba19f 16475990 0 2020-05-15 21:48:07 +0000 UTC map[name:httpd 
pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 f9834a81-9ecd-45cf-a556-656af42a10b1 0xc004529027 0xc004529028}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-svwgm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-svwgm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-svwgm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:48:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 15 21:48:07.902: INFO: Pod "webserver-deployment-c7997dcc8-l2d59" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-l2d59 webserver-deployment-c7997dcc8- deployment-7426 /api/v1/namespaces/deployment-7426/pods/webserver-deployment-c7997dcc8-l2d59 
0a12d6fd-d158-475e-9d84-b197f11ace67 16475899 0 2020-05-15 21:48:04 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 f9834a81-9ecd-45cf-a556-656af42a10b1 0xc004529157 0xc004529158}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-svwgm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-svwgm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-svwgm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:48:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:48:04 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:48:04 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:48:04 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-15 21:48:04 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 15 21:48:07.903: INFO: Pod "webserver-deployment-c7997dcc8-mtlh5" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-mtlh5 webserver-deployment-c7997dcc8- deployment-7426 /api/v1/namespaces/deployment-7426/pods/webserver-deployment-c7997dcc8-mtlh5 3441e24f-2e17-49ca-845c-cb97640dda67 16475927 0 2020-05-15 21:48:05 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 f9834a81-9ecd-45cf-a556-656af42a10b1 0xc0045292d7 0xc0045292d8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-svwgm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-svwgm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-svwgm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effe
ct:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:48:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:48:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:48:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:48:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-15 21:48:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 15 21:48:07.903: INFO: Pod "webserver-deployment-c7997dcc8-nwxvf" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-nwxvf webserver-deployment-c7997dcc8- deployment-7426 /api/v1/namespaces/deployment-7426/pods/webserver-deployment-c7997dcc8-nwxvf 858dc269-dd86-45e6-b8b7-a271d828178a 16475988 0 2020-05-15 21:48:07 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 f9834a81-9ecd-45cf-a556-656af42a10b1 0xc004529457 0xc004529458}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-svwgm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-svwgm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-svwgm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:48:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 15 21:48:07.903: INFO: Pod "webserver-deployment-c7997dcc8-psvtl" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-psvtl webserver-deployment-c7997dcc8- deployment-7426 /api/v1/namespaces/deployment-7426/pods/webserver-deployment-c7997dcc8-psvtl fa6e0e61-f47c-4e06-920d-2606444a749f 16476000 0 2020-05-15 21:48:07 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 
f9834a81-9ecd-45cf-a556-656af42a10b1 0xc004529587 0xc004529588}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-svwgm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-svwgm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-svwgm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:48:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 15 21:48:07.903: INFO: Pod "webserver-deployment-c7997dcc8-rqdb9" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-rqdb9 webserver-deployment-c7997dcc8- deployment-7426 /api/v1/namespaces/deployment-7426/pods/webserver-deployment-c7997dcc8-rqdb9 def6e19f-dd39-4cb3-a853-87e5a86fc759 16475909 0 2020-05-15 21:48:04 +0000 UTC map[name:httpd 
pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 f9834a81-9ecd-45cf-a556-656af42a10b1 0xc0045296b7 0xc0045296b8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-svwgm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-svwgm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-svwgm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:48:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:48:04 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 21:48:04 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-05-15 21:48:04 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-15 21:48:04 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:48:07.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7426" for this suite. • [SLOW TEST:15.741 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":278,"completed":138,"skipped":2280,"failed":0} [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:48:08.099: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-f35c062f-c7e8-45d6-85d5-ee2bc96fe143 STEP: Creating a pod to test consume configMaps May 15 21:48:08.825: INFO: Waiting up to 5m0s for pod "pod-configmaps-07112e29-83e8-468a-afa6-e34f9e9f4237" in namespace "configmap-3863" to be "success or failure" May 15 21:48:08.972: INFO: Pod "pod-configmaps-07112e29-83e8-468a-afa6-e34f9e9f4237": Phase="Pending", Reason="", readiness=false. Elapsed: 146.850547ms May 15 21:48:11.047: INFO: Pod "pod-configmaps-07112e29-83e8-468a-afa6-e34f9e9f4237": Phase="Pending", Reason="", readiness=false. Elapsed: 2.22188735s May 15 21:48:13.371: INFO: Pod "pod-configmaps-07112e29-83e8-468a-afa6-e34f9e9f4237": Phase="Pending", Reason="", readiness=false. Elapsed: 4.545619973s May 15 21:48:15.837: INFO: Pod "pod-configmaps-07112e29-83e8-468a-afa6-e34f9e9f4237": Phase="Pending", Reason="", readiness=false. Elapsed: 7.012213485s May 15 21:48:18.108: INFO: Pod "pod-configmaps-07112e29-83e8-468a-afa6-e34f9e9f4237": Phase="Pending", Reason="", readiness=false. Elapsed: 9.282578567s May 15 21:48:20.215: INFO: Pod "pod-configmaps-07112e29-83e8-468a-afa6-e34f9e9f4237": Phase="Pending", Reason="", readiness=false. 
Elapsed: 11.390146418s May 15 21:48:22.370: INFO: Pod "pod-configmaps-07112e29-83e8-468a-afa6-e34f9e9f4237": Phase="Pending", Reason="", readiness=false. Elapsed: 13.54439034s May 15 21:48:24.596: INFO: Pod "pod-configmaps-07112e29-83e8-468a-afa6-e34f9e9f4237": Phase="Pending", Reason="", readiness=false. Elapsed: 15.770385234s May 15 21:48:26.644: INFO: Pod "pod-configmaps-07112e29-83e8-468a-afa6-e34f9e9f4237": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.818476076s STEP: Saw pod success May 15 21:48:26.644: INFO: Pod "pod-configmaps-07112e29-83e8-468a-afa6-e34f9e9f4237" satisfied condition "success or failure" May 15 21:48:26.650: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-07112e29-83e8-468a-afa6-e34f9e9f4237 container configmap-volume-test: STEP: delete the pod May 15 21:48:26.995: INFO: Waiting for pod pod-configmaps-07112e29-83e8-468a-afa6-e34f9e9f4237 to disappear May 15 21:48:27.043: INFO: Pod pod-configmaps-07112e29-83e8-468a-afa6-e34f9e9f4237 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:48:27.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3863" for this suite. • [SLOW TEST:18.994 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":139,"skipped":2280,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:48:27.093: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-6808 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet May 15 21:48:28.759: INFO: Found 0 stateful pods, waiting for 3 May 15 21:48:38.888: INFO: Found 2 stateful pods, waiting for 3 May 15 21:48:48.764: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 15 21:48:48.764: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 15 21:48:48.764: INFO: Waiting 
for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine May 15 21:48:48.792: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update May 15 21:48:58.824: INFO: Updating stateful set ss2 May 15 21:48:58.836: INFO: Waiting for Pod statefulset-6808/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 15 21:49:08.843: INFO: Waiting for Pod statefulset-6808/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted May 15 21:49:19.701: INFO: Found 2 stateful pods, waiting for 3 May 15 21:49:29.706: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 15 21:49:29.706: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 15 21:49:29.706: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update May 15 21:49:29.728: INFO: Updating stateful set ss2 May 15 21:49:29.765: INFO: Waiting for Pod statefulset-6808/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 15 21:49:39.791: INFO: Updating stateful set ss2 May 15 21:49:39.821: INFO: Waiting for StatefulSet statefulset-6808/ss2 to complete update May 15 21:49:39.821: INFO: Waiting for Pod statefulset-6808/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 15 21:49:49.830: INFO: Deleting all statefulset in ns statefulset-6808 May 15 21:49:49.832: INFO: Scaling statefulset ss2 to 0 May 15 21:50:09.861: INFO: Waiting for statefulset status.replicas updated to 0 May 15 21:50:09.863: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:50:09.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6808" for this suite. • [SLOW TEST:102.887 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":278,"completed":140,"skipped":2301,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:50:09.980: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:50:21.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8181" for this suite. • [SLOW TEST:11.173 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":278,"completed":141,"skipped":2308,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:50:21.154: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-70d657e9-7855-4439-b8a2-cc066430c110 STEP: Creating a pod to test consume secrets May 15 21:50:21.261: INFO: Waiting up to 5m0s for pod "pod-secrets-9d246b84-6e3e-4916-8b62-9fc3914b1ebe" in namespace "secrets-7928" to be "success or failure" May 15 21:50:21.263: INFO: Pod "pod-secrets-9d246b84-6e3e-4916-8b62-9fc3914b1ebe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.664558ms May 15 21:50:23.266: INFO: Pod "pod-secrets-9d246b84-6e3e-4916-8b62-9fc3914b1ebe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005298197s May 15 21:50:25.281: INFO: Pod "pod-secrets-9d246b84-6e3e-4916-8b62-9fc3914b1ebe": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.020456835s STEP: Saw pod success May 15 21:50:25.281: INFO: Pod "pod-secrets-9d246b84-6e3e-4916-8b62-9fc3914b1ebe" satisfied condition "success or failure" May 15 21:50:25.284: INFO: Trying to get logs from node jerma-worker pod pod-secrets-9d246b84-6e3e-4916-8b62-9fc3914b1ebe container secret-volume-test: STEP: delete the pod May 15 21:50:25.312: INFO: Waiting for pod pod-secrets-9d246b84-6e3e-4916-8b62-9fc3914b1ebe to disappear May 15 21:50:25.316: INFO: Pod pod-secrets-9d246b84-6e3e-4916-8b62-9fc3914b1ebe no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:50:25.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7928" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":142,"skipped":2336,"failed":0} ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:50:25.322: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi-version CRD May 15 21:50:25.626: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:50:40.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4967" for this suite. 
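For reference, the CustomResourcePublishOpenAPI behavior exercised above hinges on the per-version "served" flag of a CRD: only served versions are published into the aggregated OpenAPI document, so flipping one version to served: false removes that version's definitions while the other version's schema is left untouched. A minimal sketch of such a multi-version CRD, using an illustrative group and kind (the e2e framework generates random ones):

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com    # illustrative name, not the test's generated one
spec:
  group: stable.example.com
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
  versions:
  - name: v1
    served: true         # still served: remains in the published OpenAPI spec
    storage: true
    schema:
      openAPIV3Schema:
        type: object
  - name: v2
    served: false        # changed from true: its definition is removed from the spec
    storage: false
    schema:
      openAPIV3Schema:
        type: object

Once the change is applied, tools that read the published spec (kubectl explain, for example) keep resolving the served version but no longer find the unserved one, which is what the "check the unserved version gets removed" step verifies.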
• [SLOW TEST:15.536 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":278,"completed":143,"skipped":2336,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:50:40.858: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod May 15 21:50:45.478: INFO: Successfully updated pod "labelsupdate0ea90dc8-3599-4cb6-be66-8ba5efb9ecf5" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:50:49.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9163" for this suite. 
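The Downward API volume update above works because the kubelet re-renders downwardAPI volume files whenever pod metadata changes; the test pod exposes its labels through such a volume, and the framework waits for the new value to appear after relabeling. A minimal sketch of a pod of this shape, with illustrative names (the test's pod name carries a generated UUID suffix):

apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate-demo        # illustrative name
  labels:
    testlabel: value1
spec:
  containers:
  - name: client-container
    image: busybox
    # print the projected labels file in a loop so changes can be observed in the logs
    command: ["/bin/sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels

Relabeling the running pod, e.g. kubectl label pod labelsupdate-demo testlabel=value2 --overwrite, causes the kubelet to rewrite /etc/podinfo/labels on a subsequent sync; the "Successfully updated pod" record above marks the label change, after which the test polls for the refreshed file content.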
• [SLOW TEST:8.651 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":144,"skipped":2344,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:50:49.509: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-3155 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-3155 I0515 21:50:49.916911 6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-3155, replica count: 2 I0515 21:50:52.967334 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0515 21:50:55.967549 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 15 21:50:55.967: INFO: Creating new exec pod May 15 21:51:01.001: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3155 execpod2pbcn -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' May 15 21:51:01.233: INFO: stderr: "I0515 21:51:01.142296 1600 log.go:172] (0xc000012840) (0xc000a7c1e0) Create stream\nI0515 21:51:01.142375 1600 log.go:172] (0xc000012840) (0xc000a7c1e0) Stream added, broadcasting: 1\nI0515 21:51:01.145354 1600 log.go:172] (0xc000012840) Reply frame received for 1\nI0515 21:51:01.145385 1600 log.go:172] (0xc000012840) (0xc000ac6140) Create stream\nI0515 21:51:01.145399 1600 log.go:172] (0xc000012840) (0xc000ac6140) Stream added, broadcasting: 3\nI0515 21:51:01.146170 1600 log.go:172] (0xc000012840) Reply frame received for 3\nI0515 21:51:01.146211 1600 log.go:172] (0xc000012840) (0xc000a7c280) Create stream\nI0515 21:51:01.146231 1600 log.go:172] (0xc000012840) (0xc000a7c280) Stream added, broadcasting: 5\nI0515 21:51:01.147140 1600 log.go:172] (0xc000012840) Reply frame received for 5\nI0515 21:51:01.227632 1600 log.go:172] (0xc000012840) Data frame received for 5\nI0515 21:51:01.227668 1600 log.go:172] (0xc000a7c280) (5) Data frame handling\nI0515 21:51:01.227695 1600 log.go:172] (0xc000a7c280) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 
80\nI0515 21:51:01.228052 1600 log.go:172] (0xc000012840) Data frame received for 5\nI0515 21:51:01.228081 1600 log.go:172] (0xc000a7c280) (5) Data frame handling\nI0515 21:51:01.228101 1600 log.go:172] (0xc000a7c280) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0515 21:51:01.228278 1600 log.go:172] (0xc000012840) Data frame received for 3\nI0515 21:51:01.228305 1600 log.go:172] (0xc000ac6140) (3) Data frame handling\nI0515 21:51:01.228335 1600 log.go:172] (0xc000012840) Data frame received for 5\nI0515 21:51:01.228367 1600 log.go:172] (0xc000a7c280) (5) Data frame handling\nI0515 21:51:01.229771 1600 log.go:172] (0xc000012840) Data frame received for 1\nI0515 21:51:01.229791 1600 log.go:172] (0xc000a7c1e0) (1) Data frame handling\nI0515 21:51:01.229802 1600 log.go:172] (0xc000a7c1e0) (1) Data frame sent\nI0515 21:51:01.229814 1600 log.go:172] (0xc000012840) (0xc000a7c1e0) Stream removed, broadcasting: 1\nI0515 21:51:01.229840 1600 log.go:172] (0xc000012840) Go away received\nI0515 21:51:01.230220 1600 log.go:172] (0xc000012840) (0xc000a7c1e0) Stream removed, broadcasting: 1\nI0515 21:51:01.230239 1600 log.go:172] (0xc000012840) (0xc000ac6140) Stream removed, broadcasting: 3\nI0515 21:51:01.230247 1600 log.go:172] (0xc000012840) (0xc000a7c280) Stream removed, broadcasting: 5\n" May 15 21:51:01.233: INFO: stdout: "" May 15 21:51:01.234: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3155 execpod2pbcn -- /bin/sh -x -c nc -zv -t -w 2 10.99.165.148 80' May 15 21:51:01.422: INFO: stderr: "I0515 21:51:01.353496 1621 log.go:172] (0xc0009ba9a0) (0xc00065fa40) Create stream\nI0515 21:51:01.353550 1621 log.go:172] (0xc0009ba9a0) (0xc00065fa40) Stream added, broadcasting: 1\nI0515 21:51:01.355729 1621 log.go:172] (0xc0009ba9a0) Reply frame received for 1\nI0515 21:51:01.355766 1621 log.go:172] (0xc0009ba9a0) (0xc000784000) Create stream\nI0515 21:51:01.355793 1621 log.go:172] (0xc0009ba9a0) (0xc000784000) Stream added, broadcasting: 3\nI0515 21:51:01.356769 1621 log.go:172] (0xc0009ba9a0) Reply frame received for 3\nI0515 21:51:01.356820 1621 log.go:172] (0xc0009ba9a0) (0xc000532000) Create stream\nI0515 21:51:01.356838 1621 log.go:172] (0xc0009ba9a0) (0xc000532000) Stream added, broadcasting: 5\nI0515 21:51:01.357911 1621 log.go:172] (0xc0009ba9a0) Reply frame received for 5\nI0515 21:51:01.416887 1621 log.go:172] (0xc0009ba9a0) Data frame received for 3\nI0515 21:51:01.416913 1621 log.go:172] (0xc000784000) (3) Data frame handling\nI0515 21:51:01.416950 1621 log.go:172] (0xc0009ba9a0) Data frame received for 5\nI0515 21:51:01.416999 1621 log.go:172] (0xc000532000) (5) Data frame handling\nI0515 21:51:01.417040 1621 log.go:172] (0xc000532000) (5) Data frame sent\nI0515 21:51:01.417069 1621 log.go:172] (0xc0009ba9a0) Data frame received for 5\n+ nc -zv -t -w 2 10.99.165.148 80\nConnection to 10.99.165.148 80 port [tcp/http] succeeded!\nI0515 21:51:01.417098 1621 log.go:172] (0xc000532000) (5) Data frame handling\nI0515 21:51:01.418183 1621 log.go:172] (0xc0009ba9a0) Data frame received for 1\nI0515 21:51:01.418191 1621 log.go:172] (0xc00065fa40) (1) Data frame handling\nI0515 21:51:01.418196 1621 log.go:172] (0xc00065fa40) (1) Data frame sent\nI0515 21:51:01.418508 1621 log.go:172] (0xc0009ba9a0) (0xc00065fa40) Stream removed, broadcasting: 1\nI0515 21:51:01.418608 1621 log.go:172] (0xc0009ba9a0) Go away received\nI0515 21:51:01.418982 1621 log.go:172] (0xc0009ba9a0) (0xc00065fa40) Stream removed, 
broadcasting: 1\nI0515 21:51:01.419008 1621 log.go:172] (0xc0009ba9a0) (0xc000784000) Stream removed, broadcasting: 3\nI0515 21:51:01.419020 1621 log.go:172] (0xc0009ba9a0) (0xc000532000) Stream removed, broadcasting: 5\n" May 15 21:51:01.422: INFO: stdout: "" May 15 21:51:01.422: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:51:01.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3155" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:12.143 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":278,"completed":145,"skipped":2354,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:51:01.653: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 15 21:51:01.872: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR May 15 21:51:02.503: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-15T21:51:02Z generation:1 name:name1 resourceVersion:16477288 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:75267f4c-d26f-4bd4-a210-7531d8f4d302] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR May 15 21:51:12.508: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-15T21:51:12Z generation:1 name:name2 resourceVersion:16477343 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:95cea24e-09cc-43f1-b766-94b0480af27e] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR May 15 21:51:22.514: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-15T21:51:02Z generation:2 name:name1 resourceVersion:16477373 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:75267f4c-d26f-4bd4-a210-7531d8f4d302] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR May 15 21:51:32.519: INFO: Got : MODIFIED 
&{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-15T21:51:12Z generation:2 name:name2 resourceVersion:16477403 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:95cea24e-09cc-43f1-b766-94b0480af27e] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR May 15 21:51:42.527: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-15T21:51:02Z generation:2 name:name1 resourceVersion:16477434 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:75267f4c-d26f-4bd4-a210-7531d8f4d302] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR May 15 21:51:52.535: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-15T21:51:12Z generation:2 name:name2 resourceVersion:16477465 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:95cea24e-09cc-43f1-b766-94b0480af27e] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:52:03.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-9877" for this suite. • [SLOW TEST:61.404 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":278,"completed":146,"skipped":2367,"failed":0} SSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:52:03.057: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 15 21:52:03.136: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2297b4ed-b80c-4378-ba69-bc748889c838" in namespace "projected-8803" to be "success or failure" May 15 21:52:03.158: INFO: Pod "downwardapi-volume-2297b4ed-b80c-4378-ba69-bc748889c838": Phase="Pending", 
Reason="", readiness=false. Elapsed: 22.690898ms May 15 21:52:05.350: INFO: Pod "downwardapi-volume-2297b4ed-b80c-4378-ba69-bc748889c838": Phase="Pending", Reason="", readiness=false. Elapsed: 2.214242879s May 15 21:52:07.354: INFO: Pod "downwardapi-volume-2297b4ed-b80c-4378-ba69-bc748889c838": Phase="Running", Reason="", readiness=true. Elapsed: 4.218634817s May 15 21:52:09.359: INFO: Pod "downwardapi-volume-2297b4ed-b80c-4378-ba69-bc748889c838": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.223013097s STEP: Saw pod success May 15 21:52:09.359: INFO: Pod "downwardapi-volume-2297b4ed-b80c-4378-ba69-bc748889c838" satisfied condition "success or failure" May 15 21:52:09.362: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-2297b4ed-b80c-4378-ba69-bc748889c838 container client-container: STEP: delete the pod May 15 21:52:09.410: INFO: Waiting for pod downwardapi-volume-2297b4ed-b80c-4378-ba69-bc748889c838 to disappear May 15 21:52:09.415: INFO: Pod downwardapi-volume-2297b4ed-b80c-4378-ba69-bc748889c838 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:52:09.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8803" for this suite. • [SLOW TEST:6.365 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":147,"skipped":2372,"failed":0} S ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:52:09.422: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:52:13.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1324" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":278,"completed":148,"skipped":2373,"failed":0} SSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:52:13.560: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's args May 15 21:52:13.620: INFO: Waiting up to 5m0s for pod "var-expansion-9e745abc-7935-4fef-afed-715fa9da5606" in namespace "var-expansion-7784" to be "success or failure" May 15 21:52:13.637: INFO: Pod "var-expansion-9e745abc-7935-4fef-afed-715fa9da5606": Phase="Pending", Reason="", readiness=false. Elapsed: 16.743956ms May 15 21:52:15.641: INFO: Pod "var-expansion-9e745abc-7935-4fef-afed-715fa9da5606": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021280801s May 15 21:52:17.646: INFO: Pod "var-expansion-9e745abc-7935-4fef-afed-715fa9da5606": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025562696s STEP: Saw pod success May 15 21:52:17.646: INFO: Pod "var-expansion-9e745abc-7935-4fef-afed-715fa9da5606" satisfied condition "success or failure" May 15 21:52:17.648: INFO: Trying to get logs from node jerma-worker pod var-expansion-9e745abc-7935-4fef-afed-715fa9da5606 container dapi-container: STEP: delete the pod May 15 21:52:17.668: INFO: Waiting for pod var-expansion-9e745abc-7935-4fef-afed-715fa9da5606 to disappear May 15 21:52:17.679: INFO: Pod var-expansion-9e745abc-7935-4fef-afed-715fa9da5606 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:52:17.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7784" for this suite. 
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":278,"completed":149,"skipped":2380,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:52:17.691: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod May 15 21:52:18.016: INFO: PodSpec: initContainers in spec.initContainers May 15 21:53:06.512: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-aaf115c8-6097-4d48-b855-d563f83e3a9e", GenerateName:"", Namespace:"init-container-5582", SelfLink:"/api/v1/namespaces/init-container-5582/pods/pod-init-aaf115c8-6097-4d48-b855-d563f83e3a9e", UID:"1f30de04-3e32-42c6-b8d8-86668ec8b9c3", ResourceVersion:"16477792", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63725176338, loc:(*time.Location)(0x78ee0c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"16486044"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-nvph7", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0030be000), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", 
Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-nvph7", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-nvph7", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-nvph7", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc003e3e078), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0021e4120), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", 
Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003e3e110)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003e3e130)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc003e3e138), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc003e3e13c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725176338, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725176338, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725176338, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725176338, loc:(*time.Location)(0x78ee0c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.8", PodIP:"10.244.2.180", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.180"}}, StartTime:(*v1.Time)(0xc0035120c0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc003512180), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0019520e0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://81feb5bf25762905e4a340ef00e72db814e4dd3fc29f6328d7dfec8590870c7d", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0035121e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", 
State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc003512120), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc003e3e1bf)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:53:06.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-5582" for this suite. • [SLOW TEST:48.887 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":278,"completed":150,"skipped":2400,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:53:06.578: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-8013 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating stateful set ss in namespace statefulset-8013 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-8013 May 15 21:53:06.811: INFO: Found 0 stateful pods, waiting for 1 May 15 21:53:16.814: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod May 15 21:53:16.817: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8013 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 15 21:53:17.059: INFO: stderr: "I0515 21:53:16.939711 1642 log.go:172] (0xc0009654a0) (0xc000a84a00) Create stream\nI0515 21:53:16.939865 1642 log.go:172] (0xc0009654a0) (0xc000a84a00) Stream added, broadcasting: 1\nI0515 21:53:16.944131 1642 log.go:172] (0xc0009654a0) 
Reply frame received for 1\nI0515 21:53:16.944181 1642 log.go:172] (0xc0009654a0) (0xc000696780) Create stream\nI0515 21:53:16.944203 1642 log.go:172] (0xc0009654a0) (0xc000696780) Stream added, broadcasting: 3\nI0515 21:53:16.945297 1642 log.go:172] (0xc0009654a0) Reply frame received for 3\nI0515 21:53:16.945344 1642 log.go:172] (0xc0009654a0) (0xc0003df540) Create stream\nI0515 21:53:16.945373 1642 log.go:172] (0xc0009654a0) (0xc0003df540) Stream added, broadcasting: 5\nI0515 21:53:16.946130 1642 log.go:172] (0xc0009654a0) Reply frame received for 5\nI0515 21:53:17.027888 1642 log.go:172] (0xc0009654a0) Data frame received for 5\nI0515 21:53:17.027902 1642 log.go:172] (0xc0003df540) (5) Data frame handling\nI0515 21:53:17.027912 1642 log.go:172] (0xc0003df540) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0515 21:53:17.053473 1642 log.go:172] (0xc0009654a0) Data frame received for 3\nI0515 21:53:17.053490 1642 log.go:172] (0xc000696780) (3) Data frame handling\nI0515 21:53:17.053497 1642 log.go:172] (0xc000696780) (3) Data frame sent\nI0515 21:53:17.053502 1642 log.go:172] (0xc0009654a0) Data frame received for 3\nI0515 21:53:17.053506 1642 log.go:172] (0xc000696780) (3) Data frame handling\nI0515 21:53:17.053623 1642 log.go:172] (0xc0009654a0) Data frame received for 5\nI0515 21:53:17.053631 1642 log.go:172] (0xc0003df540) (5) Data frame handling\nI0515 21:53:17.054886 1642 log.go:172] (0xc0009654a0) Data frame received for 1\nI0515 21:53:17.054923 1642 log.go:172] (0xc000a84a00) (1) Data frame handling\nI0515 21:53:17.054939 1642 log.go:172] (0xc000a84a00) (1) Data frame sent\nI0515 21:53:17.054948 1642 log.go:172] (0xc0009654a0) (0xc000a84a00) Stream removed, broadcasting: 1\nI0515 21:53:17.055020 1642 log.go:172] (0xc0009654a0) Go away received\nI0515 21:53:17.055203 1642 log.go:172] (0xc0009654a0) (0xc000a84a00) Stream removed, broadcasting: 1\nI0515 21:53:17.055216 1642 log.go:172] (0xc0009654a0) (0xc000696780) Stream removed, broadcasting: 3\nI0515 21:53:17.055227 1642 log.go:172] (0xc0009654a0) (0xc0003df540) Stream removed, broadcasting: 5\n" May 15 21:53:17.059: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 15 21:53:17.059: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 15 21:53:17.063: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 15 21:53:27.067: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 15 21:53:27.067: INFO: Waiting for statefulset status.replicas updated to 0 May 15 21:53:27.083: INFO: POD NODE PHASE GRACE CONDITIONS May 15 21:53:27.083: INFO: ss-0 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:06 +0000 UTC }] May 15 21:53:27.083: INFO: ss-1 Pending [] May 15 21:53:27.083: INFO: May 15 21:53:27.083: INFO: StatefulSet ss has not reached scale 3, at 2 May 15 21:53:28.087: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.994603159s May 15 21:53:29.274: INFO: Verifying statefulset ss doesn't scale past 
3 for another 7.990560802s May 15 21:53:30.543: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.803844319s May 15 21:53:31.549: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.534341061s May 15 21:53:32.553: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.528694089s May 15 21:53:33.560: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.52419325s May 15 21:53:34.565: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.517822195s May 15 21:53:35.570: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.512603679s May 15 21:53:36.575: INFO: Verifying statefulset ss doesn't scale past 3 for another 507.701816ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-8013 May 15 21:53:37.580: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8013 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 15 21:53:37.780: INFO: stderr: "I0515 21:53:37.706276 1661 log.go:172] (0xc000baf1e0) (0xc0009ba460) Create stream\nI0515 21:53:37.706395 1661 log.go:172] (0xc000baf1e0) (0xc0009ba460) Stream added, broadcasting: 1\nI0515 21:53:37.711201 1661 log.go:172] (0xc000baf1e0) Reply frame received for 1\nI0515 21:53:37.711240 1661 log.go:172] (0xc000baf1e0) (0xc0006b0640) Create stream\nI0515 21:53:37.711250 1661 log.go:172] (0xc000baf1e0) (0xc0006b0640) Stream added, broadcasting: 3\nI0515 21:53:37.712406 1661 log.go:172] (0xc000baf1e0) Reply frame received for 3\nI0515 21:53:37.712444 1661 log.go:172] (0xc000baf1e0) (0xc0005a7400) Create stream\nI0515 21:53:37.712458 1661 log.go:172] (0xc000baf1e0) (0xc0005a7400) Stream added, broadcasting: 5\nI0515 21:53:37.713407 1661 log.go:172] (0xc000baf1e0) Reply frame received for 5\nI0515 21:53:37.776212 1661 log.go:172] (0xc000baf1e0) Data frame received for 5\nI0515 21:53:37.776255 1661 log.go:172] (0xc0005a7400) (5) Data frame handling\nI0515 21:53:37.776288 1661 log.go:172] (0xc0005a7400) (5) Data frame sent\nI0515 21:53:37.776325 1661 log.go:172] (0xc000baf1e0) Data frame received for 5\nI0515 21:53:37.776373 1661 log.go:172] (0xc0005a7400) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0515 21:53:37.776430 1661 log.go:172] (0xc000baf1e0) Data frame received for 3\nI0515 21:53:37.776479 1661 log.go:172] (0xc0006b0640) (3) Data frame handling\nI0515 21:53:37.776531 1661 log.go:172] (0xc0006b0640) (3) Data frame sent\nI0515 21:53:37.776549 1661 log.go:172] (0xc000baf1e0) Data frame received for 3\nI0515 21:53:37.776560 1661 log.go:172] (0xc0006b0640) (3) Data frame handling\nI0515 21:53:37.777275 1661 log.go:172] (0xc000baf1e0) Data frame received for 1\nI0515 21:53:37.777299 1661 log.go:172] (0xc0009ba460) (1) Data frame handling\nI0515 21:53:37.777315 1661 log.go:172] (0xc0009ba460) (1) Data frame sent\nI0515 21:53:37.777705 1661 log.go:172] (0xc000baf1e0) (0xc0009ba460) Stream removed, broadcasting: 1\nI0515 21:53:37.777757 1661 log.go:172] (0xc000baf1e0) Go away received\nI0515 21:53:37.777931 1661 log.go:172] (0xc000baf1e0) (0xc0009ba460) Stream removed, broadcasting: 1\nI0515 21:53:37.777942 1661 log.go:172] (0xc000baf1e0) (0xc0006b0640) Stream removed, broadcasting: 3\nI0515 21:53:37.777947 1661 log.go:172] (0xc000baf1e0) (0xc0005a7400) Stream removed, broadcasting: 5\n" May 15 21:53:37.780: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 15 21:53:37.780: 
INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 15 21:53:37.781: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8013 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 15 21:53:38.000: INFO: stderr: "I0515 21:53:37.908780 1681 log.go:172] (0xc000aca790) (0xc000a1a460) Create stream\nI0515 21:53:37.908830 1681 log.go:172] (0xc000aca790) (0xc000a1a460) Stream added, broadcasting: 1\nI0515 21:53:37.916843 1681 log.go:172] (0xc000aca790) Reply frame received for 1\nI0515 21:53:37.916895 1681 log.go:172] (0xc000aca790) (0xc00050a6e0) Create stream\nI0515 21:53:37.916909 1681 log.go:172] (0xc000aca790) (0xc00050a6e0) Stream added, broadcasting: 3\nI0515 21:53:37.918907 1681 log.go:172] (0xc000aca790) Reply frame received for 3\nI0515 21:53:37.918949 1681 log.go:172] (0xc000aca790) (0xc0001374a0) Create stream\nI0515 21:53:37.918963 1681 log.go:172] (0xc000aca790) (0xc0001374a0) Stream added, broadcasting: 5\nI0515 21:53:37.920073 1681 log.go:172] (0xc000aca790) Reply frame received for 5\nI0515 21:53:37.995049 1681 log.go:172] (0xc000aca790) Data frame received for 3\nI0515 21:53:37.995074 1681 log.go:172] (0xc00050a6e0) (3) Data frame handling\nI0515 21:53:37.995110 1681 log.go:172] (0xc000aca790) Data frame received for 5\nI0515 21:53:37.995148 1681 log.go:172] (0xc0001374a0) (5) Data frame handling\nI0515 21:53:37.995165 1681 log.go:172] (0xc0001374a0) (5) Data frame sent\nI0515 21:53:37.995177 1681 log.go:172] (0xc000aca790) Data frame received for 5\nI0515 21:53:37.995188 1681 log.go:172] (0xc0001374a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0515 21:53:37.995217 1681 log.go:172] (0xc00050a6e0) (3) Data frame sent\nI0515 21:53:37.995241 1681 log.go:172] (0xc000aca790) Data frame received for 3\nI0515 21:53:37.995252 1681 log.go:172] (0xc00050a6e0) (3) Data frame handling\nI0515 21:53:37.996499 1681 log.go:172] (0xc000aca790) Data frame received for 1\nI0515 21:53:37.996526 1681 log.go:172] (0xc000a1a460) (1) Data frame handling\nI0515 21:53:37.996541 1681 log.go:172] (0xc000a1a460) (1) Data frame sent\nI0515 21:53:37.996554 1681 log.go:172] (0xc000aca790) (0xc000a1a460) Stream removed, broadcasting: 1\nI0515 21:53:37.996608 1681 log.go:172] (0xc000aca790) Go away received\nI0515 21:53:37.996788 1681 log.go:172] (0xc000aca790) (0xc000a1a460) Stream removed, broadcasting: 1\nI0515 21:53:37.996799 1681 log.go:172] (0xc000aca790) (0xc00050a6e0) Stream removed, broadcasting: 3\nI0515 21:53:37.996805 1681 log.go:172] (0xc000aca790) (0xc0001374a0) Stream removed, broadcasting: 5\n" May 15 21:53:38.001: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 15 21:53:38.001: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 15 21:53:38.001: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8013 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 15 21:53:38.210: INFO: stderr: "I0515 21:53:38.135780 1703 log.go:172] (0xc000658a50) (0xc00090c140) Create stream\nI0515 21:53:38.135858 1703 log.go:172] (0xc000658a50) (0xc00090c140) Stream added, broadcasting: 1\nI0515 21:53:38.138935 1703 log.go:172] (0xc000658a50) 
Reply frame received for 1\nI0515 21:53:38.138979 1703 log.go:172] (0xc000658a50) (0xc000663c20) Create stream\nI0515 21:53:38.138992 1703 log.go:172] (0xc000658a50) (0xc000663c20) Stream added, broadcasting: 3\nI0515 21:53:38.140027 1703 log.go:172] (0xc000658a50) Reply frame received for 3\nI0515 21:53:38.140079 1703 log.go:172] (0xc000658a50) (0xc00090c1e0) Create stream\nI0515 21:53:38.140092 1703 log.go:172] (0xc000658a50) (0xc00090c1e0) Stream added, broadcasting: 5\nI0515 21:53:38.141328 1703 log.go:172] (0xc000658a50) Reply frame received for 5\nI0515 21:53:38.203400 1703 log.go:172] (0xc000658a50) Data frame received for 3\nI0515 21:53:38.203438 1703 log.go:172] (0xc000663c20) (3) Data frame handling\nI0515 21:53:38.203466 1703 log.go:172] (0xc000663c20) (3) Data frame sent\nI0515 21:53:38.203488 1703 log.go:172] (0xc000658a50) Data frame received for 3\nI0515 21:53:38.203506 1703 log.go:172] (0xc000663c20) (3) Data frame handling\nI0515 21:53:38.203533 1703 log.go:172] (0xc000658a50) Data frame received for 5\nI0515 21:53:38.203552 1703 log.go:172] (0xc00090c1e0) (5) Data frame handling\nI0515 21:53:38.203572 1703 log.go:172] (0xc00090c1e0) (5) Data frame sent\nI0515 21:53:38.203596 1703 log.go:172] (0xc000658a50) Data frame received for 5\nI0515 21:53:38.203619 1703 log.go:172] (0xc00090c1e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0515 21:53:38.205525 1703 log.go:172] (0xc000658a50) Data frame received for 1\nI0515 21:53:38.205557 1703 log.go:172] (0xc00090c140) (1) Data frame handling\nI0515 21:53:38.205586 1703 log.go:172] (0xc00090c140) (1) Data frame sent\nI0515 21:53:38.205611 1703 log.go:172] (0xc000658a50) (0xc00090c140) Stream removed, broadcasting: 1\nI0515 21:53:38.205642 1703 log.go:172] (0xc000658a50) Go away received\nI0515 21:53:38.206157 1703 log.go:172] (0xc000658a50) (0xc00090c140) Stream removed, broadcasting: 1\nI0515 21:53:38.206180 1703 log.go:172] (0xc000658a50) (0xc000663c20) Stream removed, broadcasting: 3\nI0515 21:53:38.206195 1703 log.go:172] (0xc000658a50) (0xc00090c1e0) Stream removed, broadcasting: 5\n" May 15 21:53:38.211: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 15 21:53:38.211: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 15 21:53:38.215: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 15 21:53:38.215: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 15 21:53:38.215: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod May 15 21:53:38.218: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8013 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 15 21:53:38.431: INFO: stderr: "I0515 21:53:38.356987 1726 log.go:172] (0xc000554dc0) (0xc00069fae0) Create stream\nI0515 21:53:38.357054 1726 log.go:172] (0xc000554dc0) (0xc00069fae0) Stream added, broadcasting: 1\nI0515 21:53:38.359596 1726 log.go:172] (0xc000554dc0) Reply frame received for 1\nI0515 21:53:38.359672 1726 log.go:172] (0xc000554dc0) (0xc000798000) Create stream\nI0515 21:53:38.359689 1726 log.go:172] (0xc000554dc0) (0xc000798000) Stream added, broadcasting: 3\nI0515 21:53:38.360696 
1726 log.go:172] (0xc000554dc0) Reply frame received for 3\nI0515 21:53:38.360752 1726 log.go:172] (0xc000554dc0) (0xc0002e2000) Create stream\nI0515 21:53:38.360773 1726 log.go:172] (0xc000554dc0) (0xc0002e2000) Stream added, broadcasting: 5\nI0515 21:53:38.362056 1726 log.go:172] (0xc000554dc0) Reply frame received for 5\nI0515 21:53:38.424734 1726 log.go:172] (0xc000554dc0) Data frame received for 5\nI0515 21:53:38.424763 1726 log.go:172] (0xc0002e2000) (5) Data frame handling\nI0515 21:53:38.424776 1726 log.go:172] (0xc0002e2000) (5) Data frame sent\nI0515 21:53:38.424786 1726 log.go:172] (0xc000554dc0) Data frame received for 5\nI0515 21:53:38.424795 1726 log.go:172] (0xc0002e2000) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0515 21:53:38.424828 1726 log.go:172] (0xc000554dc0) Data frame received for 3\nI0515 21:53:38.424854 1726 log.go:172] (0xc000798000) (3) Data frame handling\nI0515 21:53:38.424870 1726 log.go:172] (0xc000798000) (3) Data frame sent\nI0515 21:53:38.424880 1726 log.go:172] (0xc000554dc0) Data frame received for 3\nI0515 21:53:38.424888 1726 log.go:172] (0xc000798000) (3) Data frame handling\nI0515 21:53:38.426562 1726 log.go:172] (0xc000554dc0) Data frame received for 1\nI0515 21:53:38.426587 1726 log.go:172] (0xc00069fae0) (1) Data frame handling\nI0515 21:53:38.426616 1726 log.go:172] (0xc00069fae0) (1) Data frame sent\nI0515 21:53:38.426637 1726 log.go:172] (0xc000554dc0) (0xc00069fae0) Stream removed, broadcasting: 1\nI0515 21:53:38.426661 1726 log.go:172] (0xc000554dc0) Go away received\nI0515 21:53:38.426969 1726 log.go:172] (0xc000554dc0) (0xc00069fae0) Stream removed, broadcasting: 1\nI0515 21:53:38.426983 1726 log.go:172] (0xc000554dc0) (0xc000798000) Stream removed, broadcasting: 3\nI0515 21:53:38.426992 1726 log.go:172] (0xc000554dc0) (0xc0002e2000) Stream removed, broadcasting: 5\n" May 15 21:53:38.431: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 15 21:53:38.431: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 15 21:53:38.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8013 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 15 21:53:38.673: INFO: stderr: "I0515 21:53:38.572604 1747 log.go:172] (0xc000a16a50) (0xc000936000) Create stream\nI0515 21:53:38.572677 1747 log.go:172] (0xc000a16a50) (0xc000936000) Stream added, broadcasting: 1\nI0515 21:53:38.575000 1747 log.go:172] (0xc000a16a50) Reply frame received for 1\nI0515 21:53:38.575034 1747 log.go:172] (0xc000a16a50) (0xc0006b5b80) Create stream\nI0515 21:53:38.575045 1747 log.go:172] (0xc000a16a50) (0xc0006b5b80) Stream added, broadcasting: 3\nI0515 21:53:38.575865 1747 log.go:172] (0xc000a16a50) Reply frame received for 3\nI0515 21:53:38.575899 1747 log.go:172] (0xc000a16a50) (0xc0009360a0) Create stream\nI0515 21:53:38.575914 1747 log.go:172] (0xc000a16a50) (0xc0009360a0) Stream added, broadcasting: 5\nI0515 21:53:38.576615 1747 log.go:172] (0xc000a16a50) Reply frame received for 5\nI0515 21:53:38.628101 1747 log.go:172] (0xc000a16a50) Data frame received for 5\nI0515 21:53:38.628125 1747 log.go:172] (0xc0009360a0) (5) Data frame handling\nI0515 21:53:38.628140 1747 log.go:172] (0xc0009360a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0515 21:53:38.665754 1747 log.go:172] (0xc000a16a50) Data frame received for 
5\nI0515 21:53:38.665810 1747 log.go:172] (0xc0009360a0) (5) Data frame handling\nI0515 21:53:38.665840 1747 log.go:172] (0xc000a16a50) Data frame received for 3\nI0515 21:53:38.665851 1747 log.go:172] (0xc0006b5b80) (3) Data frame handling\nI0515 21:53:38.665865 1747 log.go:172] (0xc0006b5b80) (3) Data frame sent\nI0515 21:53:38.666012 1747 log.go:172] (0xc000a16a50) Data frame received for 3\nI0515 21:53:38.666040 1747 log.go:172] (0xc0006b5b80) (3) Data frame handling\nI0515 21:53:38.667483 1747 log.go:172] (0xc000a16a50) Data frame received for 1\nI0515 21:53:38.667541 1747 log.go:172] (0xc000936000) (1) Data frame handling\nI0515 21:53:38.667576 1747 log.go:172] (0xc000936000) (1) Data frame sent\nI0515 21:53:38.667597 1747 log.go:172] (0xc000a16a50) (0xc000936000) Stream removed, broadcasting: 1\nI0515 21:53:38.667618 1747 log.go:172] (0xc000a16a50) Go away received\nI0515 21:53:38.668095 1747 log.go:172] (0xc000a16a50) (0xc000936000) Stream removed, broadcasting: 1\nI0515 21:53:38.668120 1747 log.go:172] (0xc000a16a50) (0xc0006b5b80) Stream removed, broadcasting: 3\nI0515 21:53:38.668133 1747 log.go:172] (0xc000a16a50) (0xc0009360a0) Stream removed, broadcasting: 5\n" May 15 21:53:38.674: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 15 21:53:38.674: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 15 21:53:38.674: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8013 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 15 21:53:38.926: INFO: stderr: "I0515 21:53:38.797852 1771 log.go:172] (0xc000a7c000) (0xc000772000) Create stream\nI0515 21:53:38.797915 1771 log.go:172] (0xc000a7c000) (0xc000772000) Stream added, broadcasting: 1\nI0515 21:53:38.799528 1771 log.go:172] (0xc000a7c000) Reply frame received for 1\nI0515 21:53:38.799586 1771 log.go:172] (0xc000a7c000) (0xc0007720a0) Create stream\nI0515 21:53:38.799609 1771 log.go:172] (0xc000a7c000) (0xc0007720a0) Stream added, broadcasting: 3\nI0515 21:53:38.800475 1771 log.go:172] (0xc000a7c000) Reply frame received for 3\nI0515 21:53:38.800527 1771 log.go:172] (0xc000a7c000) (0xc000852000) Create stream\nI0515 21:53:38.800554 1771 log.go:172] (0xc000a7c000) (0xc000852000) Stream added, broadcasting: 5\nI0515 21:53:38.801520 1771 log.go:172] (0xc000a7c000) Reply frame received for 5\nI0515 21:53:38.853616 1771 log.go:172] (0xc000a7c000) Data frame received for 5\nI0515 21:53:38.853725 1771 log.go:172] (0xc000852000) (5) Data frame handling\nI0515 21:53:38.853756 1771 log.go:172] (0xc000852000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0515 21:53:38.916506 1771 log.go:172] (0xc000a7c000) Data frame received for 3\nI0515 21:53:38.916533 1771 log.go:172] (0xc0007720a0) (3) Data frame handling\nI0515 21:53:38.916541 1771 log.go:172] (0xc0007720a0) (3) Data frame sent\nI0515 21:53:38.916546 1771 log.go:172] (0xc000a7c000) Data frame received for 3\nI0515 21:53:38.916576 1771 log.go:172] (0xc000a7c000) Data frame received for 5\nI0515 21:53:38.916624 1771 log.go:172] (0xc000852000) (5) Data frame handling\nI0515 21:53:38.916654 1771 log.go:172] (0xc0007720a0) (3) Data frame handling\nI0515 21:53:38.918771 1771 log.go:172] (0xc000a7c000) Data frame received for 1\nI0515 21:53:38.918807 1771 log.go:172] (0xc000772000) (1) Data frame handling\nI0515 21:53:38.918829 1771 log.go:172] (0xc000772000) (1) 
Data frame sent\nI0515 21:53:38.918856 1771 log.go:172] (0xc000a7c000) (0xc000772000) Stream removed, broadcasting: 1\nI0515 21:53:38.918881 1771 log.go:172] (0xc000a7c000) Go away received\nI0515 21:53:38.919470 1771 log.go:172] (0xc000a7c000) (0xc000772000) Stream removed, broadcasting: 1\nI0515 21:53:38.919584 1771 log.go:172] (0xc000a7c000) (0xc0007720a0) Stream removed, broadcasting: 3\nI0515 21:53:38.919613 1771 log.go:172] (0xc000a7c000) (0xc000852000) Stream removed, broadcasting: 5\n" May 15 21:53:38.926: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 15 21:53:38.926: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 15 21:53:38.926: INFO: Waiting for statefulset status.replicas updated to 0 May 15 21:53:38.929: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 May 15 21:53:48.938: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 15 21:53:48.938: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 15 21:53:48.938: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 15 21:53:48.951: INFO: POD NODE PHASE GRACE CONDITIONS May 15 21:53:48.951: INFO: ss-0 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:06 +0000 UTC }] May 15 21:53:48.951: INFO: ss-1 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:27 +0000 UTC }] May 15 21:53:48.951: INFO: ss-2 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:27 +0000 UTC }] May 15 21:53:48.951: INFO: May 15 21:53:48.951: INFO: StatefulSet ss has not reached scale 0, at 3 May 15 21:53:49.956: INFO: POD NODE PHASE GRACE CONDITIONS May 15 21:53:49.956: INFO: ss-0 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:06 +0000 UTC }] May 15 21:53:49.956: INFO: ss-1 jerma-worker2 Running 30s 
[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:27 +0000 UTC }] May 15 21:53:49.956: INFO: ss-2 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:27 +0000 UTC }] May 15 21:53:49.956: INFO: May 15 21:53:49.956: INFO: StatefulSet ss has not reached scale 0, at 3 May 15 21:53:51.022: INFO: POD NODE PHASE GRACE CONDITIONS May 15 21:53:51.022: INFO: ss-0 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:06 +0000 UTC }] May 15 21:53:51.022: INFO: ss-1 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:27 +0000 UTC }] May 15 21:53:51.022: INFO: ss-2 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:27 +0000 UTC }] May 15 21:53:51.022: INFO: May 15 21:53:51.022: INFO: StatefulSet ss has not reached scale 0, at 3 May 15 21:53:52.028: INFO: POD NODE PHASE GRACE CONDITIONS May 15 21:53:52.028: INFO: ss-0 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:06 +0000 UTC }] May 15 21:53:52.028: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:39 +0000 UTC ContainersNotReady containers with unready status: 
[webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:27 +0000 UTC }] May 15 21:53:52.028: INFO: May 15 21:53:52.028: INFO: StatefulSet ss has not reached scale 0, at 2 May 15 21:53:53.032: INFO: POD NODE PHASE GRACE CONDITIONS May 15 21:53:53.032: INFO: ss-0 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:06 +0000 UTC }] May 15 21:53:53.032: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:27 +0000 UTC }] May 15 21:53:53.032: INFO: May 15 21:53:53.032: INFO: StatefulSet ss has not reached scale 0, at 2 May 15 21:53:54.036: INFO: POD NODE PHASE GRACE CONDITIONS May 15 21:53:54.036: INFO: ss-0 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:06 +0000 UTC }] May 15 21:53:54.036: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:27 +0000 UTC }] May 15 21:53:54.036: INFO: May 15 21:53:54.036: INFO: StatefulSet ss has not reached scale 0, at 2 May 15 21:53:55.039: INFO: POD NODE PHASE GRACE CONDITIONS May 15 21:53:55.040: INFO: ss-0 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:06 +0000 UTC }] May 15 21:53:55.040: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 
+0000 UTC 2020-05-15 21:53:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:27 +0000 UTC }] May 15 21:53:55.040: INFO: May 15 21:53:55.040: INFO: StatefulSet ss has not reached scale 0, at 2 May 15 21:53:56.043: INFO: POD NODE PHASE GRACE CONDITIONS May 15 21:53:56.043: INFO: ss-0 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:06 +0000 UTC }] May 15 21:53:56.043: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:27 +0000 UTC }] May 15 21:53:56.043: INFO: May 15 21:53:56.043: INFO: StatefulSet ss has not reached scale 0, at 2 May 15 21:53:57.048: INFO: POD NODE PHASE GRACE CONDITIONS May 15 21:53:57.048: INFO: ss-0 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:06 +0000 UTC }] May 15 21:53:57.048: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:27 +0000 UTC }] May 15 21:53:57.048: INFO: May 15 21:53:57.048: INFO: StatefulSet ss has not reached scale 0, at 2 May 15 21:53:58.052: INFO: POD NODE PHASE GRACE CONDITIONS May 15 21:53:58.052: INFO: ss-0 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:06 +0000 UTC }] May 15 21:53:58.052: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:39 +0000 UTC 
ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-15 21:53:27 +0000 UTC }] May 15 21:53:58.052: INFO: May 15 21:53:58.052: INFO: StatefulSet ss has not reached scale 0, at 2
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-8013
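The records that follow show the RunHostCmd retry pattern: the same kubectl exec is re-run every 10s, and because the in-pod command ends in "|| true", a non-zero rc can only mean the exec itself failed (first "container not found" while the webserver container is down, then pods "ss-0" not found once the scale-down deletes the pod). A minimal standalone sketch of that pattern, assuming kubectl on PATH; this is not the framework's actual helper:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	// The exact command from the log; the kubeconfig path is the one the suite uses.
    	args := []string{
    		"--kubeconfig=/root/.kube/config",
    		"exec", "--namespace=statefulset-8013", "ss-0", "--",
    		"/bin/sh", "-x", "-c",
    		"mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true",
    	}
    	// Give up after about five minutes, roughly the span the retries cover above.
    	deadline := time.Now().Add(5 * time.Minute)
    	for time.Now().Before(deadline) {
    		out, err := exec.Command("kubectl", args...).CombinedOutput()
    		if err == nil {
    			fmt.Printf("stdout: %s", out)
    			return
    		}
    		// Mirrors the "Waiting 10s to retry failed RunHostCmd" cadence in the log.
    		fmt.Printf("rc != 0 (%v); waiting 10s to retry\n", err)
    		time.Sleep(10 * time.Second)
    	}
    	fmt.Println("giving up; reporting whatever stdout was captured, as the framework does")
    }

May 15 21:53:59.058: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8013 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 15 21:53:59.200: INFO: rc: 1 May 15 21:53:59.200: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8013 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 May 15 21:54:09.200: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8013 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 15 21:54:09.310: INFO: rc: 1 May 15 21:54:09.310: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8013 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 15 21:54:19.310: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8013 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 15 21:54:19.404: INFO: rc: 1 May 15 21:54:19.404: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8013 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 15 21:54:29.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8013 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 15 21:54:29.525: INFO: rc: 1 May 15 21:54:29.525: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8013 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 15 21:54:39.526: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8013 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 15 21:54:39.622: INFO: rc: 1 May 15 21:54:39.622: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8013 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 15 21:54:49.623: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8013 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 15 21:54:49.723: INFO: rc: 1 May 15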
21:54:49.723: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8013 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 15 21:54:59.723: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8013 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 15 21:54:59.840: INFO: rc: 1 May 15 21:54:59.840: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8013 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 15 21:55:09.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8013 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 15 21:55:09.947: INFO: rc: 1 May 15 21:55:09.948: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8013 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 15 21:55:19.948: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8013 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 15 21:55:20.045: INFO: rc: 1 May 15 21:55:20.045: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8013 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 15 21:55:30.045: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8013 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 15 21:55:30.150: INFO: rc: 1 May 15 21:55:30.150: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8013 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 15 21:55:40.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8013 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 15 21:55:40.248: INFO: rc: 1 May 15 21:55:40.248: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8013 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 15 21:55:50.248: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8013 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 15 21:55:50.352: INFO: rc: 1 May 15 21:55:50.352: INFO: Waiting 10s to retry 
failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8013 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 15 21:56:00.352: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8013 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 15 21:56:00.459: INFO: rc: 1 May 15 21:56:00.459: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8013 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 15 21:56:10.459: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8013 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 15 21:56:10.562: INFO: rc: 1 May 15 21:56:10.562: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8013 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 15 21:56:20.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8013 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 15 21:56:20.653: INFO: rc: 1 May 15 21:56:20.653: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8013 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 15 21:56:30.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8013 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 15 21:56:30.758: INFO: rc: 1 May 15 21:56:30.758: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8013 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 15 21:56:40.758: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8013 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 15 21:56:40.855: INFO: rc: 1 May 15 21:56:40.855: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8013 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 15 21:56:50.855: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8013 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 15 21:56:50.962: INFO: rc: 1 May 15 21:56:50.962: INFO: Waiting 10s to retry failed RunHostCmd: error running 
/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8013 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 15 21:57:00.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8013 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 15 21:57:01.060: INFO: rc: 1 May 15 21:57:01.060: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8013 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 15 21:57:11.060: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8013 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 15 21:57:11.153: INFO: rc: 1 May 15 21:57:11.153: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8013 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 15 21:57:21.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8013 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 15 21:57:21.245: INFO: rc: 1 May 15 21:57:21.245: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8013 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 15 21:57:31.245: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8013 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 15 21:57:34.263: INFO: rc: 1 May 15 21:57:34.263: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8013 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 15 21:57:44.264: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8013 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 15 21:57:44.350: INFO: rc: 1 May 15 21:57:44.350: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8013 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 15 21:57:54.351: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8013 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 15 21:57:54.461: INFO: rc: 1 May 15 21:57:54.461: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-8013 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 15 21:58:04.461: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8013 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 15 21:58:04.570: INFO: rc: 1 May 15 21:58:04.570: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8013 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 15 21:58:14.570: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8013 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 15 21:58:14.676: INFO: rc: 1 May 15 21:58:14.676: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8013 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 15 21:58:24.676: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8013 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 15 21:58:24.788: INFO: rc: 1 May 15 21:58:24.789: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8013 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 15 21:58:34.789: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8013 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 15 21:58:34.876: INFO: rc: 1 May 15 21:58:34.876: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8013 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 15 21:58:44.876: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8013 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 15 21:58:44.959: INFO: rc: 1 May 15 21:58:44.959: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8013 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 15 21:58:54.960: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8013 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 15 21:58:55.053: INFO: rc: 1 May 15 21:58:55.053: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec 
--namespace=statefulset-8013 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 15 21:59:05.053: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8013 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 15 21:59:05.143: INFO: rc: 1 May 15 21:59:05.143: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: May 15 21:59:05.143: INFO: Scaling statefulset ss to 0 May 15 21:59:05.151: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 15 21:59:05.153: INFO: Deleting all statefulset in ns statefulset-8013 May 15 21:59:05.156: INFO: Scaling statefulset ss to 0 May 15 21:59:05.163: INFO: Waiting for statefulset status.replicas updated to 0 May 15 21:59:05.165: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:59:05.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8013" for this suite.
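The teardown above performs the scale-down through the API: it scales ss to 0 and then waits for status.replicas to reach 0. A rough client-go sketch of that sequence (recent client-go assumed; the suite itself goes through its own e2e helpers):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	sts := cs.AppsV1().StatefulSets("statefulset-8013")

    	// Scale ss to 0 replicas, as "Scaling statefulset ss to 0" does above.
    	ss, err := sts.Get(context.TODO(), "ss", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	zero := int32(0)
    	ss.Spec.Replicas = &zero
    	if _, err := sts.Update(context.TODO(), ss, metav1.UpdateOptions{}); err != nil {
    		panic(err)
    	}

    	// Wait for status.replicas to be updated to 0.
    	for {
    		ss, err := sts.Get(context.TODO(), "ss", metav1.GetOptions{})
    		if err != nil {
    			panic(err)
    		}
    		if ss.Status.Replicas == 0 {
    			fmt.Println("statefulset ss scaled to 0")
    			return
    		}
    		time.Sleep(time.Second)
    	}
    }

A production teardown would more likely go through the Scale subresource and retry on update conflicts; the plain Update keeps the sketch short.

• [SLOW TEST:358.610 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":278,"completed":151,"skipped":2423,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:59:05.189: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-sflhc in namespace proxy-3680 I0515 21:59:05.296582 6 runners.go:189] Created replication controller with name: proxy-service-sflhc, namespace: proxy-3680, replica count: 1 I0515 21:59:06.347025 6 runners.go:189] proxy-service-sflhc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0515 21:59:07.347203 6 runners.go:189] proxy-service-sflhc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0515 21:59:08.347422 6 runners.go:189]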
proxy-service-sflhc Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0515 21:59:09.347593 6 runners.go:189] proxy-service-sflhc Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0515 21:59:10.347779 6 runners.go:189] proxy-service-sflhc Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0515 21:59:11.347973 6 runners.go:189] proxy-service-sflhc Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0515 21:59:12.348242 6 runners.go:189] proxy-service-sflhc Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0515 21:59:13.348508 6 runners.go:189] proxy-service-sflhc Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0515 21:59:14.348711 6 runners.go:189] proxy-service-sflhc Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0515 21:59:15.348907 6 runners.go:189] proxy-service-sflhc Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0515 21:59:16.349359 6 runners.go:189] proxy-service-sflhc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 15 21:59:16.352: INFO: setup took 11.104600837s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
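Each attempt below issues the same 16 requests through the apiserver proxy subresource. Read off the records, the path grammar is /api/v1/namespaces/<ns>/pods/[<scheme>:]<pod>[:<port>]/proxy/<path> for pods, and the analogous services form with a named port instead of a number. A small helper that composes such paths (purely illustrative, not part of the suite):

    package main

    import "fmt"

    // proxyPath builds an apiserver proxy path for a pod or service target.
    // kind is "pods" or "services"; scheme may be "", "http" or "https";
    // port is a port number for pods or a port name for services ("" for none).
    func proxyPath(ns, kind, scheme, name, port string) string {
    	target := name
    	if scheme != "" {
    		target = scheme + ":" + target
    	}
    	if port != "" {
    		target += ":" + port
    	}
    	return fmt.Sprintf("/api/v1/namespaces/%s/%s/%s/proxy/", ns, kind, target)
    }

    func main() {
    	// Reproduces two of the URL shapes seen in the records below.
    	fmt.Println(proxyPath("proxy-3680", "pods", "http", "proxy-service-sflhc-fv784", "1080"))
    	fmt.Println(proxyPath("proxy-3680", "services", "https", "proxy-service-sflhc", "tlsportname1"))
    }

The short bodies in the records (test, foo, bar, tls baz, tls qux) are the echo server's per-port responses, which is how each request is tied back to the port it was proxied to; longer bodies are truncated in the log.

May 15 21:59:16.361: INFO: (0) /api/v1/namespaces/proxy-3680/pods/http:proxy-service-sflhc-fv784:1080/proxy/: ... (200; 8.7472ms) May 15 21:59:16.361: INFO: (0) /api/v1/namespaces/proxy-3680/pods/proxy-service-sflhc-fv784/proxy/: test (200; 9.194327ms) May 15 21:59:16.362: INFO: (0) /api/v1/namespaces/proxy-3680/pods/http:proxy-service-sflhc-fv784:162/proxy/: bar (200; 9.936027ms) May 15 21:59:16.366: INFO: (0) /api/v1/namespaces/proxy-3680/pods/proxy-service-sflhc-fv784:160/proxy/: foo (200; 13.79665ms) May 15 21:59:16.366: INFO: (0) /api/v1/namespaces/proxy-3680/services/proxy-service-sflhc:portname2/proxy/: bar (200; 13.970168ms) May 15 21:59:16.366: INFO: (0) /api/v1/namespaces/proxy-3680/services/proxy-service-sflhc:portname1/proxy/: foo (200; 14.269555ms) May 15 21:59:16.367: INFO: (0) /api/v1/namespaces/proxy-3680/pods/proxy-service-sflhc-fv784:162/proxy/: bar (200; 14.555517ms) May 15 21:59:16.367: INFO: (0) /api/v1/namespaces/proxy-3680/pods/http:proxy-service-sflhc-fv784:160/proxy/: foo (200; 14.958105ms) May 15 21:59:16.368: INFO: (0) /api/v1/namespaces/proxy-3680/services/http:proxy-service-sflhc:portname1/proxy/: foo (200; 15.207056ms) May 15 21:59:16.368: INFO: (0) /api/v1/namespaces/proxy-3680/services/http:proxy-service-sflhc:portname2/proxy/: bar (200; 15.531617ms) May 15 21:59:16.368: INFO: (0) /api/v1/namespaces/proxy-3680/pods/proxy-service-sflhc-fv784:1080/proxy/: test<...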
(200; 15.856948ms) May 15 21:59:16.376: INFO: (0) /api/v1/namespaces/proxy-3680/pods/https:proxy-service-sflhc-fv784:460/proxy/: tls baz (200; 24.112035ms) May 15 21:59:16.376: INFO: (0) /api/v1/namespaces/proxy-3680/services/https:proxy-service-sflhc:tlsportname2/proxy/: tls qux (200; 23.99289ms) May 15 21:59:16.376: INFO: (0) /api/v1/namespaces/proxy-3680/services/https:proxy-service-sflhc:tlsportname1/proxy/: tls baz (200; 23.975496ms) May 15 21:59:16.376: INFO: (0) /api/v1/namespaces/proxy-3680/pods/https:proxy-service-sflhc-fv784:462/proxy/: tls qux (200; 24.192621ms) May 15 21:59:16.377: INFO: (0) /api/v1/namespaces/proxy-3680/pods/https:proxy-service-sflhc-fv784:443/proxy/: test<... (200; 3.551631ms) May 15 21:59:16.383: INFO: (1) /api/v1/namespaces/proxy-3680/pods/proxy-service-sflhc-fv784/proxy/: test (200; 5.681808ms) May 15 21:59:16.383: INFO: (1) /api/v1/namespaces/proxy-3680/pods/proxy-service-sflhc-fv784:162/proxy/: bar (200; 6.41027ms) May 15 21:59:16.383: INFO: (1) /api/v1/namespaces/proxy-3680/pods/http:proxy-service-sflhc-fv784:1080/proxy/: ... (200; 6.392069ms) May 15 21:59:16.384: INFO: (1) /api/v1/namespaces/proxy-3680/services/proxy-service-sflhc:portname2/proxy/: bar (200; 6.757087ms) May 15 21:59:16.384: INFO: (1) /api/v1/namespaces/proxy-3680/pods/https:proxy-service-sflhc-fv784:443/proxy/: test (200; 4.967706ms) May 15 21:59:16.391: INFO: (2) /api/v1/namespaces/proxy-3680/pods/proxy-service-sflhc-fv784:162/proxy/: bar (200; 4.984826ms) May 15 21:59:16.391: INFO: (2) /api/v1/namespaces/proxy-3680/pods/https:proxy-service-sflhc-fv784:443/proxy/: ... (200; 5.125226ms) May 15 21:59:16.391: INFO: (2) /api/v1/namespaces/proxy-3680/pods/https:proxy-service-sflhc-fv784:460/proxy/: tls baz (200; 5.307934ms) May 15 21:59:16.391: INFO: (2) /api/v1/namespaces/proxy-3680/services/proxy-service-sflhc:portname1/proxy/: foo (200; 5.492629ms) May 15 21:59:16.391: INFO: (2) /api/v1/namespaces/proxy-3680/services/proxy-service-sflhc:portname2/proxy/: bar (200; 5.593799ms) May 15 21:59:16.391: INFO: (2) /api/v1/namespaces/proxy-3680/pods/proxy-service-sflhc-fv784:1080/proxy/: test<... (200; 5.613556ms) May 15 21:59:16.391: INFO: (2) /api/v1/namespaces/proxy-3680/services/http:proxy-service-sflhc:portname1/proxy/: foo (200; 5.585451ms) May 15 21:59:16.392: INFO: (2) /api/v1/namespaces/proxy-3680/services/http:proxy-service-sflhc:portname2/proxy/: bar (200; 5.865107ms) May 15 21:59:16.392: INFO: (2) /api/v1/namespaces/proxy-3680/services/https:proxy-service-sflhc:tlsportname1/proxy/: tls baz (200; 5.881951ms) May 15 21:59:16.392: INFO: (2) /api/v1/namespaces/proxy-3680/services/https:proxy-service-sflhc:tlsportname2/proxy/: tls qux (200; 5.805202ms) May 15 21:59:16.396: INFO: (3) /api/v1/namespaces/proxy-3680/pods/http:proxy-service-sflhc-fv784:160/proxy/: foo (200; 4.396239ms) May 15 21:59:16.396: INFO: (3) /api/v1/namespaces/proxy-3680/pods/proxy-service-sflhc-fv784:162/proxy/: bar (200; 4.412699ms) May 15 21:59:16.396: INFO: (3) /api/v1/namespaces/proxy-3680/pods/proxy-service-sflhc-fv784:160/proxy/: foo (200; 4.501589ms) May 15 21:59:16.396: INFO: (3) /api/v1/namespaces/proxy-3680/pods/http:proxy-service-sflhc-fv784:162/proxy/: bar (200; 4.419973ms) May 15 21:59:16.396: INFO: (3) /api/v1/namespaces/proxy-3680/pods/https:proxy-service-sflhc-fv784:443/proxy/: ... 
(200; 4.546434ms) May 15 21:59:16.396: INFO: (3) /api/v1/namespaces/proxy-3680/pods/https:proxy-service-sflhc-fv784:462/proxy/: tls qux (200; 4.476697ms) May 15 21:59:16.396: INFO: (3) /api/v1/namespaces/proxy-3680/pods/proxy-service-sflhc-fv784:1080/proxy/: test<... (200; 4.508088ms) May 15 21:59:16.398: INFO: (3) /api/v1/namespaces/proxy-3680/pods/proxy-service-sflhc-fv784/proxy/: test (200; 6.098389ms) May 15 21:59:16.398: INFO: (3) /api/v1/namespaces/proxy-3680/services/http:proxy-service-sflhc:portname2/proxy/: bar (200; 6.180107ms) May 15 21:59:16.398: INFO: (3) /api/v1/namespaces/proxy-3680/services/proxy-service-sflhc:portname2/proxy/: bar (200; 6.419998ms) May 15 21:59:16.398: INFO: (3) /api/v1/namespaces/proxy-3680/services/http:proxy-service-sflhc:portname1/proxy/: foo (200; 6.269513ms) May 15 21:59:16.398: INFO: (3) /api/v1/namespaces/proxy-3680/services/https:proxy-service-sflhc:tlsportname2/proxy/: tls qux (200; 6.354411ms) May 15 21:59:16.398: INFO: (3) /api/v1/namespaces/proxy-3680/services/proxy-service-sflhc:portname1/proxy/: foo (200; 6.458447ms) May 15 21:59:16.398: INFO: (3) /api/v1/namespaces/proxy-3680/pods/https:proxy-service-sflhc-fv784:460/proxy/: tls baz (200; 6.364526ms) May 15 21:59:16.398: INFO: (3) /api/v1/namespaces/proxy-3680/services/https:proxy-service-sflhc:tlsportname1/proxy/: tls baz (200; 6.398592ms) May 15 21:59:16.402: INFO: (4) /api/v1/namespaces/proxy-3680/pods/proxy-service-sflhc-fv784:162/proxy/: bar (200; 3.909472ms) May 15 21:59:16.402: INFO: (4) /api/v1/namespaces/proxy-3680/pods/https:proxy-service-sflhc-fv784:460/proxy/: tls baz (200; 3.948432ms) May 15 21:59:16.403: INFO: (4) /api/v1/namespaces/proxy-3680/services/https:proxy-service-sflhc:tlsportname1/proxy/: tls baz (200; 4.557841ms) May 15 21:59:16.403: INFO: (4) /api/v1/namespaces/proxy-3680/pods/http:proxy-service-sflhc-fv784:1080/proxy/: ... (200; 4.516801ms) May 15 21:59:16.403: INFO: (4) /api/v1/namespaces/proxy-3680/pods/https:proxy-service-sflhc-fv784:462/proxy/: tls qux (200; 4.529486ms) May 15 21:59:16.403: INFO: (4) /api/v1/namespaces/proxy-3680/services/http:proxy-service-sflhc:portname2/proxy/: bar (200; 4.572139ms) May 15 21:59:16.404: INFO: (4) /api/v1/namespaces/proxy-3680/pods/http:proxy-service-sflhc-fv784:162/proxy/: bar (200; 5.567377ms) May 15 21:59:16.404: INFO: (4) /api/v1/namespaces/proxy-3680/pods/proxy-service-sflhc-fv784/proxy/: test (200; 5.788367ms) May 15 21:59:16.404: INFO: (4) /api/v1/namespaces/proxy-3680/services/https:proxy-service-sflhc:tlsportname2/proxy/: tls qux (200; 5.821666ms) May 15 21:59:16.404: INFO: (4) /api/v1/namespaces/proxy-3680/pods/http:proxy-service-sflhc-fv784:160/proxy/: foo (200; 6.006313ms) May 15 21:59:16.404: INFO: (4) /api/v1/namespaces/proxy-3680/pods/https:proxy-service-sflhc-fv784:443/proxy/: test<... (200; 6.103448ms) May 15 21:59:16.414: INFO: (5) /api/v1/namespaces/proxy-3680/services/http:proxy-service-sflhc:portname1/proxy/: foo (200; 9.438402ms) May 15 21:59:16.424: INFO: (5) /api/v1/namespaces/proxy-3680/services/proxy-service-sflhc:portname2/proxy/: bar (200; 18.96872ms) May 15 21:59:16.424: INFO: (5) /api/v1/namespaces/proxy-3680/pods/https:proxy-service-sflhc-fv784:460/proxy/: tls baz (200; 18.971893ms) May 15 21:59:16.424: INFO: (5) /api/v1/namespaces/proxy-3680/pods/proxy-service-sflhc-fv784:1080/proxy/: test<... (200; 18.986395ms) May 15 21:59:16.424: INFO: (5) /api/v1/namespaces/proxy-3680/pods/http:proxy-service-sflhc-fv784:1080/proxy/: ... 
(200; 18.995694ms) May 15 21:59:16.424: INFO: (5) /api/v1/namespaces/proxy-3680/pods/proxy-service-sflhc-fv784/proxy/: test (200; 19.066463ms) May 15 21:59:16.424: INFO: (5) /api/v1/namespaces/proxy-3680/pods/proxy-service-sflhc-fv784:160/proxy/: foo (200; 19.028599ms) May 15 21:59:16.424: INFO: (5) /api/v1/namespaces/proxy-3680/pods/http:proxy-service-sflhc-fv784:160/proxy/: foo (200; 19.103451ms) May 15 21:59:16.424: INFO: (5) /api/v1/namespaces/proxy-3680/services/proxy-service-sflhc:portname1/proxy/: foo (200; 19.535761ms) May 15 21:59:16.424: INFO: (5) /api/v1/namespaces/proxy-3680/pods/http:proxy-service-sflhc-fv784:162/proxy/: bar (200; 19.735011ms) May 15 21:59:16.424: INFO: (5) /api/v1/namespaces/proxy-3680/pods/proxy-service-sflhc-fv784:162/proxy/: bar (200; 19.536245ms) May 15 21:59:16.425: INFO: (5) /api/v1/namespaces/proxy-3680/pods/https:proxy-service-sflhc-fv784:443/proxy/: test (200; 3.533061ms) May 15 21:59:16.430: INFO: (6) /api/v1/namespaces/proxy-3680/pods/proxy-service-sflhc-fv784:162/proxy/: bar (200; 3.913128ms) May 15 21:59:16.430: INFO: (6) /api/v1/namespaces/proxy-3680/pods/http:proxy-service-sflhc-fv784:160/proxy/: foo (200; 3.150234ms) May 15 21:59:16.431: INFO: (6) /api/v1/namespaces/proxy-3680/pods/https:proxy-service-sflhc-fv784:443/proxy/: test<... (200; 4.22708ms) May 15 21:59:16.431: INFO: (6) /api/v1/namespaces/proxy-3680/pods/https:proxy-service-sflhc-fv784:462/proxy/: tls qux (200; 4.350359ms) May 15 21:59:16.431: INFO: (6) /api/v1/namespaces/proxy-3680/pods/https:proxy-service-sflhc-fv784:460/proxy/: tls baz (200; 4.63597ms) May 15 21:59:16.431: INFO: (6) /api/v1/namespaces/proxy-3680/pods/proxy-service-sflhc-fv784:160/proxy/: foo (200; 5.112644ms) May 15 21:59:16.431: INFO: (6) /api/v1/namespaces/proxy-3680/pods/http:proxy-service-sflhc-fv784:1080/proxy/: ... (200; 4.854888ms) May 15 21:59:16.432: INFO: (6) /api/v1/namespaces/proxy-3680/services/http:proxy-service-sflhc:portname2/proxy/: bar (200; 5.011866ms) May 15 21:59:16.432: INFO: (6) /api/v1/namespaces/proxy-3680/services/proxy-service-sflhc:portname2/proxy/: bar (200; 5.087725ms) May 15 21:59:16.432: INFO: (6) /api/v1/namespaces/proxy-3680/services/http:proxy-service-sflhc:portname1/proxy/: foo (200; 5.50124ms) May 15 21:59:16.432: INFO: (6) /api/v1/namespaces/proxy-3680/services/https:proxy-service-sflhc:tlsportname1/proxy/: tls baz (200; 5.448512ms) May 15 21:59:16.432: INFO: (6) /api/v1/namespaces/proxy-3680/services/https:proxy-service-sflhc:tlsportname2/proxy/: tls qux (200; 4.737999ms) May 15 21:59:16.432: INFO: (6) /api/v1/namespaces/proxy-3680/services/proxy-service-sflhc:portname1/proxy/: foo (200; 4.87321ms) May 15 21:59:16.434: INFO: (7) /api/v1/namespaces/proxy-3680/pods/proxy-service-sflhc-fv784:160/proxy/: foo (200; 2.083884ms) May 15 21:59:16.436: INFO: (7) /api/v1/namespaces/proxy-3680/pods/http:proxy-service-sflhc-fv784:162/proxy/: bar (200; 3.711547ms) May 15 21:59:16.436: INFO: (7) /api/v1/namespaces/proxy-3680/pods/https:proxy-service-sflhc-fv784:460/proxy/: tls baz (200; 3.778071ms) May 15 21:59:16.436: INFO: (7) /api/v1/namespaces/proxy-3680/services/proxy-service-sflhc:portname1/proxy/: foo (200; 4.347235ms) May 15 21:59:16.437: INFO: (7) /api/v1/namespaces/proxy-3680/pods/proxy-service-sflhc-fv784/proxy/: test (200; 4.628672ms) May 15 21:59:16.437: INFO: (7) /api/v1/namespaces/proxy-3680/pods/proxy-service-sflhc-fv784:1080/proxy/: test<... 
(200; 4.560261ms) May 15 21:59:16.437: INFO: (7) /api/v1/namespaces/proxy-3680/pods/http:proxy-service-sflhc-fv784:160/proxy/: foo (200; 4.638178ms) May 15 21:59:16.437: INFO: (7) /api/v1/namespaces/proxy-3680/services/proxy-service-sflhc:portname2/proxy/: bar (200; 4.610368ms) May 15 21:59:16.437: INFO: (7) /api/v1/namespaces/proxy-3680/services/http:proxy-service-sflhc:portname2/proxy/: bar (200; 4.76583ms) May 15 21:59:16.437: INFO: (7) /api/v1/namespaces/proxy-3680/pods/http:proxy-service-sflhc-fv784:1080/proxy/: ... (200; 4.833875ms) May 15 21:59:16.437: INFO: (7) /api/v1/namespaces/proxy-3680/services/https:proxy-service-sflhc:tlsportname1/proxy/: tls baz (200; 4.830246ms) May 15 21:59:16.437: INFO: (7) /api/v1/namespaces/proxy-3680/services/http:proxy-service-sflhc:portname1/proxy/: foo (200; 4.893999ms) May 15 21:59:16.437: INFO: (7) /api/v1/namespaces/proxy-3680/services/https:proxy-service-sflhc:tlsportname2/proxy/: tls qux (200; 4.821058ms) May 15 21:59:16.437: INFO: (7) /api/v1/namespaces/proxy-3680/pods/https:proxy-service-sflhc-fv784:443/proxy/: test<... (200; 4.259442ms) May 15 21:59:16.441: INFO: (8) /api/v1/namespaces/proxy-3680/pods/proxy-service-sflhc-fv784:160/proxy/: foo (200; 4.370123ms) May 15 21:59:16.441: INFO: (8) /api/v1/namespaces/proxy-3680/pods/https:proxy-service-sflhc-fv784:462/proxy/: tls qux (200; 4.40902ms) May 15 21:59:16.441: INFO: (8) /api/v1/namespaces/proxy-3680/pods/proxy-service-sflhc-fv784/proxy/: test (200; 4.36771ms) May 15 21:59:16.441: INFO: (8) /api/v1/namespaces/proxy-3680/pods/https:proxy-service-sflhc-fv784:443/proxy/: ... (200; 4.894134ms) May 15 21:59:16.443: INFO: (8) /api/v1/namespaces/proxy-3680/services/proxy-service-sflhc:portname1/proxy/: foo (200; 6.489818ms) May 15 21:59:16.444: INFO: (8) /api/v1/namespaces/proxy-3680/services/https:proxy-service-sflhc:tlsportname1/proxy/: tls baz (200; 6.750777ms) May 15 21:59:16.444: INFO: (8) /api/v1/namespaces/proxy-3680/services/https:proxy-service-sflhc:tlsportname2/proxy/: tls qux (200; 6.74062ms) May 15 21:59:16.444: INFO: (8) /api/v1/namespaces/proxy-3680/services/http:proxy-service-sflhc:portname2/proxy/: bar (200; 6.776118ms) May 15 21:59:16.444: INFO: (8) /api/v1/namespaces/proxy-3680/services/proxy-service-sflhc:portname2/proxy/: bar (200; 6.830855ms) May 15 21:59:16.444: INFO: (8) /api/v1/namespaces/proxy-3680/services/http:proxy-service-sflhc:portname1/proxy/: foo (200; 6.791425ms) May 15 21:59:16.448: INFO: (9) /api/v1/namespaces/proxy-3680/services/proxy-service-sflhc:portname1/proxy/: foo (200; 3.657613ms) May 15 21:59:16.448: INFO: (9) /api/v1/namespaces/proxy-3680/services/http:proxy-service-sflhc:portname1/proxy/: foo (200; 3.921703ms) May 15 21:59:16.448: INFO: (9) /api/v1/namespaces/proxy-3680/services/proxy-service-sflhc:portname2/proxy/: bar (200; 4.008814ms) May 15 21:59:16.448: INFO: (9) /api/v1/namespaces/proxy-3680/pods/proxy-service-sflhc-fv784:160/proxy/: foo (200; 4.167169ms) May 15 21:59:16.448: INFO: (9) /api/v1/namespaces/proxy-3680/services/https:proxy-service-sflhc:tlsportname2/proxy/: tls qux (200; 4.273923ms) May 15 21:59:16.448: INFO: (9) /api/v1/namespaces/proxy-3680/services/https:proxy-service-sflhc:tlsportname1/proxy/: tls baz (200; 4.333505ms) May 15 21:59:16.448: INFO: (9) /api/v1/namespaces/proxy-3680/services/http:proxy-service-sflhc:portname2/proxy/: bar (200; 4.300274ms) May 15 21:59:16.449: INFO: (9) /api/v1/namespaces/proxy-3680/pods/https:proxy-service-sflhc-fv784:462/proxy/: tls qux (200; 4.94471ms) May 15 21:59:16.449: INFO: (9) 
/api/v1/namespaces/proxy-3680/pods/http:proxy-service-sflhc-fv784:162/proxy/: bar (200; 4.894743ms) May 15 21:59:16.449: INFO: (9) /api/v1/namespaces/proxy-3680/pods/proxy-service-sflhc-fv784:162/proxy/: bar (200; 5.01274ms) May 15 21:59:16.449: INFO: (9) /api/v1/namespaces/proxy-3680/pods/proxy-service-sflhc-fv784/proxy/: test (200; 4.980841ms) May 15 21:59:16.449: INFO: (9) /api/v1/namespaces/proxy-3680/pods/https:proxy-service-sflhc-fv784:460/proxy/: tls baz (200; 5.104252ms) May 15 21:59:16.449: INFO: (9) /api/v1/namespaces/proxy-3680/pods/https:proxy-service-sflhc-fv784:443/proxy/: ... (200; 5.594772ms) May 15 21:59:16.450: INFO: (9) /api/v1/namespaces/proxy-3680/pods/proxy-service-sflhc-fv784:1080/proxy/: test<... (200; 5.569728ms) May 15 21:59:16.475: INFO: (10) /api/v1/namespaces/proxy-3680/pods/proxy-service-sflhc-fv784/proxy/: test (200; 25.710628ms) May 15 21:59:16.475: INFO: (10) /api/v1/namespaces/proxy-3680/pods/http:proxy-service-sflhc-fv784:162/proxy/: bar (200; 25.827774ms) May 15 21:59:16.476: INFO: (10) /api/v1/namespaces/proxy-3680/pods/https:proxy-service-sflhc-fv784:460/proxy/: tls baz (200; 25.997082ms) May 15 21:59:16.476: INFO: (10) /api/v1/namespaces/proxy-3680/pods/https:proxy-service-sflhc-fv784:462/proxy/: tls qux (200; 25.983432ms) May 15 21:59:16.476: INFO: (10) /api/v1/namespaces/proxy-3680/pods/proxy-service-sflhc-fv784:1080/proxy/: test<... (200; 26.093967ms) May 15 21:59:16.476: INFO: (10) /api/v1/namespaces/proxy-3680/pods/proxy-service-sflhc-fv784:162/proxy/: bar (200; 26.201681ms) May 15 21:59:16.476: INFO: (10) /api/v1/namespaces/proxy-3680/pods/http:proxy-service-sflhc-fv784:1080/proxy/: ... (200; 26.292161ms) May 15 21:59:16.476: INFO: (10) /api/v1/namespaces/proxy-3680/pods/http:proxy-service-sflhc-fv784:160/proxy/: foo (200; 26.462471ms) May 15 21:59:16.476: INFO: (10) /api/v1/namespaces/proxy-3680/pods/https:proxy-service-sflhc-fv784:443/proxy/: ... (200; 5.416295ms) May 15 21:59:16.483: INFO: (11) /api/v1/namespaces/proxy-3680/pods/proxy-service-sflhc-fv784:1080/proxy/: test<... (200; 5.502968ms) May 15 21:59:16.483: INFO: (11) /api/v1/namespaces/proxy-3680/pods/proxy-service-sflhc-fv784/proxy/: test (200; 5.459732ms) May 15 21:59:16.483: INFO: (11) /api/v1/namespaces/proxy-3680/pods/http:proxy-service-sflhc-fv784:160/proxy/: foo (200; 5.481168ms) May 15 21:59:16.484: INFO: (11) /api/v1/namespaces/proxy-3680/pods/https:proxy-service-sflhc-fv784:462/proxy/: tls qux (200; 5.897838ms) May 15 21:59:16.484: INFO: (11) /api/v1/namespaces/proxy-3680/pods/proxy-service-sflhc-fv784:160/proxy/: foo (200; 5.933989ms) May 15 21:59:16.484: INFO: (11) /api/v1/namespaces/proxy-3680/pods/https:proxy-service-sflhc-fv784:443/proxy/: test (200; 5.10582ms) May 15 21:59:16.490: INFO: (12) /api/v1/namespaces/proxy-3680/pods/https:proxy-service-sflhc-fv784:460/proxy/: tls baz (200; 5.087444ms) May 15 21:59:16.490: INFO: (12) /api/v1/namespaces/proxy-3680/pods/http:proxy-service-sflhc-fv784:160/proxy/: foo (200; 5.22884ms) May 15 21:59:16.490: INFO: (12) /api/v1/namespaces/proxy-3680/pods/http:proxy-service-sflhc-fv784:1080/proxy/: ... (200; 5.265909ms) May 15 21:59:16.490: INFO: (12) /api/v1/namespaces/proxy-3680/services/http:proxy-service-sflhc:portname2/proxy/: bar (200; 5.319858ms) May 15 21:59:16.491: INFO: (12) /api/v1/namespaces/proxy-3680/pods/http:proxy-service-sflhc-fv784:162/proxy/: bar (200; 5.403131ms) May 15 21:59:16.491: INFO: (12) /api/v1/namespaces/proxy-3680/pods/proxy-service-sflhc-fv784:1080/proxy/: test<... 
(200; 5.365427ms) May 15 21:59:16.491: INFO: (12) /api/v1/namespaces/proxy-3680/pods/proxy-service-sflhc-fv784:162/proxy/: bar (200; 5.567438ms) May 15 21:59:16.491: INFO: (12) /api/v1/namespaces/proxy-3680/pods/https:proxy-service-sflhc-fv784:443/proxy/: test<... (200; 4.565536ms) May 15 21:59:16.495: INFO: (13) /api/v1/namespaces/proxy-3680/services/https:proxy-service-sflhc:tlsportname2/proxy/: tls qux (200; 4.64086ms) May 15 21:59:16.495: INFO: (13) /api/v1/namespaces/proxy-3680/pods/http:proxy-service-sflhc-fv784:160/proxy/: foo (200; 4.612838ms) May 15 21:59:16.495: INFO: (13) /api/v1/namespaces/proxy-3680/services/proxy-service-sflhc:portname1/proxy/: foo (200; 4.718833ms) May 15 21:59:16.495: INFO: (13) /api/v1/namespaces/proxy-3680/pods/proxy-service-sflhc-fv784/proxy/: test (200; 4.704006ms) May 15 21:59:16.496: INFO: (13) /api/v1/namespaces/proxy-3680/pods/http:proxy-service-sflhc-fv784:162/proxy/: bar (200; 4.743462ms) May 15 21:59:16.496: INFO: (13) /api/v1/namespaces/proxy-3680/pods/http:proxy-service-sflhc-fv784:1080/proxy/: ... (200; 4.783232ms) May 15 21:59:16.496: INFO: (13) /api/v1/namespaces/proxy-3680/pods/proxy-service-sflhc-fv784:160/proxy/: foo (200; 4.858544ms) May 15 21:59:16.496: INFO: (13) /api/v1/namespaces/proxy-3680/services/https:proxy-service-sflhc:tlsportname1/proxy/: tls baz (200; 4.841633ms) May 15 21:59:16.496: INFO: (13) /api/v1/namespaces/proxy-3680/pods/https:proxy-service-sflhc-fv784:460/proxy/: tls baz (200; 4.81655ms) May 15 21:59:16.496: INFO: (13) /api/v1/namespaces/proxy-3680/services/http:proxy-service-sflhc:portname1/proxy/: foo (200; 4.869465ms) May 15 21:59:16.496: INFO: (13) /api/v1/namespaces/proxy-3680/pods/https:proxy-service-sflhc-fv784:462/proxy/: tls qux (200; 4.903504ms) May 15 21:59:16.499: INFO: (14) /api/v1/namespaces/proxy-3680/pods/https:proxy-service-sflhc-fv784:460/proxy/: tls baz (200; 3.406596ms) May 15 21:59:16.500: INFO: (14) /api/v1/namespaces/proxy-3680/pods/proxy-service-sflhc-fv784:162/proxy/: bar (200; 3.931263ms) May 15 21:59:16.500: INFO: (14) /api/v1/namespaces/proxy-3680/pods/http:proxy-service-sflhc-fv784:162/proxy/: bar (200; 4.691166ms) May 15 21:59:16.500: INFO: (14) /api/v1/namespaces/proxy-3680/pods/proxy-service-sflhc-fv784:160/proxy/: foo (200; 4.702943ms) May 15 21:59:16.501: INFO: (14) /api/v1/namespaces/proxy-3680/pods/proxy-service-sflhc-fv784:1080/proxy/: test<... (200; 4.760302ms) May 15 21:59:16.505: INFO: (14) /api/v1/namespaces/proxy-3680/pods/proxy-service-sflhc-fv784/proxy/: test (200; 9.080005ms) May 15 21:59:16.510: INFO: (14) /api/v1/namespaces/proxy-3680/pods/http:proxy-service-sflhc-fv784:160/proxy/: foo (200; 14.245926ms) May 15 21:59:16.510: INFO: (14) /api/v1/namespaces/proxy-3680/pods/http:proxy-service-sflhc-fv784:1080/proxy/: ... (200; 14.254929ms) May 15 21:59:16.510: INFO: (14) /api/v1/namespaces/proxy-3680/pods/https:proxy-service-sflhc-fv784:443/proxy/: test (200; 3.828969ms) May 15 21:59:16.514: INFO: (15) /api/v1/namespaces/proxy-3680/pods/http:proxy-service-sflhc-fv784:160/proxy/: foo (200; 3.924851ms) May 15 21:59:16.514: INFO: (15) /api/v1/namespaces/proxy-3680/pods/http:proxy-service-sflhc-fv784:162/proxy/: bar (200; 4.08847ms) May 15 21:59:16.514: INFO: (15) /api/v1/namespaces/proxy-3680/pods/https:proxy-service-sflhc-fv784:460/proxy/: tls baz (200; 4.047553ms) May 15 21:59:16.514: INFO: (15) /api/v1/namespaces/proxy-3680/pods/https:proxy-service-sflhc-fv784:443/proxy/: test<... 
(200; 4.513281ms) May 15 21:59:16.515: INFO: (15) /api/v1/namespaces/proxy-3680/pods/proxy-service-sflhc-fv784:160/proxy/: foo (200; 4.43524ms) May 15 21:59:16.515: INFO: (15) /api/v1/namespaces/proxy-3680/pods/https:proxy-service-sflhc-fv784:462/proxy/: tls qux (200; 4.465768ms) May 15 21:59:16.515: INFO: (15) /api/v1/namespaces/proxy-3680/services/proxy-service-sflhc:portname1/proxy/: foo (200; 4.465498ms) May 15 21:59:16.515: INFO: (15) /api/v1/namespaces/proxy-3680/services/proxy-service-sflhc:portname2/proxy/: bar (200; 4.497856ms) May 15 21:59:16.515: INFO: (15) /api/v1/namespaces/proxy-3680/pods/http:proxy-service-sflhc-fv784:1080/proxy/: ... (200; 4.499074ms) May 15 21:59:16.515: INFO: (15) /api/v1/namespaces/proxy-3680/services/http:proxy-service-sflhc:portname2/proxy/: bar (200; 4.810163ms) May 15 21:59:16.516: INFO: (15) /api/v1/namespaces/proxy-3680/services/https:proxy-service-sflhc:tlsportname1/proxy/: tls baz (200; 5.372373ms) May 15 21:59:16.516: INFO: (15) /api/v1/namespaces/proxy-3680/services/https:proxy-service-sflhc:tlsportname2/proxy/: tls qux (200; 5.358017ms) May 15 21:59:16.518: INFO: (16) /api/v1/namespaces/proxy-3680/pods/https:proxy-service-sflhc-fv784:443/proxy/: test<... (200; 3.369854ms) May 15 21:59:16.519: INFO: (16) /api/v1/namespaces/proxy-3680/pods/http:proxy-service-sflhc-fv784:160/proxy/: foo (200; 3.485642ms) May 15 21:59:16.519: INFO: (16) /api/v1/namespaces/proxy-3680/pods/proxy-service-sflhc-fv784/proxy/: test (200; 3.620694ms) May 15 21:59:16.520: INFO: (16) /api/v1/namespaces/proxy-3680/pods/proxy-service-sflhc-fv784:160/proxy/: foo (200; 3.690137ms) May 15 21:59:16.520: INFO: (16) /api/v1/namespaces/proxy-3680/pods/http:proxy-service-sflhc-fv784:1080/proxy/: ... (200; 3.990353ms) May 15 21:59:16.520: INFO: (16) /api/v1/namespaces/proxy-3680/pods/https:proxy-service-sflhc-fv784:462/proxy/: tls qux (200; 3.965852ms) May 15 21:59:16.520: INFO: (16) /api/v1/namespaces/proxy-3680/services/proxy-service-sflhc:portname2/proxy/: bar (200; 4.080244ms) May 15 21:59:16.520: INFO: (16) /api/v1/namespaces/proxy-3680/pods/http:proxy-service-sflhc-fv784:162/proxy/: bar (200; 4.029415ms) May 15 21:59:16.521: INFO: (16) /api/v1/namespaces/proxy-3680/services/http:proxy-service-sflhc:portname2/proxy/: bar (200; 5.156545ms) May 15 21:59:16.521: INFO: (16) /api/v1/namespaces/proxy-3680/services/http:proxy-service-sflhc:portname1/proxy/: foo (200; 5.20492ms) May 15 21:59:16.521: INFO: (16) /api/v1/namespaces/proxy-3680/services/proxy-service-sflhc:portname1/proxy/: foo (200; 5.257083ms) May 15 21:59:16.521: INFO: (16) /api/v1/namespaces/proxy-3680/services/https:proxy-service-sflhc:tlsportname2/proxy/: tls qux (200; 5.276234ms) May 15 21:59:16.521: INFO: (16) /api/v1/namespaces/proxy-3680/services/https:proxy-service-sflhc:tlsportname1/proxy/: tls baz (200; 5.222812ms) May 15 21:59:16.525: INFO: (17) /api/v1/namespaces/proxy-3680/pods/https:proxy-service-sflhc-fv784:443/proxy/: ... 
(200; 4.308452ms) May 15 21:59:16.527: INFO: (17) /api/v1/namespaces/proxy-3680/services/http:proxy-service-sflhc:portname2/proxy/: bar (200; 5.591205ms) May 15 21:59:16.527: INFO: (17) /api/v1/namespaces/proxy-3680/pods/proxy-service-sflhc-fv784:160/proxy/: foo (200; 5.608311ms) May 15 21:59:16.527: INFO: (17) /api/v1/namespaces/proxy-3680/services/proxy-service-sflhc:portname1/proxy/: foo (200; 5.641025ms) May 15 21:59:16.527: INFO: (17) /api/v1/namespaces/proxy-3680/pods/https:proxy-service-sflhc-fv784:460/proxy/: tls baz (200; 5.706029ms) May 15 21:59:16.527: INFO: (17) /api/v1/namespaces/proxy-3680/pods/https:proxy-service-sflhc-fv784:462/proxy/: tls qux (200; 5.688719ms) May 15 21:59:16.527: INFO: (17) /api/v1/namespaces/proxy-3680/services/https:proxy-service-sflhc:tlsportname2/proxy/: tls qux (200; 5.731932ms) May 15 21:59:16.527: INFO: (17) /api/v1/namespaces/proxy-3680/pods/proxy-service-sflhc-fv784/proxy/: test (200; 5.68983ms) May 15 21:59:16.527: INFO: (17) /api/v1/namespaces/proxy-3680/pods/proxy-service-sflhc-fv784:162/proxy/: bar (200; 5.668079ms) May 15 21:59:16.527: INFO: (17) /api/v1/namespaces/proxy-3680/pods/proxy-service-sflhc-fv784:1080/proxy/: test<... (200; 5.736406ms) May 15 21:59:16.527: INFO: (17) /api/v1/namespaces/proxy-3680/pods/http:proxy-service-sflhc-fv784:162/proxy/: bar (200; 5.695543ms) May 15 21:59:16.527: INFO: (17) /api/v1/namespaces/proxy-3680/services/proxy-service-sflhc:portname2/proxy/: bar (200; 5.752054ms) May 15 21:59:16.527: INFO: (17) /api/v1/namespaces/proxy-3680/pods/http:proxy-service-sflhc-fv784:160/proxy/: foo (200; 5.724647ms) May 15 21:59:16.527: INFO: (17) /api/v1/namespaces/proxy-3680/services/http:proxy-service-sflhc:portname1/proxy/: foo (200; 5.766037ms) May 15 21:59:16.527: INFO: (17) /api/v1/namespaces/proxy-3680/services/https:proxy-service-sflhc:tlsportname1/proxy/: tls baz (200; 5.757144ms) May 15 21:59:16.529: INFO: (18) /api/v1/namespaces/proxy-3680/pods/http:proxy-service-sflhc-fv784:1080/proxy/: ... (200; 2.149919ms) May 15 21:59:16.531: INFO: (18) /api/v1/namespaces/proxy-3680/pods/http:proxy-service-sflhc-fv784:162/proxy/: bar (200; 4.036667ms) May 15 21:59:16.531: INFO: (18) /api/v1/namespaces/proxy-3680/pods/proxy-service-sflhc-fv784:1080/proxy/: test<... (200; 4.012568ms) May 15 21:59:16.531: INFO: (18) /api/v1/namespaces/proxy-3680/pods/http:proxy-service-sflhc-fv784:160/proxy/: foo (200; 4.031884ms) May 15 21:59:16.531: INFO: (18) /api/v1/namespaces/proxy-3680/pods/proxy-service-sflhc-fv784:160/proxy/: foo (200; 4.222735ms) May 15 21:59:16.531: INFO: (18) /api/v1/namespaces/proxy-3680/pods/proxy-service-sflhc-fv784:162/proxy/: bar (200; 4.190393ms) May 15 21:59:16.531: INFO: (18) /api/v1/namespaces/proxy-3680/pods/proxy-service-sflhc-fv784/proxy/: test (200; 4.195998ms) May 15 21:59:16.531: INFO: (18) /api/v1/namespaces/proxy-3680/pods/https:proxy-service-sflhc-fv784:462/proxy/: tls qux (200; 4.21908ms) May 15 21:59:16.531: INFO: (18) /api/v1/namespaces/proxy-3680/pods/https:proxy-service-sflhc-fv784:443/proxy/: test<... (200; 6.48653ms) May 15 21:59:16.539: INFO: (19) /api/v1/namespaces/proxy-3680/services/proxy-service-sflhc:portname2/proxy/: bar (200; 6.541331ms) May 15 21:59:16.539: INFO: (19) /api/v1/namespaces/proxy-3680/pods/http:proxy-service-sflhc-fv784:1080/proxy/: ... 
(200; 6.550393ms) May 15 21:59:16.539: INFO: (19) /api/v1/namespaces/proxy-3680/pods/proxy-service-sflhc-fv784/proxy/: test (200; 6.566968ms) May 15 21:59:16.539: INFO: (19) /api/v1/namespaces/proxy-3680/pods/http:proxy-service-sflhc-fv784:162/proxy/: bar (200; 6.619614ms) May 15 21:59:16.539: INFO: (19) /api/v1/namespaces/proxy-3680/services/https:proxy-service-sflhc:tlsportname1/proxy/: tls baz (200; 6.598648ms) May 15 21:59:16.539: INFO: (19) /api/v1/namespaces/proxy-3680/services/https:proxy-service-sflhc:tlsportname2/proxy/: tls qux (200; 6.653787ms) May 15 21:59:16.539: INFO: (19) /api/v1/namespaces/proxy-3680/pods/proxy-service-sflhc-fv784:160/proxy/: foo (200; 6.747012ms) May 15 21:59:16.539: INFO: (19) /api/v1/namespaces/proxy-3680/pods/https:proxy-service-sflhc-fv784:443/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service endpoint-test2 in namespace services-3345 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3345 to expose endpoints map[] May 15 21:59:29.749: INFO: Get endpoints failed (23.611538ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found May 15 21:59:30.754: INFO: successfully validated that service endpoint-test2 in namespace services-3345 exposes endpoints map[] (1.028544099s elapsed) STEP: Creating pod pod1 in namespace services-3345 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3345 to expose endpoints map[pod1:[80]] May 15 21:59:34.812: INFO: successfully validated that service endpoint-test2 in namespace services-3345 exposes endpoints map[pod1:[80]] (4.051900541s elapsed) STEP: Creating pod pod2 in namespace services-3345 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3345 to expose endpoints map[pod1:[80] pod2:[80]] May 15 21:59:39.007: INFO: successfully validated that service endpoint-test2 in namespace services-3345 exposes endpoints map[pod1:[80] pod2:[80]] (4.191390134s elapsed) STEP: Deleting pod pod1 in namespace services-3345 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3345 to expose endpoints map[pod2:[80]] May 15 21:59:40.037: INFO: successfully validated that service endpoint-test2 in namespace services-3345 exposes endpoints map[pod2:[80]] (1.02521397s elapsed) STEP: Deleting pod pod2 in namespace services-3345 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3345 to expose endpoints map[] May 15 21:59:41.052: INFO: successfully validated that service endpoint-test2 in namespace services-3345 exposes endpoints map[] (1.010593818s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:59:41.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3345" for this suite. 
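For reference, the endpoint bookkeeping validated above can be reproduced by hand. A minimal sketch; the service name follows the log, while the namespace, label selector, and image are illustrative assumptions:

kubectl create namespace endpoints-demo
# A service with no matching pods publishes an empty Endpoints object.
kubectl create service clusterip endpoint-test2 --tcp=80:80 -n endpoints-demo
kubectl get endpoints endpoint-test2 -n endpoints-demo
# kubectl create service sets the selector app=endpoint-test2, so a pod
# carrying that label is added to the Endpoints object once it is ready.
kubectl run pod1 --image=nginx --labels=app=endpoint-test2 -n endpoints-demo
kubectl get endpoints endpoint-test2 -n endpoints-demo -w
# Deleting the pod removes its address again, as the test asserts.
kubectl delete pod pod1 -n endpoints-demo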
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:11.636 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":278,"completed":153,"skipped":2439,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:59:41.248: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-f2f160f2-1aad-4f92-a673-f216d4c033a1 STEP: Creating a pod to test consume configMaps May 15 21:59:41.400: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-503f6e94-62ef-4766-91aa-cf6d5bd1982d" in namespace "projected-1937" to be "success or failure" May 15 21:59:41.444: INFO: Pod "pod-projected-configmaps-503f6e94-62ef-4766-91aa-cf6d5bd1982d": Phase="Pending", Reason="", readiness=false. Elapsed: 44.519452ms May 15 21:59:43.448: INFO: Pod "pod-projected-configmaps-503f6e94-62ef-4766-91aa-cf6d5bd1982d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048456996s May 15 21:59:45.490: INFO: Pod "pod-projected-configmaps-503f6e94-62ef-4766-91aa-cf6d5bd1982d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.090242145s STEP: Saw pod success May 15 21:59:45.490: INFO: Pod "pod-projected-configmaps-503f6e94-62ef-4766-91aa-cf6d5bd1982d" satisfied condition "success or failure" May 15 21:59:45.492: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-503f6e94-62ef-4766-91aa-cf6d5bd1982d container projected-configmap-volume-test: STEP: delete the pod May 15 21:59:45.521: INFO: Waiting for pod pod-projected-configmaps-503f6e94-62ef-4766-91aa-cf6d5bd1982d to disappear May 15 21:59:45.531: INFO: Pod pod-projected-configmaps-503f6e94-62ef-4766-91aa-cf6d5bd1982d no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 21:59:45.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1937" for this suite. 
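The projected ConfigMap consumption above amounts to mounting a ConfigMap through a projected volume source. A minimal sketch; every name and the key/value pair are invented for illustration:

kubectl create configmap demo-config --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-demo
spec:
  restartPolicy: Never
  containers:
  - name: view
    image: busybox
    command: ["cat", "/etc/projected/data-1"]
    volumeMounts:
    - name: config
      mountPath: /etc/projected
  volumes:
  - name: config
    projected:
      sources:
      - configMap:
          name: demo-config
EOF
kubectl logs projected-demo   # prints value-1 once the pod has run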
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":154,"skipped":2489,"failed":0} SSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 21:59:45.536: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:00:45.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3774" for this suite. • [SLOW TEST:60.093 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":278,"completed":155,"skipped":2493,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:00:45.630: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service nodeport-test with type=NodePort in namespace services-4601 STEP: creating replication controller nodeport-test in namespace services-4601 I0515 22:00:45.799886 6 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-4601, replica count: 2 I0515 22:00:48.850319 6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0515 22:00:51.850530 6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 
unknown, 0 runningButNotReady May 15 22:00:51.850: INFO: Creating new exec pod May 15 22:00:56.995: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4601 execpodg6zmk -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' May 15 22:00:57.211: INFO: stderr: "I0515 22:00:57.120499 2413 log.go:172] (0xc000116370) (0xc000598000) Create stream\nI0515 22:00:57.120586 2413 log.go:172] (0xc000116370) (0xc000598000) Stream added, broadcasting: 1\nI0515 22:00:57.123526 2413 log.go:172] (0xc000116370) Reply frame received for 1\nI0515 22:00:57.123555 2413 log.go:172] (0xc000116370) (0xc00070db80) Create stream\nI0515 22:00:57.123574 2413 log.go:172] (0xc000116370) (0xc00070db80) Stream added, broadcasting: 3\nI0515 22:00:57.124463 2413 log.go:172] (0xc000116370) Reply frame received for 3\nI0515 22:00:57.124522 2413 log.go:172] (0xc000116370) (0xc000598140) Create stream\nI0515 22:00:57.124548 2413 log.go:172] (0xc000116370) (0xc000598140) Stream added, broadcasting: 5\nI0515 22:00:57.125529 2413 log.go:172] (0xc000116370) Reply frame received for 5\nI0515 22:00:57.204880 2413 log.go:172] (0xc000116370) Data frame received for 5\nI0515 22:00:57.204917 2413 log.go:172] (0xc000598140) (5) Data frame handling\nI0515 22:00:57.204944 2413 log.go:172] (0xc000598140) (5) Data frame sent\nI0515 22:00:57.204955 2413 log.go:172] (0xc000116370) Data frame received for 5\nI0515 22:00:57.204964 2413 log.go:172] (0xc000598140) (5) Data frame handling\n+ nc -zv -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0515 22:00:57.205007 2413 log.go:172] (0xc000116370) Data frame received for 3\nI0515 22:00:57.205035 2413 log.go:172] (0xc00070db80) (3) Data frame handling\nI0515 22:00:57.207092 2413 log.go:172] (0xc000116370) Data frame received for 1\nI0515 22:00:57.207108 2413 log.go:172] (0xc000598000) (1) Data frame handling\nI0515 22:00:57.207117 2413 log.go:172] (0xc000598000) (1) Data frame sent\nI0515 22:00:57.207129 2413 log.go:172] (0xc000116370) (0xc000598000) Stream removed, broadcasting: 1\nI0515 22:00:57.207171 2413 log.go:172] (0xc000116370) Go away received\nI0515 22:00:57.207429 2413 log.go:172] (0xc000116370) (0xc000598000) Stream removed, broadcasting: 1\nI0515 22:00:57.207450 2413 log.go:172] (0xc000116370) (0xc00070db80) Stream removed, broadcasting: 3\nI0515 22:00:57.207473 2413 log.go:172] (0xc000116370) (0xc000598140) Stream removed, broadcasting: 5\n" May 15 22:00:57.211: INFO: stdout: "" May 15 22:00:57.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4601 execpodg6zmk -- /bin/sh -x -c nc -zv -t -w 2 10.105.40.2 80' May 15 22:00:57.502: INFO: stderr: "I0515 22:00:57.330361 2435 log.go:172] (0xc00070ca50) (0xc000b62000) Create stream\nI0515 22:00:57.330418 2435 log.go:172] (0xc00070ca50) (0xc000b62000) Stream added, broadcasting: 1\nI0515 22:00:57.332804 2435 log.go:172] (0xc00070ca50) Reply frame received for 1\nI0515 22:00:57.332895 2435 log.go:172] (0xc00070ca50) (0xc00068da40) Create stream\nI0515 22:00:57.332920 2435 log.go:172] (0xc00070ca50) (0xc00068da40) Stream added, broadcasting: 3\nI0515 22:00:57.334257 2435 log.go:172] (0xc00070ca50) Reply frame received for 3\nI0515 22:00:57.334288 2435 log.go:172] (0xc00070ca50) (0xc000b62140) Create stream\nI0515 22:00:57.334304 2435 log.go:172] (0xc00070ca50) (0xc000b62140) Stream added, broadcasting: 5\nI0515 22:00:57.335164 2435 log.go:172] (0xc00070ca50) Reply frame received for 5\nI0515 22:00:57.496252 2435 
log.go:172] (0xc00070ca50) Data frame received for 5\nI0515 22:00:57.496277 2435 log.go:172] (0xc000b62140) (5) Data frame handling\n+ nc -zv -t -w 2 10.105.40.2 80\nConnection to 10.105.40.2 80 port [tcp/http] succeeded!\nI0515 22:00:57.496304 2435 log.go:172] (0xc00070ca50) Data frame received for 3\nI0515 22:00:57.496332 2435 log.go:172] (0xc00068da40) (3) Data frame handling\nI0515 22:00:57.496358 2435 log.go:172] (0xc000b62140) (5) Data frame sent\nI0515 22:00:57.496368 2435 log.go:172] (0xc00070ca50) Data frame received for 5\nI0515 22:00:57.496378 2435 log.go:172] (0xc000b62140) (5) Data frame handling\nI0515 22:00:57.497663 2435 log.go:172] (0xc00070ca50) Data frame received for 1\nI0515 22:00:57.497680 2435 log.go:172] (0xc000b62000) (1) Data frame handling\nI0515 22:00:57.497704 2435 log.go:172] (0xc000b62000) (1) Data frame sent\nI0515 22:00:57.497805 2435 log.go:172] (0xc00070ca50) (0xc000b62000) Stream removed, broadcasting: 1\nI0515 22:00:57.497869 2435 log.go:172] (0xc00070ca50) Go away received\nI0515 22:00:57.498153 2435 log.go:172] (0xc00070ca50) (0xc000b62000) Stream removed, broadcasting: 1\nI0515 22:00:57.498175 2435 log.go:172] (0xc00070ca50) (0xc00068da40) Stream removed, broadcasting: 3\nI0515 22:00:57.498185 2435 log.go:172] (0xc00070ca50) (0xc000b62140) Stream removed, broadcasting: 5\n" May 15 22:00:57.502: INFO: stdout: "" May 15 22:00:57.502: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4601 execpodg6zmk -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.10 30905' May 15 22:00:57.720: INFO: stderr: "I0515 22:00:57.628295 2459 log.go:172] (0xc0000f49a0) (0xc000910140) Create stream\nI0515 22:00:57.628359 2459 log.go:172] (0xc0000f49a0) (0xc000910140) Stream added, broadcasting: 1\nI0515 22:00:57.630499 2459 log.go:172] (0xc0000f49a0) Reply frame received for 1\nI0515 22:00:57.630529 2459 log.go:172] (0xc0000f49a0) (0xc00070f5e0) Create stream\nI0515 22:00:57.630538 2459 log.go:172] (0xc0000f49a0) (0xc00070f5e0) Stream added, broadcasting: 3\nI0515 22:00:57.631292 2459 log.go:172] (0xc0000f49a0) Reply frame received for 3\nI0515 22:00:57.631324 2459 log.go:172] (0xc0000f49a0) (0xc0005e7c20) Create stream\nI0515 22:00:57.631333 2459 log.go:172] (0xc0000f49a0) (0xc0005e7c20) Stream added, broadcasting: 5\nI0515 22:00:57.632277 2459 log.go:172] (0xc0000f49a0) Reply frame received for 5\nI0515 22:00:57.714060 2459 log.go:172] (0xc0000f49a0) Data frame received for 3\nI0515 22:00:57.714100 2459 log.go:172] (0xc00070f5e0) (3) Data frame handling\nI0515 22:00:57.714125 2459 log.go:172] (0xc0000f49a0) Data frame received for 5\nI0515 22:00:57.714136 2459 log.go:172] (0xc0005e7c20) (5) Data frame handling\nI0515 22:00:57.714153 2459 log.go:172] (0xc0005e7c20) (5) Data frame sent\nI0515 22:00:57.714164 2459 log.go:172] (0xc0000f49a0) Data frame received for 5\nI0515 22:00:57.714172 2459 log.go:172] (0xc0005e7c20) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.10 30905\nConnection to 172.17.0.10 30905 port [tcp/30905] succeeded!\nI0515 22:00:57.715557 2459 log.go:172] (0xc0000f49a0) Data frame received for 1\nI0515 22:00:57.715574 2459 log.go:172] (0xc000910140) (1) Data frame handling\nI0515 22:00:57.715590 2459 log.go:172] (0xc000910140) (1) Data frame sent\nI0515 22:00:57.715603 2459 log.go:172] (0xc0000f49a0) (0xc000910140) Stream removed, broadcasting: 1\nI0515 22:00:57.715614 2459 log.go:172] (0xc0000f49a0) Go away received\nI0515 22:00:57.716084 2459 log.go:172] (0xc0000f49a0) (0xc000910140) Stream removed, broadcasting: 
1\nI0515 22:00:57.716110 2459 log.go:172] (0xc0000f49a0) (0xc00070f5e0) Stream removed, broadcasting: 3\nI0515 22:00:57.716119 2459 log.go:172] (0xc0000f49a0) (0xc0005e7c20) Stream removed, broadcasting: 5\n" May 15 22:00:57.721: INFO: stdout: "" May 15 22:00:57.721: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4601 execpodg6zmk -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.8 30905' May 15 22:00:57.919: INFO: stderr: "I0515 22:00:57.849062 2482 log.go:172] (0xc0009c4580) (0xc000b14000) Create stream\nI0515 22:00:57.849320 2482 log.go:172] (0xc0009c4580) (0xc000b14000) Stream added, broadcasting: 1\nI0515 22:00:57.851166 2482 log.go:172] (0xc0009c4580) Reply frame received for 1\nI0515 22:00:57.851206 2482 log.go:172] (0xc0009c4580) (0xc00068ba40) Create stream\nI0515 22:00:57.851223 2482 log.go:172] (0xc0009c4580) (0xc00068ba40) Stream added, broadcasting: 3\nI0515 22:00:57.851919 2482 log.go:172] (0xc0009c4580) Reply frame received for 3\nI0515 22:00:57.851963 2482 log.go:172] (0xc0009c4580) (0xc0004ae000) Create stream\nI0515 22:00:57.851980 2482 log.go:172] (0xc0009c4580) (0xc0004ae000) Stream added, broadcasting: 5\nI0515 22:00:57.852671 2482 log.go:172] (0xc0009c4580) Reply frame received for 5\nI0515 22:00:57.910862 2482 log.go:172] (0xc0009c4580) Data frame received for 5\nI0515 22:00:57.910887 2482 log.go:172] (0xc0004ae000) (5) Data frame handling\nI0515 22:00:57.910908 2482 log.go:172] (0xc0004ae000) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.8 30905\nI0515 22:00:57.911296 2482 log.go:172] (0xc0009c4580) Data frame received for 5\nI0515 22:00:57.911318 2482 log.go:172] (0xc0004ae000) (5) Data frame handling\nI0515 22:00:57.911331 2482 log.go:172] (0xc0004ae000) (5) Data frame sent\nConnection to 172.17.0.8 30905 port [tcp/30905] succeeded!\nI0515 22:00:57.911846 2482 log.go:172] (0xc0009c4580) Data frame received for 5\nI0515 22:00:57.911879 2482 log.go:172] (0xc0004ae000) (5) Data frame handling\nI0515 22:00:57.911987 2482 log.go:172] (0xc0009c4580) Data frame received for 3\nI0515 22:00:57.912002 2482 log.go:172] (0xc00068ba40) (3) Data frame handling\nI0515 22:00:57.913941 2482 log.go:172] (0xc0009c4580) Data frame received for 1\nI0515 22:00:57.913959 2482 log.go:172] (0xc000b14000) (1) Data frame handling\nI0515 22:00:57.913971 2482 log.go:172] (0xc000b14000) (1) Data frame sent\nI0515 22:00:57.913986 2482 log.go:172] (0xc0009c4580) (0xc000b14000) Stream removed, broadcasting: 1\nI0515 22:00:57.914018 2482 log.go:172] (0xc0009c4580) Go away received\nI0515 22:00:57.914430 2482 log.go:172] (0xc0009c4580) (0xc000b14000) Stream removed, broadcasting: 1\nI0515 22:00:57.914465 2482 log.go:172] (0xc0009c4580) (0xc00068ba40) Stream removed, broadcasting: 3\nI0515 22:00:57.914475 2482 log.go:172] (0xc0009c4580) (0xc0004ae000) Stream removed, broadcasting: 5\n" May 15 22:00:57.919: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:00:57.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4601" for this suite. 
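Condensed, the NodePort check above is: two backing pods, one NodePort service, then connectivity probes against the service name, the ClusterIP, and a node IP at the allocated node port. A hand-run sketch under assumed names and images:

kubectl create deployment nodeport-test --image=nginx
kubectl scale deployment nodeport-test --replicas=2
kubectl expose deployment nodeport-test --port=80 --type=NodePort
NODE_PORT=$(kubectl get svc nodeport-test -o jsonpath='{.spec.ports[0].nodePort}')
CLUSTER_IP=$(kubectl get svc nodeport-test -o jsonpath='{.spec.clusterIP}')
NODE_IP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[0].address}')
# Probe from inside the cluster, mirroring the exec-pod nc checks above.
kubectl run netcheck --rm -i --restart=Never --image=busybox -- sh -c "\
wget -q -T 2 -O /dev/null http://nodeport-test && echo name OK; \
wget -q -T 2 -O /dev/null http://$CLUSTER_IP && echo cluster-ip OK; \
wget -q -T 2 -O /dev/null http://$NODE_IP:$NODE_PORT && echo node-port OK"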
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:12.301 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":278,"completed":156,"skipped":2503,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:00:57.931: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting the proxy server May 15 22:00:58.055: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:00:58.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9723" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":278,"completed":157,"skipped":2526,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:00:58.159: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:01:09.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5054" for this suite. • [SLOW TEST:11.265 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":278,"completed":158,"skipped":2543,"failed":0} SSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:01:09.424: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 15 22:01:09.466: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:01:10.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-5376" for this suite. 
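Back to the ResourceQuota flow a few entries up: the create/capture/release accounting it checks can be watched by hand. A sketch; the namespace, names, and pause image are assumptions:

kubectl create namespace quota-demo
kubectl create quota test-quota --hard=replicationcontrollers=1 -n quota-demo
kubectl create -n quota-demo -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: quota-rc
spec:
  replicas: 0
  selector:
    app: quota-rc
  template:
    metadata:
      labels:
        app: quota-rc
    spec:
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1
EOF
kubectl describe quota test-quota -n quota-demo   # Used: replicationcontrollers 1
kubectl delete rc quota-rc -n quota-demo          # usage is released again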
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":278,"completed":159,"skipped":2548,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:01:10.172: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 15 22:01:10.221: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 15 22:01:12.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8946 create -f -' May 15 22:01:16.026: INFO: stderr: "" May 15 22:01:16.026: INFO: stdout: "e2e-test-crd-publish-openapi-3217-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" May 15 22:01:16.026: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8946 delete e2e-test-crd-publish-openapi-3217-crds test-cr' May 15 22:01:16.154: INFO: stderr: "" May 15 22:01:16.154: INFO: stdout: "e2e-test-crd-publish-openapi-3217-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" May 15 22:01:16.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8946 apply -f -' May 15 22:01:16.410: INFO: stderr: "" May 15 22:01:16.410: INFO: stdout: "e2e-test-crd-publish-openapi-3217-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" May 15 22:01:16.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8946 delete e2e-test-crd-publish-openapi-3217-crds test-cr' May 15 22:01:16.520: INFO: stderr: "" May 15 22:01:16.520: INFO: stdout: "e2e-test-crd-publish-openapi-3217-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR May 15 22:01:16.521: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3217-crds' May 15 22:01:16.750: INFO: stderr: "" May 15 22:01:16.750: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3217-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:01:18.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8946" for this suite. • [SLOW TEST:8.471 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":278,"completed":160,"skipped":2550,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:01:18.644: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-144d7e21-1447-4a02-9cc2-b5475ee15605 STEP: Creating a pod to test consume configMaps May 15 22:01:18.787: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-20920538-9677-422b-9564-530915649289" in namespace "projected-8559" to be "success or failure" May 15 22:01:18.790: INFO: Pod "pod-projected-configmaps-20920538-9677-422b-9564-530915649289": Phase="Pending", Reason="", readiness=false. Elapsed: 3.229459ms May 15 22:01:20.795: INFO: Pod "pod-projected-configmaps-20920538-9677-422b-9564-530915649289": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007805879s May 15 22:01:22.799: INFO: Pod "pod-projected-configmaps-20920538-9677-422b-9564-530915649289": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012092409s STEP: Saw pod success May 15 22:01:22.799: INFO: Pod "pod-projected-configmaps-20920538-9677-422b-9564-530915649289" satisfied condition "success or failure" May 15 22:01:22.802: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-20920538-9677-422b-9564-530915649289 container projected-configmap-volume-test: STEP: delete the pod May 15 22:01:22.906: INFO: Waiting for pod pod-projected-configmaps-20920538-9677-422b-9564-530915649289 to disappear May 15 22:01:22.938: INFO: Pod pod-projected-configmaps-20920538-9677-422b-9564-530915649289 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:01:22.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8559" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":161,"skipped":2613,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:01:22.946: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 15 22:01:28.318: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:01:28.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9355" for this suite. 
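What the termination-message test above asserts: with terminationMessagePolicy: FallbackToLogsOnError, logs are only promoted to the termination message when the container fails, so a succeeding container ends with an empty message, hence the Expected: &{} line. A sketch with assumed names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: termmsg-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "echo some log output; exit 0"]
    terminationMessagePolicy: FallbackToLogsOnError
EOF
# Empty output expected once the pod has succeeded:
kubectl get pod termmsg-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'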
• [SLOW TEST:5.560 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":162,"skipped":2635,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:01:28.506: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 15 22:01:32.801: INFO: Waiting up to 5m0s for pod "client-envvars-fc3bea83-57a5-410b-8f0a-ed02a2d775a8" in namespace "pods-354" to be "success or failure" May 15 22:01:32.823: INFO: Pod "client-envvars-fc3bea83-57a5-410b-8f0a-ed02a2d775a8": Phase="Pending", Reason="", readiness=false. Elapsed: 22.425748ms May 15 22:01:34.944: INFO: Pod "client-envvars-fc3bea83-57a5-410b-8f0a-ed02a2d775a8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.143451926s May 15 22:01:36.950: INFO: Pod "client-envvars-fc3bea83-57a5-410b-8f0a-ed02a2d775a8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.148792223s STEP: Saw pod success May 15 22:01:36.950: INFO: Pod "client-envvars-fc3bea83-57a5-410b-8f0a-ed02a2d775a8" satisfied condition "success or failure" May 15 22:01:36.953: INFO: Trying to get logs from node jerma-worker2 pod client-envvars-fc3bea83-57a5-410b-8f0a-ed02a2d775a8 container env3cont: STEP: delete the pod May 15 22:01:37.006: INFO: Waiting for pod client-envvars-fc3bea83-57a5-410b-8f0a-ed02a2d775a8 to disappear May 15 22:01:37.018: INFO: Pod client-envvars-fc3bea83-57a5-410b-8f0a-ed02a2d775a8 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:01:37.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-354" for this suite. 
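Context for the env-vars test above: the kubelet injects SERVICE_HOST/SERVICE_PORT variables for services that already exist when a pod starts, which is why the test creates the server pod and service before the client pod. A quick way to see them; all names are assumptions:

kubectl create deployment envdemo --image=nginx
kubectl expose deployment envdemo --port=80 --name=fooservice
# A pod created after the service sees FOOSERVICE_SERVICE_HOST and friends:
kubectl run envdump --rm -i --restart=Never --image=busybox -- env | grep FOOSERVICE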
• [SLOW TEST:8.521 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":278,"completed":163,"skipped":2666,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:01:37.027: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 15 22:01:37.839: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 15 22:01:40.085: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725176897, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725176897, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725176897, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725176897, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 15 22:01:43.148: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:01:44.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8628" for this suite. STEP: Destroying namespace "webhook-8628-markers" for this suite. 
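The listing and collection-delete that the webhook test performs against the admissionregistration API map onto kubectl as follows; the label selector here is an assumption, since the e2e suite uses its own internal labels:

kubectl get mutatingwebhookconfigurations
kubectl get mutatingwebhookconfigurations -l app=sample-webhook
# Delete the matching collection in one call, as the test does:
kubectl delete mutatingwebhookconfigurations -l app=sample-webhook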
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.170 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":278,"completed":164,"skipped":2667,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:01:44.198: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1681 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 15 22:01:44.263: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-3699' May 15 22:01:44.358: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 15 22:01:44.358: INFO: stdout: "job.batch/e2e-test-httpd-job created\n" STEP: verifying the job e2e-test-httpd-job was created [AfterEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1686 May 15 22:01:44.406: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-3699' May 15 22:01:44.566: INFO: stderr: "" May 15 22:01:44.566: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:01:44.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3699" for this suite. 
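As the stderr above notes, kubectl run --generator=job/v1 is deprecated; the direct replacement for creating the same Job is kubectl create job. Note that kubectl create job emits restartPolicy: Never, so reproducing --restart=OnFailure exactly still needs a manifest:

kubectl create job e2e-test-httpd-job --image=docker.io/library/httpd:2.4.38-alpine -n kubectl-3699
kubectl get jobs -n kubectl-3699
kubectl delete job e2e-test-httpd-job -n kubectl-3699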
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance]","total":278,"completed":165,"skipped":2697,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:01:44.627: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-3630.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-3630.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3630.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-3630.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-3630.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3630.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 15 22:01:53.419: INFO: DNS probes using dns-3630/dns-test-99148506-0ced-4389-af05-d68198bbb05a succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:01:53.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3630" for this suite. 
• [SLOW TEST:9.355 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":278,"completed":166,"skipped":2707,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:01:53.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 15 22:01:54.361: INFO: (0) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/ pods/ (200; 6.070277ms) May 15 22:01:54.364: INFO: (1) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.848706ms) May 15 22:01:54.367: INFO: (2) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.155083ms) May 15 22:01:54.371: INFO: (3) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.227957ms) May 15 22:01:54.373: INFO: (4) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.702268ms) May 15 22:01:54.376: INFO: (5) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.993644ms) May 15 22:01:54.379: INFO: (6) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.899122ms) May 15 22:01:54.382: INFO: (7) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.221754ms) May 15 22:01:54.384: INFO: (8) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.309134ms) May 15 22:01:54.461: INFO: (9) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 76.980144ms) May 15 22:01:54.466: INFO: (10) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 4.771988ms) May 15 22:01:54.470: INFO: (11) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.690129ms) May 15 22:01:54.473: INFO: (12) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.014234ms) May 15 22:01:54.476: INFO: (13) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.744851ms) May 15 22:01:54.479: INFO: (14) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.333313ms) May 15 22:01:54.482: INFO: (15) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.019956ms) May 15 22:01:54.485: INFO: (16) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.212179ms) May 15 22:01:54.488: INFO: (17) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.525273ms) May 15 22:01:54.491: INFO: (18) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.020395ms) May 15 22:01:54.494: INFO: (19) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/
(200; 3.373493ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:01:54.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-7553" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]","total":278,"completed":167,"skipped":2725,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:01:54.502: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 15 22:01:55.559: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 15 22:01:57.568: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725176915, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725176915, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725176915, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725176915, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 15 22:02:00.634: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 15 22:02:00.638: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-9629-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:02:01.421: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9055" for this suite. STEP: Destroying namespace "webhook-9055-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.080 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":278,"completed":168,"skipped":2749,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:02:01.584: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-1763/configmap-test-2d774e1e-ce9b-4e61-891e-0318352d619d STEP: Creating a pod to test consume configMaps May 15 22:02:01.708: INFO: Waiting up to 5m0s for pod "pod-configmaps-2007ea7d-dfa5-4246-a673-7f1d67211647" in namespace "configmap-1763" to be "success or failure" May 15 22:02:01.724: INFO: Pod "pod-configmaps-2007ea7d-dfa5-4246-a673-7f1d67211647": Phase="Pending", Reason="", readiness=false. Elapsed: 15.567791ms May 15 22:02:03.728: INFO: Pod "pod-configmaps-2007ea7d-dfa5-4246-a673-7f1d67211647": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019884738s May 15 22:02:05.731: INFO: Pod "pod-configmaps-2007ea7d-dfa5-4246-a673-7f1d67211647": Phase="Running", Reason="", readiness=true. Elapsed: 4.023505172s May 15 22:02:07.735: INFO: Pod "pod-configmaps-2007ea7d-dfa5-4246-a673-7f1d67211647": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.027083965s STEP: Saw pod success May 15 22:02:07.735: INFO: Pod "pod-configmaps-2007ea7d-dfa5-4246-a673-7f1d67211647" satisfied condition "success or failure" May 15 22:02:07.738: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-2007ea7d-dfa5-4246-a673-7f1d67211647 container env-test: STEP: delete the pod May 15 22:02:07.802: INFO: Waiting for pod pod-configmaps-2007ea7d-dfa5-4246-a673-7f1d67211647 to disappear May 15 22:02:07.807: INFO: Pod pod-configmaps-2007ea7d-dfa5-4246-a673-7f1d67211647 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:02:07.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1763" for this suite. 
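For reference, the ConfigMap-to-environment wiring this test exercises can be reproduced with a manifest along the following lines; the ConfigMap name, key, env var, and pod name here are illustrative sketches, not the generated names used above:

kubectl create configmap env-demo-cm --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: env-demo                 # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox:1.29
    command: ["sh", "-c", "env | grep CONFIG_DATA_1"]
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: env-demo-cm      # the ConfigMap created above
          key: data-1
EOF

The pod runs to completion, and its log should show the ConfigMap value injected as CONFIG_DATA_1, which is what the "success or failure" check above verifies.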
• [SLOW TEST:6.232 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":278,"completed":169,"skipped":2810,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 15 22:02:07.816: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a secret. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Discovering how many secrets are in namespace by default
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Secret
STEP: Ensuring resource quota status captures secret creation
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 15 22:02:24.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9721" for this suite.
• [SLOW TEST:17.165 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":278,"completed":170,"skipped":2852,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 15 22:02:24.982: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
May 15 22:02:25.052: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1978 /api/v1/namespaces/watch-1978/configmaps/e2e-watch-test-configmap-a b10f3c40-20d5-4361-9ad4-7541ef67e721 16480263 0 2020-05-15 22:02:25 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},}
May 15 22:02:25.052: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1978 /api/v1/namespaces/watch-1978/configmaps/e2e-watch-test-configmap-a b10f3c40-20d5-4361-9ad4-7541ef67e721 16480263 0 2020-05-15 22:02:25 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
May 15 22:02:35.058: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1978 /api/v1/namespaces/watch-1978/configmaps/e2e-watch-test-configmap-a b10f3c40-20d5-4361-9ad4-7541ef67e721 16480300 0 2020-05-15 22:02:25 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
May 15 22:02:35.058: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1978 /api/v1/namespaces/watch-1978/configmaps/e2e-watch-test-configmap-a b10f3c40-20d5-4361-9ad4-7541ef67e721 16480300 0 2020-05-15 22:02:25 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
May 15 22:02:45.066: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1978 /api/v1/namespaces/watch-1978/configmaps/e2e-watch-test-configmap-a b10f3c40-20d5-4361-9ad4-7541ef67e721 16480328 0 2020-05-15 22:02:25 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
May 15 22:02:45.067: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1978 /api/v1/namespaces/watch-1978/configmaps/e2e-watch-test-configmap-a b10f3c40-20d5-4361-9ad4-7541ef67e721 16480328 0 2020-05-15 22:02:25 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
May 15 22:02:55.072: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1978 /api/v1/namespaces/watch-1978/configmaps/e2e-watch-test-configmap-a b10f3c40-20d5-4361-9ad4-7541ef67e721 16480360 0 2020-05-15 22:02:25 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
May 15 22:02:55.072: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1978 /api/v1/namespaces/watch-1978/configmaps/e2e-watch-test-configmap-a b10f3c40-20d5-4361-9ad4-7541ef67e721 16480360 0 2020-05-15 22:02:25 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
May 15 22:03:05.080: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-1978 /api/v1/namespaces/watch-1978/configmaps/e2e-watch-test-configmap-b 6480c1d8-386d-42f7-ae8b-99ea9dbef198 16480390 0 2020-05-15 22:03:05 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},}
May 15 22:03:05.080: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-1978 /api/v1/namespaces/watch-1978/configmaps/e2e-watch-test-configmap-b 6480c1d8-386d-42f7-ae8b-99ea9dbef198 16480390 0 2020-05-15 22:03:05 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
May 15 22:03:15.087: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-1978 /api/v1/namespaces/watch-1978/configmaps/e2e-watch-test-configmap-b 6480c1d8-386d-42f7-ae8b-99ea9dbef198 16480420 0 2020-05-15 22:03:05 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},}
May 15 22:03:15.087: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-1978 /api/v1/namespaces/watch-1978/configmaps/e2e-watch-test-configmap-b 6480c1d8-386d-42f7-ae8b-99ea9dbef198 16480420 0 2020-05-15 22:03:05 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 15 22:03:25.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-1978" for this suite.
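The three labeled watches driven above can be reproduced by hand with kubectl; the label keys and values match the ones the test uses, while the ConfigMap name below is illustrative (note that labeling after creation surfaces on the A watch as a MODIFIED rather than an ADDED event):

kubectl get configmaps -l watch-this-configmap=multiple-watchers-A --watch &
kubectl get configmaps -l watch-this-configmap=multiple-watchers-B --watch &
kubectl get configmaps -l 'watch-this-configmap in (multiple-watchers-A,multiple-watchers-B)' --watch &
# drive a similar add/update/delete sequence:
kubectl create configmap demo-watch-cm
kubectl label configmap demo-watch-cm watch-this-configmap=multiple-watchers-A
kubectl patch configmap demo-watch-cm -p '{"data":{"mutation":"1"}}'
kubectl delete configmap demo-watch-cm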
• [SLOW TEST:60.116 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":278,"completed":171,"skipped":2869,"failed":0}
[sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 15 22:03:25.098: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from NodePort to ExternalName [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service nodeport-service with the type=NodePort in namespace services-5680
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-5680
STEP: creating replication controller externalsvc in namespace services-5680
I0515 22:03:25.317565 6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-5680, replica count: 2
I0515 22:03:28.367945 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0515 22:03:31.368133 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
STEP: changing the NodePort service to type=ExternalName
May 15 22:03:31.417: INFO: Creating new exec pod
May 15 22:03:35.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5680 execpodljg6v -- /bin/sh -x -c nslookup nodeport-service'
May 15 22:03:35.638: INFO: stderr: "I0515 22:03:35.561963 2666 log.go:172] (0xc000acb8c0) (0xc000a14820) Create stream\nI0515 22:03:35.562014 2666 log.go:172] (0xc000acb8c0) (0xc000a14820) Stream added, broadcasting: 1\nI0515 22:03:35.565680 2666 log.go:172] (0xc000acb8c0) Reply frame received for 1\nI0515 22:03:35.565743 2666 log.go:172] (0xc000acb8c0) (0xc0006c5c20) Create stream\nI0515 22:03:35.565760 2666 log.go:172] (0xc000acb8c0) (0xc0006c5c20) Stream added, broadcasting: 3\nI0515 22:03:35.566886 2666 log.go:172] (0xc000acb8c0) Reply frame received for 3\nI0515 22:03:35.566923 2666 log.go:172] (0xc000acb8c0) (0xc00065e820) Create stream\nI0515 22:03:35.566937 2666 log.go:172] (0xc000acb8c0) (0xc00065e820) Stream added, broadcasting: 5\nI0515 22:03:35.567900 2666 log.go:172] (0xc000acb8c0) Reply frame received for 5\nI0515 22:03:35.620868 2666 log.go:172] (0xc000acb8c0) Data frame received for 5\nI0515 22:03:35.620893 2666 log.go:172] (0xc00065e820) (5) Data frame handling\nI0515 22:03:35.620909 2666 log.go:172] (0xc00065e820) (5) Data frame sent\n+ nslookup nodeport-service\nI0515 22:03:35.629568 2666 log.go:172] (0xc000acb8c0) Data frame received for 3\nI0515 22:03:35.629610 2666 log.go:172] (0xc0006c5c20) (3) Data frame handling\nI0515 22:03:35.629637 2666 log.go:172] (0xc0006c5c20) (3) Data frame sent\nI0515 22:03:35.630531 2666 log.go:172] (0xc000acb8c0) Data frame received for 3\nI0515 22:03:35.630546 2666 log.go:172] (0xc0006c5c20) (3) Data frame handling\nI0515 22:03:35.630558 2666 log.go:172] (0xc0006c5c20) (3) Data frame sent\nI0515 22:03:35.631198 2666 log.go:172] (0xc000acb8c0) Data frame received for 5\nI0515 22:03:35.631243 2666 log.go:172] (0xc00065e820) (5) Data frame handling\nI0515 22:03:35.631277 2666 log.go:172] (0xc000acb8c0) Data frame received for 3\nI0515 22:03:35.631292 2666 log.go:172] (0xc0006c5c20) (3) Data frame handling\nI0515 22:03:35.632841 2666 log.go:172] (0xc000acb8c0) Data frame received for 1\nI0515 22:03:35.632858 2666 log.go:172] (0xc000a14820) (1) Data frame handling\nI0515 22:03:35.632871 2666 log.go:172] (0xc000a14820) (1) Data frame sent\nI0515 22:03:35.632884 2666 log.go:172] (0xc000acb8c0) (0xc000a14820) Stream removed, broadcasting: 1\nI0515 22:03:35.632924 2666 log.go:172] (0xc000acb8c0) Go away received\nI0515 22:03:35.633557 2666 log.go:172] (0xc000acb8c0) (0xc000a14820) Stream removed, broadcasting: 1\nI0515 22:03:35.633594 2666 log.go:172] (0xc000acb8c0) (0xc0006c5c20) Stream removed, broadcasting: 3\nI0515 22:03:35.633616 2666 log.go:172] (0xc000acb8c0) (0xc00065e820) Stream removed, broadcasting: 5\n"
May 15 22:03:35.639: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-5680.svc.cluster.local\tcanonical name = externalsvc.services-5680.svc.cluster.local.\nName:\texternalsvc.services-5680.svc.cluster.local\nAddress: 10.107.228.252\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-5680, will wait for the garbage collector to delete the pods
May 15 22:03:35.699: INFO: Deleting ReplicationController externalsvc took: 7.130411ms
May 15 22:03:35.999: INFO: Terminating ReplicationController externalsvc pods took: 300.222193ms
May 15 22:03:49.664: INFO: Cleaning up the NodePort to ExternalName test service
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 15 22:03:49.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5680" for this suite.
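The type flip at the heart of this test amounts to replacing the NodePort spec with an ExternalName one and clearing the fields ExternalName services may not carry; a rough kubectl equivalent is sketched below (service names as created above; the exact patch shape is an approximation of what the test framework does via the API):

kubectl patch service nodeport-service -n services-5680 --type merge \
  -p '{"spec":{"type":"ExternalName","externalName":"externalsvc.services-5680.svc.cluster.local","clusterIP":"","ports":null}}'
# verify from inside the cluster, as the exec pod above does:
kubectl run dns-probe --rm -it --restart=Never --image=busybox:1.28 -- \
  nslookup nodeport-service.services-5680.svc.cluster.local

The CNAME answer in the stdout above confirms the resolver now hands back the externalName target instead of a cluster IP.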
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143
• [SLOW TEST:24.623 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":278,"completed":172,"skipped":2869,"failed":0}
SSSS
------------------------------
[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 15 22:03:49.721: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[BeforeEach] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324
[It] should scale a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a replication controller
May 15 22:03:49.842: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3101'
May 15 22:03:50.210: INFO: stderr: ""
May 15 22:03:50.210: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May 15 22:03:50.210: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3101'
May 15 22:03:50.338: INFO: stderr: ""
May 15 22:03:50.338: INFO: stdout: "update-demo-nautilus-g5rkd update-demo-nautilus-jc8jk "
May 15 22:03:50.339: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g5rkd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3101'
May 15 22:03:50.443: INFO: stderr: ""
May 15 22:03:50.443: INFO: stdout: ""
May 15 22:03:50.443: INFO: update-demo-nautilus-g5rkd is created but not running
May 15 22:03:55.443: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3101'
May 15 22:03:55.564: INFO: stderr: ""
May 15 22:03:55.564: INFO: stdout: "update-demo-nautilus-g5rkd update-demo-nautilus-jc8jk "
May 15 22:03:55.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g5rkd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3101'
May 15 22:03:55.653: INFO: stderr: ""
May 15 22:03:55.653: INFO: stdout: "true"
May 15 22:03:55.653: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g5rkd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3101'
May 15 22:03:56.007: INFO: stderr: ""
May 15 22:03:56.008: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 15 22:03:56.008: INFO: validating pod update-demo-nautilus-g5rkd
May 15 22:03:56.042: INFO: got data: { "image": "nautilus.jpg" }
May 15 22:03:56.042: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 15 22:03:56.042: INFO: update-demo-nautilus-g5rkd is verified up and running
May 15 22:03:56.042: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jc8jk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3101'
May 15 22:03:56.139: INFO: stderr: ""
May 15 22:03:56.139: INFO: stdout: "true"
May 15 22:03:56.139: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jc8jk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3101'
May 15 22:03:56.238: INFO: stderr: ""
May 15 22:03:56.238: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 15 22:03:56.238: INFO: validating pod update-demo-nautilus-jc8jk
May 15 22:03:56.242: INFO: got data: { "image": "nautilus.jpg" }
May 15 22:03:56.242: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 15 22:03:56.242: INFO: update-demo-nautilus-jc8jk is verified up and running
STEP: scaling down the replication controller
May 15 22:03:56.243: INFO: scanned /root for discovery docs:
May 15 22:03:56.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-3101'
May 15 22:03:57.370: INFO: stderr: ""
May 15 22:03:57.370: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May 15 22:03:57.371: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3101' May 15 22:03:57.471: INFO: stderr: "" May 15 22:03:57.471: INFO: stdout: "update-demo-nautilus-g5rkd update-demo-nautilus-jc8jk " STEP: Replicas for name=update-demo: expected=1 actual=2 May 15 22:04:02.471: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3101' May 15 22:04:02.579: INFO: stderr: "" May 15 22:04:02.579: INFO: stdout: "update-demo-nautilus-g5rkd update-demo-nautilus-jc8jk " STEP: Replicas for name=update-demo: expected=1 actual=2 May 15 22:04:07.579: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3101' May 15 22:04:07.682: INFO: stderr: "" May 15 22:04:07.682: INFO: stdout: "update-demo-nautilus-g5rkd update-demo-nautilus-jc8jk " STEP: Replicas for name=update-demo: expected=1 actual=2 May 15 22:04:12.682: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3101' May 15 22:04:12.780: INFO: stderr: "" May 15 22:04:12.780: INFO: stdout: "update-demo-nautilus-g5rkd " May 15 22:04:12.780: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g5rkd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3101' May 15 22:04:12.874: INFO: stderr: "" May 15 22:04:12.874: INFO: stdout: "true" May 15 22:04:12.874: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g5rkd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3101' May 15 22:04:12.972: INFO: stderr: "" May 15 22:04:12.972: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 15 22:04:12.972: INFO: validating pod update-demo-nautilus-g5rkd May 15 22:04:12.975: INFO: got data: { "image": "nautilus.jpg" } May 15 22:04:12.975: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 15 22:04:12.975: INFO: update-demo-nautilus-g5rkd is verified up and running STEP: scaling up the replication controller May 15 22:04:12.978: INFO: scanned /root for discovery docs: May 15 22:04:12.978: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-3101' May 15 22:04:14.154: INFO: stderr: "" May 15 22:04:14.154: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
May 15 22:04:14.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3101' May 15 22:04:14.257: INFO: stderr: "" May 15 22:04:14.257: INFO: stdout: "update-demo-nautilus-b7pgq update-demo-nautilus-g5rkd " May 15 22:04:14.257: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-b7pgq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3101' May 15 22:04:14.350: INFO: stderr: "" May 15 22:04:14.351: INFO: stdout: "" May 15 22:04:14.351: INFO: update-demo-nautilus-b7pgq is created but not running May 15 22:04:19.351: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3101' May 15 22:04:19.448: INFO: stderr: "" May 15 22:04:19.448: INFO: stdout: "update-demo-nautilus-b7pgq update-demo-nautilus-g5rkd " May 15 22:04:19.448: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-b7pgq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3101' May 15 22:04:19.547: INFO: stderr: "" May 15 22:04:19.547: INFO: stdout: "true" May 15 22:04:19.547: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-b7pgq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3101' May 15 22:04:19.637: INFO: stderr: "" May 15 22:04:19.637: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 15 22:04:19.637: INFO: validating pod update-demo-nautilus-b7pgq May 15 22:04:19.640: INFO: got data: { "image": "nautilus.jpg" } May 15 22:04:19.641: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 15 22:04:19.641: INFO: update-demo-nautilus-b7pgq is verified up and running May 15 22:04:19.641: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g5rkd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3101' May 15 22:04:19.731: INFO: stderr: "" May 15 22:04:19.731: INFO: stdout: "true" May 15 22:04:19.731: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g5rkd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3101' May 15 22:04:19.823: INFO: stderr: "" May 15 22:04:19.823: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 15 22:04:19.824: INFO: validating pod update-demo-nautilus-g5rkd May 15 22:04:19.827: INFO: got data: { "image": "nautilus.jpg" } May 15 22:04:19.827: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 15 22:04:19.827: INFO: update-demo-nautilus-g5rkd is verified up and running STEP: using delete to clean up resources May 15 22:04:19.827: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3101' May 15 22:04:19.938: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 15 22:04:19.938: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 15 22:04:19.938: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3101' May 15 22:04:20.111: INFO: stderr: "No resources found in kubectl-3101 namespace.\n" May 15 22:04:20.111: INFO: stdout: "" May 15 22:04:20.111: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3101 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 15 22:04:20.244: INFO: stderr: "" May 15 22:04:20.244: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:04:20.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3101" for this suite. • [SLOW TEST:30.529 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":278,"completed":173,"skipped":2873,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:04:20.250: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0515 22:04:51.439927 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
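The delete above sets deleteOptions.propagationPolicy=Orphan, so the garbage collector must leave the ReplicaSet behind; the 30-second wait checks that it really does. Outside the suite, the same orphaning delete can be issued directly (namespace and deployment name are placeholders; on a v1.17-era kubectl the flag form is --cascade=false):

kubectl delete deployment <name> -n <namespace> --cascade=false
# or, spelling out the DeleteOptions the test sends against the API:
kubectl proxy &
curl -X DELETE -H 'Content-Type: application/json' \
  -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Orphan"}' \
  http://127.0.0.1:8001/apis/apps/v1/namespaces/<namespace>/deployments/<name>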
May 15 22:04:51.439: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 15 22:04:51.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7994" for this suite.
• [SLOW TEST:31.197 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":278,"completed":174,"skipped":2892,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 15 22:04:51.448: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
May 15 22:04:51.516: INFO: Waiting up to 5m0s for pod "downwardapi-volume-38192651-3c85-4861-8260-4c5cd821e08f" in namespace "downward-api-6851" to be "success or failure"
May 15 22:04:51.520: INFO: Pod "downwardapi-volume-38192651-3c85-4861-8260-4c5cd821e08f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.758415ms
May 15 22:04:53.525: INFO: Pod "downwardapi-volume-38192651-3c85-4861-8260-4c5cd821e08f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00878189s
May 15 22:04:55.530: INFO: Pod "downwardapi-volume-38192651-3c85-4861-8260-4c5cd821e08f": Phase="Running", Reason="", readiness=true. Elapsed: 4.01393076s
May 15 22:04:57.534: INFO: Pod "downwardapi-volume-38192651-3c85-4861-8260-4c5cd821e08f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.017767043s
STEP: Saw pod success
May 15 22:04:57.534: INFO: Pod "downwardapi-volume-38192651-3c85-4861-8260-4c5cd821e08f" satisfied condition "success or failure"
May 15 22:04:57.536: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-38192651-3c85-4861-8260-4c5cd821e08f container client-container:
STEP: delete the pod
May 15 22:04:57.576: INFO: Waiting for pod downwardapi-volume-38192651-3c85-4861-8260-4c5cd821e08f to disappear
May 15 22:04:57.731: INFO: Pod downwardapi-volume-38192651-3c85-4861-8260-4c5cd821e08f no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 15 22:04:57.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6851" for this suite.
• [SLOW TEST:6.293 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":175,"skipped":2907,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 15 22:04:57.741: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 15 22:04:58.801: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 15 22:05:00.917: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725177098, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725177098, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725177098, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725177098, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 15 22:05:04.061: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 15 22:05:16.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4647" for this suite.
STEP: Destroying namespace "webhook-4647-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:18.613 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":278,"completed":176,"skipped":2927,"failed":0}
[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 15 22:05:16.355: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
May 15 22:05:16.400: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 15 22:05:23.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-4374" for this suite.
• [SLOW TEST:7.627 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":278,"completed":177,"skipped":2927,"failed":0} SSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:05:23.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 15 22:05:24.112: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. May 15 22:05:24.295: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 22:05:24.316: INFO: Number of nodes with available pods: 0 May 15 22:05:24.316: INFO: Node jerma-worker is running more than one daemon pod May 15 22:05:25.320: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 22:05:25.324: INFO: Number of nodes with available pods: 0 May 15 22:05:25.324: INFO: Node jerma-worker is running more than one daemon pod May 15 22:05:26.323: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 22:05:26.326: INFO: Number of nodes with available pods: 0 May 15 22:05:26.326: INFO: Node jerma-worker is running more than one daemon pod May 15 22:05:27.321: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 22:05:27.324: INFO: Number of nodes with available pods: 0 May 15 22:05:27.324: INFO: Node jerma-worker is running more than one daemon pod May 15 22:05:28.322: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 22:05:28.325: INFO: Number of nodes with available pods: 1 May 15 22:05:28.325: INFO: Node jerma-worker2 is running more than one daemon pod May 15 22:05:29.323: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 22:05:29.327: 
INFO: Number of nodes with available pods: 2 May 15 22:05:29.327: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. May 15 22:05:29.383: INFO: Wrong image for pod: daemon-set-64sxn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 15 22:05:29.383: INFO: Wrong image for pod: daemon-set-np464. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 15 22:05:29.387: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 22:05:30.391: INFO: Wrong image for pod: daemon-set-64sxn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 15 22:05:30.391: INFO: Wrong image for pod: daemon-set-np464. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 15 22:05:30.395: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 22:05:31.392: INFO: Wrong image for pod: daemon-set-64sxn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 15 22:05:31.392: INFO: Wrong image for pod: daemon-set-np464. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 15 22:05:31.396: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 22:05:32.391: INFO: Wrong image for pod: daemon-set-64sxn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 15 22:05:32.391: INFO: Wrong image for pod: daemon-set-np464. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 15 22:05:32.391: INFO: Pod daemon-set-np464 is not available May 15 22:05:32.396: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 22:05:33.392: INFO: Wrong image for pod: daemon-set-64sxn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 15 22:05:33.392: INFO: Wrong image for pod: daemon-set-np464. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 15 22:05:33.392: INFO: Pod daemon-set-np464 is not available May 15 22:05:33.397: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 22:05:34.392: INFO: Wrong image for pod: daemon-set-64sxn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 15 22:05:34.392: INFO: Wrong image for pod: daemon-set-np464. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
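The image bump being rolled out above can be driven with kubectl set image, and the one-pod-at-a-time replacement visible in the polling is the RollingUpdate strategy's default maxUnavailable of 1; the container name "app" below is an assumption about the test's pod spec, not something shown in this log:

kubectl set image daemonset/daemon-set app=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 -n daemonsets-5751
# DaemonSet spec fragment that produces this pacing (sketch):
#   updateStrategy:
#     type: RollingUpdate
#     rollingUpdate:
#       maxUnavailable: 1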
May 15 22:05:34.392: INFO: Pod daemon-set-np464 is not available May 15 22:05:34.396: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 22:05:35.391: INFO: Wrong image for pod: daemon-set-64sxn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 15 22:05:35.391: INFO: Wrong image for pod: daemon-set-np464. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 15 22:05:35.391: INFO: Pod daemon-set-np464 is not available May 15 22:05:35.396: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 22:05:36.392: INFO: Wrong image for pod: daemon-set-64sxn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 15 22:05:36.392: INFO: Wrong image for pod: daemon-set-np464. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 15 22:05:36.392: INFO: Pod daemon-set-np464 is not available May 15 22:05:36.397: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 22:05:37.391: INFO: Wrong image for pod: daemon-set-64sxn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 15 22:05:37.391: INFO: Wrong image for pod: daemon-set-np464. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 15 22:05:37.391: INFO: Pod daemon-set-np464 is not available May 15 22:05:37.393: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 22:05:38.391: INFO: Wrong image for pod: daemon-set-64sxn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 15 22:05:38.391: INFO: Wrong image for pod: daemon-set-np464. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 15 22:05:38.391: INFO: Pod daemon-set-np464 is not available May 15 22:05:38.394: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 22:05:39.391: INFO: Wrong image for pod: daemon-set-64sxn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 15 22:05:39.391: INFO: Wrong image for pod: daemon-set-np464. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 15 22:05:39.391: INFO: Pod daemon-set-np464 is not available May 15 22:05:39.394: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 22:05:40.390: INFO: Wrong image for pod: daemon-set-64sxn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
May 15 22:05:40.390: INFO: Pod daemon-set-vq5m9 is not available May 15 22:05:40.392: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 22:05:41.391: INFO: Wrong image for pod: daemon-set-64sxn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 15 22:05:41.391: INFO: Pod daemon-set-vq5m9 is not available May 15 22:05:41.394: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 22:05:42.391: INFO: Wrong image for pod: daemon-set-64sxn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 15 22:05:42.391: INFO: Pod daemon-set-vq5m9 is not available May 15 22:05:42.395: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 22:05:43.392: INFO: Wrong image for pod: daemon-set-64sxn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 15 22:05:43.396: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 22:05:44.420: INFO: Wrong image for pod: daemon-set-64sxn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 15 22:05:44.425: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 22:05:45.391: INFO: Wrong image for pod: daemon-set-64sxn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 15 22:05:45.391: INFO: Pod daemon-set-64sxn is not available May 15 22:05:45.395: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 22:05:46.392: INFO: Wrong image for pod: daemon-set-64sxn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 15 22:05:46.392: INFO: Pod daemon-set-64sxn is not available May 15 22:05:46.396: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 22:05:47.392: INFO: Wrong image for pod: daemon-set-64sxn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 15 22:05:47.392: INFO: Pod daemon-set-64sxn is not available May 15 22:05:47.395: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 22:05:48.392: INFO: Wrong image for pod: daemon-set-64sxn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
May 15 22:05:48.392: INFO: Pod daemon-set-64sxn is not available May 15 22:05:48.398: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 22:05:49.391: INFO: Pod daemon-set-b7npd is not available May 15 22:05:49.393: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. May 15 22:05:49.396: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 22:05:49.398: INFO: Number of nodes with available pods: 1 May 15 22:05:49.398: INFO: Node jerma-worker is running more than one daemon pod May 15 22:05:50.402: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 22:05:50.405: INFO: Number of nodes with available pods: 1 May 15 22:05:50.405: INFO: Node jerma-worker is running more than one daemon pod May 15 22:05:51.402: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 22:05:51.405: INFO: Number of nodes with available pods: 1 May 15 22:05:51.405: INFO: Node jerma-worker is running more than one daemon pod May 15 22:05:52.401: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 22:05:52.403: INFO: Number of nodes with available pods: 2 May 15 22:05:52.403: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5751, will wait for the garbage collector to delete the pods May 15 22:05:52.516: INFO: Deleting DaemonSet.extensions daemon-set took: 50.448477ms May 15 22:05:53.016: INFO: Terminating DaemonSet.extensions daemon-set pods took: 500.245799ms May 15 22:05:56.120: INFO: Number of nodes with available pods: 0 May 15 22:05:56.120: INFO: Number of running nodes: 0, number of available pods: 0 May 15 22:05:56.122: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5751/daemonsets","resourceVersion":"16481328"},"items":null} May 15 22:05:56.125: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5751/pods","resourceVersion":"16481328"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:05:56.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5751" for this suite. 
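The polling above is the rolling update in action: the controller deletes one old httpd pod at a time and waits for its agnhost replacement to become available before moving on. For readers replaying this outside the harness, a minimal sketch of the DaemonSet shape the test drives; the name and label below are illustrative, not the harness-generated ones:

# daemon-set.yaml -- illustrative sketch, not the suite's generated object
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set
  updateStrategy:
    type: RollingUpdate            # replace pods one by one when the template changes
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/httpd:2.4.38-alpine

Changing the template image is what kicks off the replacement loop logged above:

kubectl apply -f daemon-set.yaml
kubectl set image daemonset/daemon-set app=gcr.io/kubernetes-e2e-test-images/agnhost:2.8
kubectl rollout status daemonset/daemon-set    # blocks until every node runs the new image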
• [SLOW TEST:32.158 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":278,"completed":178,"skipped":2932,"failed":0} [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:05:56.140: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5354.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-5354.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5354.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-5354.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 15 22:06:02.284: INFO: DNS probes using dns-test-b15983f9-9885-4a00-981c-e3438fc1a934 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5354.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-5354.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5354.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-5354.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 15 22:06:10.395: INFO: File wheezy_udp@dns-test-service-3.dns-5354.svc.cluster.local from pod dns-5354/dns-test-f3067806-2158-46cc-9212-879a8d184909 contains 'foo.example.com. ' instead of 'bar.example.com.' May 15 22:06:10.399: INFO: File jessie_udp@dns-test-service-3.dns-5354.svc.cluster.local from pod dns-5354/dns-test-f3067806-2158-46cc-9212-879a8d184909 contains 'foo.example.com. ' instead of 'bar.example.com.' May 15 22:06:10.399: INFO: Lookups using dns-5354/dns-test-f3067806-2158-46cc-9212-879a8d184909 failed for: [wheezy_udp@dns-test-service-3.dns-5354.svc.cluster.local jessie_udp@dns-test-service-3.dns-5354.svc.cluster.local] May 15 22:06:15.404: INFO: File wheezy_udp@dns-test-service-3.dns-5354.svc.cluster.local from pod dns-5354/dns-test-f3067806-2158-46cc-9212-879a8d184909 contains 'foo.example.com. ' instead of 'bar.example.com.' 
May 15 22:06:15.409: INFO: File jessie_udp@dns-test-service-3.dns-5354.svc.cluster.local from pod dns-5354/dns-test-f3067806-2158-46cc-9212-879a8d184909 contains 'foo.example.com. ' instead of 'bar.example.com.' May 15 22:06:15.409: INFO: Lookups using dns-5354/dns-test-f3067806-2158-46cc-9212-879a8d184909 failed for: [wheezy_udp@dns-test-service-3.dns-5354.svc.cluster.local jessie_udp@dns-test-service-3.dns-5354.svc.cluster.local] May 15 22:06:20.403: INFO: File wheezy_udp@dns-test-service-3.dns-5354.svc.cluster.local from pod dns-5354/dns-test-f3067806-2158-46cc-9212-879a8d184909 contains 'foo.example.com. ' instead of 'bar.example.com.' May 15 22:06:20.407: INFO: File jessie_udp@dns-test-service-3.dns-5354.svc.cluster.local from pod dns-5354/dns-test-f3067806-2158-46cc-9212-879a8d184909 contains 'foo.example.com. ' instead of 'bar.example.com.' May 15 22:06:20.407: INFO: Lookups using dns-5354/dns-test-f3067806-2158-46cc-9212-879a8d184909 failed for: [wheezy_udp@dns-test-service-3.dns-5354.svc.cluster.local jessie_udp@dns-test-service-3.dns-5354.svc.cluster.local] May 15 22:06:25.404: INFO: File wheezy_udp@dns-test-service-3.dns-5354.svc.cluster.local from pod dns-5354/dns-test-f3067806-2158-46cc-9212-879a8d184909 contains 'foo.example.com. ' instead of 'bar.example.com.' May 15 22:06:25.408: INFO: File jessie_udp@dns-test-service-3.dns-5354.svc.cluster.local from pod dns-5354/dns-test-f3067806-2158-46cc-9212-879a8d184909 contains 'foo.example.com. ' instead of 'bar.example.com.' May 15 22:06:25.408: INFO: Lookups using dns-5354/dns-test-f3067806-2158-46cc-9212-879a8d184909 failed for: [wheezy_udp@dns-test-service-3.dns-5354.svc.cluster.local jessie_udp@dns-test-service-3.dns-5354.svc.cluster.local] May 15 22:06:30.410: INFO: File jessie_udp@dns-test-service-3.dns-5354.svc.cluster.local from pod dns-5354/dns-test-f3067806-2158-46cc-9212-879a8d184909 contains 'foo.example.com. ' instead of 'bar.example.com.' May 15 22:06:30.410: INFO: Lookups using dns-5354/dns-test-f3067806-2158-46cc-9212-879a8d184909 failed for: [jessie_udp@dns-test-service-3.dns-5354.svc.cluster.local] May 15 22:06:35.408: INFO: DNS probes using dns-test-f3067806-2158-46cc-9212-879a8d184909 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5354.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-5354.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5354.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-5354.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 15 22:06:44.402: INFO: DNS probes using dns-test-cc6e5d30-d946-4d7c-b0a1-a618170c8a04 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:06:45.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5354" for this suite. 
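The repeated "contains 'foo.example.com.' instead of 'bar.example.com.'" lines are expected: cluster DNS caches the CNAME, so the probers poll until the updated answer propagates. An illustrative equivalent of the service under test (names and namespace are placeholders, not the suite's generated ones):

# externalname.yaml -- illustrative; the suite creates this programmatically
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-3
spec:
  type: ExternalName
  externalName: foo.example.com    # cluster DNS serves this as a CNAME

kubectl apply -f externalname.yaml
# from any pod in the cluster, the injected probe loop reduces to:
dig +short dns-test-service-3.default.svc.cluster.local CNAME
# the test then flips the target and waits for the CNAME answer to follow:
kubectl patch service dns-test-service-3 -p '{"spec":{"externalName":"bar.example.com"}}'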
• [SLOW TEST:49.136 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":278,"completed":179,"skipped":2932,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:06:45.277: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation May 15 22:06:45.412: INFO: >>> kubeConfig: /root/.kube/config May 15 22:06:47.357: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:06:57.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2696" for this suite. 
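What this spec checks is that two CRDs sharing a group and version both land in the aggregated OpenAPI document. A rough v1beta1-era sketch (matching the v1.17 cluster in this log); the group and kind names are made up for illustration:

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: foos.crd-publish-openapi-test.example.com
spec:
  group: crd-publish-openapi-test.example.com
  version: v1
  scope: Namespaced
  names: {plural: foos, kind: Foo}
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: bars.crd-publish-openapi-test.example.com
spec:
  group: crd-publish-openapi-test.example.com    # same group and version as Foo
  version: v1
  scope: Namespaced
  names: {plural: bars, kind: Bar}               # different kind

# both kinds should then be discoverable in the published schema:
kubectl get --raw /openapi/v2 | grep -c 'crd-publish-openapi-test.example.com'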
• [SLOW TEST:12.159 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":278,"completed":180,"skipped":2940,"failed":0} SS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:06:57.436: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-abe7de2b-d2bf-416f-ac58-7d4fe6becd90 STEP: Creating a pod to test consume configMaps May 15 22:06:57.748: INFO: Waiting up to 5m0s for pod "pod-configmaps-dbe249d8-a9f1-4031-8d2c-c0b26c639d33" in namespace "configmap-9060" to be "success or failure" May 15 22:06:57.774: INFO: Pod "pod-configmaps-dbe249d8-a9f1-4031-8d2c-c0b26c639d33": Phase="Pending", Reason="", readiness=false. Elapsed: 25.634014ms May 15 22:06:59.847: INFO: Pod "pod-configmaps-dbe249d8-a9f1-4031-8d2c-c0b26c639d33": Phase="Pending", Reason="", readiness=false. Elapsed: 2.098306651s May 15 22:07:01.851: INFO: Pod "pod-configmaps-dbe249d8-a9f1-4031-8d2c-c0b26c639d33": Phase="Running", Reason="", readiness=true. Elapsed: 4.102554714s May 15 22:07:03.855: INFO: Pod "pod-configmaps-dbe249d8-a9f1-4031-8d2c-c0b26c639d33": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.107215765s STEP: Saw pod success May 15 22:07:03.856: INFO: Pod "pod-configmaps-dbe249d8-a9f1-4031-8d2c-c0b26c639d33" satisfied condition "success or failure" May 15 22:07:03.859: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-dbe249d8-a9f1-4031-8d2c-c0b26c639d33 container configmap-volume-test: STEP: delete the pod May 15 22:07:03.907: INFO: Waiting for pod pod-configmaps-dbe249d8-a9f1-4031-8d2c-c0b26c639d33 to disappear May 15 22:07:03.917: INFO: Pod pod-configmaps-dbe249d8-a9f1-4031-8d2c-c0b26c639d33 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:07:03.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9060" for this suite. 
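The "mappings" in this spec's name refers to the items/path remapping of ConfigMap keys inside the volume. A minimal sketch with illustrative names (the suite uses its own mounttest image; plain busybox works for a by-hand check):

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume-map
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: docker.io/library/busybox:1.29
    command: ["cat", "/etc/configmap-volume/path/to/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map
      items:
      - key: data-1
        path: path/to/data-1    # the key is exposed at a remapped relative path

The pod runs to completion ("Succeeded" above) because the container just prints the mapped file and exits; the test then reads the value back out of the container log.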
• [SLOW TEST:6.488 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":181,"skipped":2942,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:07:03.925: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs May 15 22:07:04.051: INFO: Waiting up to 5m0s for pod "pod-34ad92f6-adc3-45e9-9bb3-534444af2213" in namespace "emptydir-9014" to be "success or failure" May 15 22:07:04.072: INFO: Pod "pod-34ad92f6-adc3-45e9-9bb3-534444af2213": Phase="Pending", Reason="", readiness=false. Elapsed: 20.671897ms May 15 22:07:06.344: INFO: Pod "pod-34ad92f6-adc3-45e9-9bb3-534444af2213": Phase="Pending", Reason="", readiness=false. Elapsed: 2.292815803s May 15 22:07:08.346: INFO: Pod "pod-34ad92f6-adc3-45e9-9bb3-534444af2213": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.295501183s STEP: Saw pod success May 15 22:07:08.346: INFO: Pod "pod-34ad92f6-adc3-45e9-9bb3-534444af2213" satisfied condition "success or failure" May 15 22:07:08.348: INFO: Trying to get logs from node jerma-worker pod pod-34ad92f6-adc3-45e9-9bb3-534444af2213 container test-container: STEP: delete the pod May 15 22:07:08.374: INFO: Waiting for pod pod-34ad92f6-adc3-45e9-9bb3-534444af2213 to disappear May 15 22:07:08.379: INFO: Pod pod-34ad92f6-adc3-45e9-9bb3-534444af2213 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:07:08.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9014" for this suite. 
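"(root,0644,tmpfs)" encodes this spec's parameters: write as root, expect mode 0644, back the volume with memory. A sketch under those assumptions (busybox stands in for the suite's mounttest image):

apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-tmpfs
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    # the real test writes a file with perms 0644 and asserts mode, owner, and content
    command: ["sh", "-c", "mount | grep /cache && touch /cache/f && chmod 0644 /cache/f && ls -l /cache/f"]
    volumeMounts:
    - name: cache
      mountPath: /cache
  volumes:
  - name: cache
    emptyDir:
      medium: Memory    # the "tmpfs" in the test name: RAM-backed, not node disk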
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":182,"skipped":2961,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:07:08.416: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 15 22:07:08.644: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties May 15 22:07:10.603: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9206 create -f -' May 15 22:07:13.875: INFO: stderr: "" May 15 22:07:13.875: INFO: stdout: "e2e-test-crd-publish-openapi-8103-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" May 15 22:07:13.875: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9206 delete e2e-test-crd-publish-openapi-8103-crds test-foo' May 15 22:07:14.008: INFO: stderr: "" May 15 22:07:14.008: INFO: stdout: "e2e-test-crd-publish-openapi-8103-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" May 15 22:07:14.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9206 apply -f -' May 15 22:07:14.310: INFO: stderr: "" May 15 22:07:14.310: INFO: stdout: "e2e-test-crd-publish-openapi-8103-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" May 15 22:07:14.310: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9206 delete e2e-test-crd-publish-openapi-8103-crds test-foo' May 15 22:07:14.425: INFO: stderr: "" May 15 22:07:14.425: INFO: stdout: "e2e-test-crd-publish-openapi-8103-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema May 15 22:07:14.426: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9206 create -f -' May 15 22:07:14.721: INFO: rc: 1 May 15 22:07:14.721: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9206 apply -f -' May 15 22:07:14.982: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties May 15 22:07:14.982: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9206 create -f -' May 15 22:07:15.249: INFO: rc: 1 May 15 22:07:15.249: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9206 apply -f -' May 15 22:07:15.525: INFO: rc: 1 STEP: kubectl explain works to explain CR properties May 15 22:07:15.526: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8103-crds' May 15 22:07:15.802: INFO: stderr: "" May 15 22:07:15.802: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8103-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively May 15 22:07:15.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8103-crds.metadata' May 15 22:07:16.072: INFO: stderr: "" May 15 22:07:16.072: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8103-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. 
As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. 
A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. 
Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" May 15 22:07:16.073: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8103-crds.spec' May 15 22:07:16.362: INFO: stderr: "" May 15 22:07:16.362: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8103-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" May 15 22:07:16.362: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8103-crds.spec.bars' May 15 22:07:16.630: INFO: stderr: "" May 15 22:07:16.630: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8103-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist May 15 22:07:16.630: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8103-crds.spec.bars2' May 15 22:07:16.922: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:07:19.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9206" for this suite. • [SLOW TEST:11.430 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":278,"completed":183,"skipped":2964,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:07:19.846: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:07:23.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-5120" for this suite. 
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":278,"completed":184,"skipped":2974,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:07:23.999: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD May 15 22:07:24.065: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:07:40.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7679" for this suite. • [SLOW TEST:16.676 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":278,"completed":185,"skipped":3028,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:07:40.676: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium May 15 22:07:40.735: INFO: Waiting up to 5m0s for pod "pod-afecf9eb-bcc0-494a-85b8-8538d3ac80a0" in namespace "emptydir-2129" to be "success or failure" May 15 22:07:40.739: INFO: Pod "pod-afecf9eb-bcc0-494a-85b8-8538d3ac80a0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.24163ms May 15 22:07:42.742: INFO: Pod "pod-afecf9eb-bcc0-494a-85b8-8538d3ac80a0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007706539s May 15 22:07:44.747: INFO: Pod "pod-afecf9eb-bcc0-494a-85b8-8538d3ac80a0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011731426s STEP: Saw pod success May 15 22:07:44.747: INFO: Pod "pod-afecf9eb-bcc0-494a-85b8-8538d3ac80a0" satisfied condition "success or failure" May 15 22:07:44.750: INFO: Trying to get logs from node jerma-worker pod pod-afecf9eb-bcc0-494a-85b8-8538d3ac80a0 container test-container: STEP: delete the pod May 15 22:07:44.770: INFO: Waiting for pod pod-afecf9eb-bcc0-494a-85b8-8538d3ac80a0 to disappear May 15 22:07:44.775: INFO: Pod pod-afecf9eb-bcc0-494a-85b8-8538d3ac80a0 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:07:44.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2129" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":186,"skipped":3061,"failed":0} SSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:07:44.781: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 15 22:07:44.910: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8fcfbcb4-edf2-4566-ade1-22c288a2e248" in namespace "projected-3701" to be "success or failure" May 15 22:07:44.978: INFO: Pod "downwardapi-volume-8fcfbcb4-edf2-4566-ade1-22c288a2e248": Phase="Pending", Reason="", readiness=false. Elapsed: 68.851866ms May 15 22:07:47.080: INFO: Pod "downwardapi-volume-8fcfbcb4-edf2-4566-ade1-22c288a2e248": Phase="Pending", Reason="", readiness=false. Elapsed: 2.170450644s May 15 22:07:49.139: INFO: Pod "downwardapi-volume-8fcfbcb4-edf2-4566-ade1-22c288a2e248": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.228974943s STEP: Saw pod success May 15 22:07:49.139: INFO: Pod "downwardapi-volume-8fcfbcb4-edf2-4566-ade1-22c288a2e248" satisfied condition "success or failure" May 15 22:07:49.142: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-8fcfbcb4-edf2-4566-ade1-22c288a2e248 container client-container: STEP: delete the pod May 15 22:07:49.224: INFO: Waiting for pod downwardapi-volume-8fcfbcb4-edf2-4566-ade1-22c288a2e248 to disappear May 15 22:07:49.236: INFO: Pod downwardapi-volume-8fcfbcb4-edf2-4566-ade1-22c288a2e248 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:07:49.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3701" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":278,"completed":187,"skipped":3064,"failed":0} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:07:49.245: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1585 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 15 22:07:49.639: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-1577' May 15 22:07:49.738: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 15 22:07:49.738: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created May 15 22:07:49.752: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller May 15 22:07:49.772: INFO: scanned /root for discovery docs: May 15 22:07:49.772: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-1577' May 15 22:08:05.611: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" May 15 22:08:05.611: INFO: stdout: "Created e2e-test-httpd-rc-3a0a9769288288a8cc9f118a48aa4b25\nScaling up e2e-test-httpd-rc-3a0a9769288288a8cc9f118a48aa4b25 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-3a0a9769288288a8cc9f118a48aa4b25 up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-3a0a9769288288a8cc9f118a48aa4b25 to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up. May 15 22:08:05.611: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-1577' May 15 22:08:05.703: INFO: stderr: "" May 15 22:08:05.703: INFO: stdout: "e2e-test-httpd-rc-3a0a9769288288a8cc9f118a48aa4b25-zbpnk " May 15 22:08:05.703: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-3a0a9769288288a8cc9f118a48aa4b25-zbpnk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1577' May 15 22:08:05.793: INFO: stderr: "" May 15 22:08:05.793: INFO: stdout: "true" May 15 22:08:05.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-3a0a9769288288a8cc9f118a48aa4b25-zbpnk -o template --template={{if (exists .
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1577' May 15 22:08:05.926: INFO: stderr: "" May 15 22:08:05.926: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine" May 15 22:08:05.926: INFO: e2e-test-httpd-rc-3a0a9769288288a8cc9f118a48aa4b25-zbpnk is verified up and running [AfterEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1591 May 15 22:08:05.926: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-1577' May 15 22:08:06.043: INFO: stderr: "" May 15 22:08:06.043: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:08:06.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1577" for this suite. • [SLOW TEST:16.826 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1580 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance]","total":278,"completed":188,"skipped":3072,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:08:06.071: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-768efd3f-95fb-47e4-9aa5-ba14d362abf3 STEP: Creating a pod to test consume secrets May 15 22:08:06.197: INFO: Waiting up to 5m0s for pod "pod-secrets-88ff94a6-17c8-47ab-b8bd-b344ab01f0bc" in namespace "secrets-8693" to be "success or failure" May 15 22:08:06.214: INFO: Pod "pod-secrets-88ff94a6-17c8-47ab-b8bd-b344ab01f0bc": Phase="Pending", Reason="", readiness=false. Elapsed: 17.453652ms May 15 22:08:08.219: INFO: Pod "pod-secrets-88ff94a6-17c8-47ab-b8bd-b344ab01f0bc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021893537s May 15 22:08:10.222: INFO: Pod "pod-secrets-88ff94a6-17c8-47ab-b8bd-b344ab01f0bc": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.025419356s STEP: Saw pod success May 15 22:08:10.222: INFO: Pod "pod-secrets-88ff94a6-17c8-47ab-b8bd-b344ab01f0bc" satisfied condition "success or failure" May 15 22:08:10.224: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-88ff94a6-17c8-47ab-b8bd-b344ab01f0bc container secret-volume-test: STEP: delete the pod May 15 22:08:10.259: INFO: Waiting for pod pod-secrets-88ff94a6-17c8-47ab-b8bd-b344ab01f0bc to disappear May 15 22:08:10.278: INFO: Pod pod-secrets-88ff94a6-17c8-47ab-b8bd-b344ab01f0bc no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:08:10.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8693" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":189,"skipped":3082,"failed":0} S ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:08:10.285: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-secret-4k4b STEP: Creating a pod to test atomic-volume-subpath May 15 22:08:10.676: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-4k4b" in namespace "subpath-2543" to be "success or failure" May 15 22:08:10.686: INFO: Pod "pod-subpath-test-secret-4k4b": Phase="Pending", Reason="", readiness=false. Elapsed: 9.532163ms May 15 22:08:12.690: INFO: Pod "pod-subpath-test-secret-4k4b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01332376s May 15 22:08:14.694: INFO: Pod "pod-subpath-test-secret-4k4b": Phase="Running", Reason="", readiness=true. Elapsed: 4.017509444s May 15 22:08:16.698: INFO: Pod "pod-subpath-test-secret-4k4b": Phase="Running", Reason="", readiness=true. Elapsed: 6.02211785s May 15 22:08:18.703: INFO: Pod "pod-subpath-test-secret-4k4b": Phase="Running", Reason="", readiness=true. Elapsed: 8.026564448s May 15 22:08:20.707: INFO: Pod "pod-subpath-test-secret-4k4b": Phase="Running", Reason="", readiness=true. Elapsed: 10.030548321s May 15 22:08:22.710: INFO: Pod "pod-subpath-test-secret-4k4b": Phase="Running", Reason="", readiness=true. Elapsed: 12.033979707s May 15 22:08:24.713: INFO: Pod "pod-subpath-test-secret-4k4b": Phase="Running", Reason="", readiness=true. Elapsed: 14.037258453s May 15 22:08:26.717: INFO: Pod "pod-subpath-test-secret-4k4b": Phase="Running", Reason="", readiness=true. Elapsed: 16.04081398s May 15 22:08:28.720: INFO: Pod "pod-subpath-test-secret-4k4b": Phase="Running", Reason="", readiness=true. Elapsed: 18.044295508s May 15 22:08:30.724: INFO: Pod "pod-subpath-test-secret-4k4b": Phase="Running", Reason="", readiness=true. 
Elapsed: 20.048239606s May 15 22:08:32.728: INFO: Pod "pod-subpath-test-secret-4k4b": Phase="Running", Reason="", readiness=true. Elapsed: 22.051829299s May 15 22:08:34.770: INFO: Pod "pod-subpath-test-secret-4k4b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.093988869s STEP: Saw pod success May 15 22:08:34.770: INFO: Pod "pod-subpath-test-secret-4k4b" satisfied condition "success or failure" May 15 22:08:34.773: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-secret-4k4b container test-container-subpath-secret-4k4b: STEP: delete the pod May 15 22:08:34.801: INFO: Waiting for pod pod-subpath-test-secret-4k4b to disappear May 15 22:08:34.818: INFO: Pod pod-subpath-test-secret-4k4b no longer exists STEP: Deleting pod pod-subpath-test-secret-4k4b May 15 22:08:34.818: INFO: Deleting pod "pod-subpath-test-secret-4k4b" in namespace "subpath-2543" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:08:34.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2543" for this suite. • [SLOW TEST:24.542 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":278,"completed":190,"skipped":3083,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:08:34.827: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-19c7b7d6-8849-45a6-839f-1c377afc8eb2 in namespace container-probe-3604 May 15 22:08:39.128: INFO: Started pod busybox-19c7b7d6-8849-45a6-839f-1c377afc8eb2 in namespace container-probe-3604 STEP: checking the pod's current state and verifying that restartCount is present May 15 22:08:39.130: INFO: Initial restart count of pod busybox-19c7b7d6-8849-45a6-839f-1c377afc8eb2 is 0 May 15 22:09:31.424: INFO: Restart count of pod container-probe-3604/busybox-19c7b7d6-8849-45a6-839f-1c377afc8eb2 is now 1 (52.293734361s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:09:31.450: INFO: Waiting up 
to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3604" for this suite. • [SLOW TEST:56.646 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":191,"skipped":3116,"failed":0} S ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:09:31.474: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-f1cc1644-6403-4e46-b4a5-2ced52e6a635 STEP: Creating a pod to test consume configMaps May 15 22:09:31.588: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5e02b4e6-997a-4947-ac80-b63a9c2c26b4" in namespace "projected-8173" to be "success or failure" May 15 22:09:31.622: INFO: Pod "pod-projected-configmaps-5e02b4e6-997a-4947-ac80-b63a9c2c26b4": Phase="Pending", Reason="", readiness=false. Elapsed: 33.693934ms May 15 22:09:33.772: INFO: Pod "pod-projected-configmaps-5e02b4e6-997a-4947-ac80-b63a9c2c26b4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.183543396s May 15 22:09:35.776: INFO: Pod "pod-projected-configmaps-5e02b4e6-997a-4947-ac80-b63a9c2c26b4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.187815208s STEP: Saw pod success May 15 22:09:35.776: INFO: Pod "pod-projected-configmaps-5e02b4e6-997a-4947-ac80-b63a9c2c26b4" satisfied condition "success or failure" May 15 22:09:35.780: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-5e02b4e6-997a-4947-ac80-b63a9c2c26b4 container projected-configmap-volume-test: STEP: delete the pod May 15 22:09:35.874: INFO: Waiting for pod pod-projected-configmaps-5e02b4e6-997a-4947-ac80-b63a9c2c26b4 to disappear May 15 22:09:35.878: INFO: Pod pod-projected-configmaps-5e02b4e6-997a-4947-ac80-b63a9c2c26b4 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:09:35.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8173" for this suite. 
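"Multiple volumes in the same pod" here means the same ConfigMap is projected at two mount points and must be readable at both. An illustrative shape (names are placeholders):

apiVersion: v1
kind: ConfigMap
metadata:
  name: projected-configmap-test
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/projected-1/data-1 /etc/projected-2/data-1"]
    volumeMounts:
    - name: projected-1
      mountPath: /etc/projected-1
    - name: projected-2
      mountPath: /etc/projected-2
  volumes:
  - name: projected-1
    projected:
      sources:
      - configMap:
          name: projected-configmap-test
  - name: projected-2
    projected:
      sources:
      - configMap:
          name: projected-configmap-test    # same source, second mount point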
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":192,"skipped":3117,"failed":0} SSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:09:35.884: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 15 22:09:35.953: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-c1536113-c3d0-4508-ac28-86013dc6818c" in namespace "security-context-test-5649" to be "success or failure" May 15 22:09:35.968: INFO: Pod "busybox-privileged-false-c1536113-c3d0-4508-ac28-86013dc6818c": Phase="Pending", Reason="", readiness=false. Elapsed: 14.680264ms May 15 22:09:37.974: INFO: Pod "busybox-privileged-false-c1536113-c3d0-4508-ac28-86013dc6818c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020901291s May 15 22:09:40.035: INFO: Pod "busybox-privileged-false-c1536113-c3d0-4508-ac28-86013dc6818c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.08180778s May 15 22:09:40.035: INFO: Pod "busybox-privileged-false-c1536113-c3d0-4508-ac28-86013dc6818c" satisfied condition "success or failure" May 15 22:09:40.042: INFO: Got logs for pod "busybox-privileged-false-c1536113-c3d0-4508-ac28-86013dc6818c": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:09:40.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-5649" for this suite. 
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":193,"skipped":3120,"failed":0} ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:09:40.051: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted May 15 22:09:46.885: INFO: 7 pods remaining May 15 22:09:46.886: INFO: 0 pods has nil DeletionTimestamp May 15 22:09:46.886: INFO: May 15 22:09:48.556: INFO: 0 pods remaining May 15 22:09:48.556: INFO: 0 pods has nil DeletionTimestamp May 15 22:09:48.556: INFO: STEP: Gathering metrics W0515 22:09:50.043524 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 15 22:09:50.043: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:09:50.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6873" for this suite. 
• [SLOW TEST:9.999 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":278,"completed":194,"skipped":3120,"failed":0} SSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:09:50.050: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating server pod server in namespace prestop-3364 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-3364 STEP: Deleting pre-stop pod May 15 22:10:03.654: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:10:03.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-3364" for this suite. 
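The {"prestop": 1} payload the server reports above comes from the tester pod's preStop lifecycle hook firing when the pod is deleted: the kubelet runs the handler before terminating the container. A sketch of a container carrying such a hook; the notification endpoint is hypothetical, and in the v1.17 API the handler type is corev1.Handler (later releases renamed it LifecycleHandler):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "tester"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "tester",
				Image:   "busybox",
				Command: []string{"sleep", "600"},
				Lifecycle: &corev1.Lifecycle{
					PreStop: &corev1.Handler{
						Exec: &corev1.ExecAction{
							// Hypothetical endpoint: ping the server pod on shutdown
							// so it can record that the hook ran.
							Command: []string{"wget", "-qO-", "http://server.prestop.svc:8080/prestop"},
						},
					},
				},
			}},
		},
	}
	fmt.Println("preStop hook configured for", pod.Name)
}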
• [SLOW TEST:13.700 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":278,"completed":195,"skipped":3124,"failed":0} SSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:10:03.750: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-c2bc344f-0525-429b-8b87-cd6e6cbb780d STEP: Creating a pod to test consume configMaps May 15 22:10:03.846: INFO: Waiting up to 5m0s for pod "pod-configmaps-1e7c2373-780d-47c3-af17-63ba72a9ed2e" in namespace "configmap-5551" to be "success or failure" May 15 22:10:03.849: INFO: Pod "pod-configmaps-1e7c2373-780d-47c3-af17-63ba72a9ed2e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.309381ms May 15 22:10:05.862: INFO: Pod "pod-configmaps-1e7c2373-780d-47c3-af17-63ba72a9ed2e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015901815s May 15 22:10:07.864: INFO: Pod "pod-configmaps-1e7c2373-780d-47c3-af17-63ba72a9ed2e": Phase="Running", Reason="", readiness=true. Elapsed: 4.018739955s May 15 22:10:09.873: INFO: Pod "pod-configmaps-1e7c2373-780d-47c3-af17-63ba72a9ed2e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.027616812s STEP: Saw pod success May 15 22:10:09.873: INFO: Pod "pod-configmaps-1e7c2373-780d-47c3-af17-63ba72a9ed2e" satisfied condition "success or failure" May 15 22:10:09.876: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-1e7c2373-780d-47c3-af17-63ba72a9ed2e container configmap-volume-test: STEP: delete the pod May 15 22:10:09.921: INFO: Waiting for pod pod-configmaps-1e7c2373-780d-47c3-af17-63ba72a9ed2e to disappear May 15 22:10:09.934: INFO: Pod pod-configmaps-1e7c2373-780d-47c3-af17-63ba72a9ed2e no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:10:09.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5551" for this suite. 
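The non-root variant above is the same ConfigMap-volume consumption pattern with a pod-level runAsUser. A sketch, with an arbitrary non-zero UID (the test's actual UID is not shown in the log):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid := int64(1000) // any non-zero UID: the point is "not root"
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-nonroot"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume"},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "configmap-volume-test",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "cat /etc/configmap-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}},
			}},
		},
	}
	fmt.Println(pod.Name, "runs as UID", *pod.Spec.SecurityContext.RunAsUser)
}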
• [SLOW TEST:6.191 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":196,"skipped":3132,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:10:09.942: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-052cd6e5-e728-423f-b457-0b07a442ef30 STEP: Creating secret with name s-test-opt-upd-9ab6d12a-5818-4fc5-96ed-82e2e7b06f45 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-052cd6e5-e728-423f-b457-0b07a442ef30 STEP: Updating secret s-test-opt-upd-9ab6d12a-5818-4fc5-96ed-82e2e7b06f45 STEP: Creating secret with name s-test-opt-create-f31c64fe-4d75-4123-850b-c6e97de15583 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:10:18.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3439" for this suite. 
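The s-test-opt-del / s-test-opt-upd / s-test-opt-create dance above exercises optional secret projections: with Optional set, the pod keeps running even while a referenced secret is missing, and the kubelet later syncs creations, updates, and deletions into the mounted volume, which is what "waiting to observe update in volume" polls for. A sketch of one such volume source (names illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	optional := true // pod stays up even if the secret is absent or deleted
	vol := corev1.Volume{
		Name: "projected-secret-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "s-test-opt-del"},
						Optional:             &optional,
					},
				}},
			},
		},
	}
	fmt.Println(vol.Name, "optional:", optional)
}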
• [SLOW TEST:8.315 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":197,"skipped":3154,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:10:18.257: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 15 22:10:19.559: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 15 22:10:21.571: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725177419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725177419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725177419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725177419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 15 22:10:24.741: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:10:24.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4606" for this suite. STEP: Destroying namespace "webhook-4606-markers" for this suite. 
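The registration step above installs a MutatingWebhookConfiguration pointing at the freshly deployed e2e-test-webhook service; every matching pod CREATE is then sent to the webhook, which patches in defaults before admission. A hedged sketch with admissionregistration/v1 types (webhook name, path, and rule scope are assumptions, not the test's actual values):

package main

import (
	"fmt"

	admissionv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	path := "/mutating-pods" // hypothetical handler path
	sideEffects := admissionv1.SideEffectClassNone
	cfg := &admissionv1.MutatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-defaulting-webhook"},
		Webhooks: []admissionv1.MutatingWebhook{{
			Name: "pod-defaulter.example.com",
			ClientConfig: admissionv1.WebhookClientConfig{
				Service: &admissionv1.ServiceReference{
					Namespace: "webhook",
					Name:      "e2e-test-webhook",
					Path:      &path,
				},
				// CABundle would carry the server cert set up in BeforeEach.
			},
			Rules: []admissionv1.RuleWithOperations{{
				Operations: []admissionv1.OperationType{admissionv1.Create},
				Rule: admissionv1.Rule{
					APIGroups:   []string{""},
					APIVersions: []string{"v1"},
					Resources:   []string{"pods"},
				},
			}},
			SideEffects:             &sideEffects,
			AdmissionReviewVersions: []string{"v1", "v1beta1"},
		}},
	}
	fmt.Println("webhook config:", cfg.Name)
}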
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.879 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":278,"completed":198,"skipped":3161,"failed":0} [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:10:25.136: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications May 15 22:10:25.338: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-2978 /api/v1/namespaces/watch-2978/configmaps/e2e-watch-test-watch-closed 679ccb60-a9b7-4dc0-89da-14a2f46f4cd1 16482992 0 2020-05-15 22:10:25 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} May 15 22:10:25.338: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-2978 /api/v1/namespaces/watch-2978/configmaps/e2e-watch-test-watch-closed 679ccb60-a9b7-4dc0-89da-14a2f46f4cd1 16482993 0 2020-05-15 22:10:25 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed May 15 22:10:25.350: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-2978 /api/v1/namespaces/watch-2978/configmaps/e2e-watch-test-watch-closed 679ccb60-a9b7-4dc0-89da-14a2f46f4cd1 16482994 0 2020-05-15 22:10:25 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 15 22:10:25.350: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-2978 /api/v1/namespaces/watch-2978/configmaps/e2e-watch-test-watch-closed 679ccb60-a9b7-4dc0-89da-14a2f46f4cd1 16482995 0 2020-05-15 22:10:25 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 
2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:10:25.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2978" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":278,"completed":199,"skipped":3161,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:10:25.360: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false May 15 22:10:40.055: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3812 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 15 22:10:40.055: INFO: >>> kubeConfig: /root/.kube/config I0515 22:10:40.089501 6 log.go:172] (0xc0053d6420) (0xc0011c99a0) Create stream I0515 22:10:40.089536 6 log.go:172] (0xc0053d6420) (0xc0011c99a0) Stream added, broadcasting: 1 I0515 22:10:40.091835 6 log.go:172] (0xc0053d6420) Reply frame received for 1 I0515 22:10:40.091883 6 log.go:172] (0xc0053d6420) (0xc0011c9a40) Create stream I0515 22:10:40.091896 6 log.go:172] (0xc0053d6420) (0xc0011c9a40) Stream added, broadcasting: 3 I0515 22:10:40.092974 6 log.go:172] (0xc0053d6420) Reply frame received for 3 I0515 22:10:40.093013 6 log.go:172] (0xc0053d6420) (0xc0011c9ae0) Create stream I0515 22:10:40.093026 6 log.go:172] (0xc0053d6420) (0xc0011c9ae0) Stream added, broadcasting: 5 I0515 22:10:40.094174 6 log.go:172] (0xc0053d6420) Reply frame received for 5 I0515 22:10:40.158363 6 log.go:172] (0xc0053d6420) Data frame received for 5 I0515 22:10:40.158391 6 log.go:172] (0xc0011c9ae0) (5) Data frame handling I0515 22:10:40.158407 6 log.go:172] (0xc0053d6420) Data frame received for 3 I0515 22:10:40.158412 6 log.go:172] (0xc0011c9a40) (3) Data frame handling I0515 22:10:40.158419 6 log.go:172] (0xc0011c9a40) (3) Data frame sent I0515 22:10:40.158436 6 log.go:172] (0xc0053d6420) Data frame received for 3 I0515 22:10:40.158442 6 log.go:172] (0xc0011c9a40) (3) Data frame handling I0515 22:10:40.159901 6 log.go:172] (0xc0053d6420) Data frame received for 1 I0515 22:10:40.159920 6 log.go:172] (0xc0011c99a0) (1) Data frame handling I0515 22:10:40.159931 6 log.go:172] (0xc0011c99a0) (1) Data frame sent I0515 22:10:40.159960 6 log.go:172] (0xc0053d6420) (0xc0011c99a0) Stream removed, broadcasting: 1 I0515 22:10:40.159984 6 log.go:172] (0xc0053d6420) Go away 
received I0515 22:10:40.160064 6 log.go:172] (0xc0053d6420) (0xc0011c99a0) Stream removed, broadcasting: 1 I0515 22:10:40.160092 6 log.go:172] (0xc0053d6420) (0xc0011c9a40) Stream removed, broadcasting: 3 I0515 22:10:40.160105 6 log.go:172] (0xc0053d6420) (0xc0011c9ae0) Stream removed, broadcasting: 5 May 15 22:10:40.160: INFO: Exec stderr: "" May 15 22:10:40.160: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3812 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 15 22:10:40.160: INFO: >>> kubeConfig: /root/.kube/config I0515 22:10:40.191941 6 log.go:172] (0xc000c2a370) (0xc0016a05a0) Create stream I0515 22:10:40.191967 6 log.go:172] (0xc000c2a370) (0xc0016a05a0) Stream added, broadcasting: 1 I0515 22:10:40.194693 6 log.go:172] (0xc000c2a370) Reply frame received for 1 I0515 22:10:40.194729 6 log.go:172] (0xc000c2a370) (0xc001ae43c0) Create stream I0515 22:10:40.194744 6 log.go:172] (0xc000c2a370) (0xc001ae43c0) Stream added, broadcasting: 3 I0515 22:10:40.195747 6 log.go:172] (0xc000c2a370) Reply frame received for 3 I0515 22:10:40.195788 6 log.go:172] (0xc000c2a370) (0xc001ae4460) Create stream I0515 22:10:40.195806 6 log.go:172] (0xc000c2a370) (0xc001ae4460) Stream added, broadcasting: 5 I0515 22:10:40.196872 6 log.go:172] (0xc000c2a370) Reply frame received for 5 I0515 22:10:40.263502 6 log.go:172] (0xc000c2a370) Data frame received for 5 I0515 22:10:40.263551 6 log.go:172] (0xc001ae4460) (5) Data frame handling I0515 22:10:40.263583 6 log.go:172] (0xc000c2a370) Data frame received for 3 I0515 22:10:40.263594 6 log.go:172] (0xc001ae43c0) (3) Data frame handling I0515 22:10:40.263602 6 log.go:172] (0xc001ae43c0) (3) Data frame sent I0515 22:10:40.263611 6 log.go:172] (0xc000c2a370) Data frame received for 3 I0515 22:10:40.263622 6 log.go:172] (0xc001ae43c0) (3) Data frame handling I0515 22:10:40.264887 6 log.go:172] (0xc000c2a370) Data frame received for 1 I0515 22:10:40.264902 6 log.go:172] (0xc0016a05a0) (1) Data frame handling I0515 22:10:40.264921 6 log.go:172] (0xc0016a05a0) (1) Data frame sent I0515 22:10:40.264941 6 log.go:172] (0xc000c2a370) (0xc0016a05a0) Stream removed, broadcasting: 1 I0515 22:10:40.264956 6 log.go:172] (0xc000c2a370) Go away received I0515 22:10:40.265086 6 log.go:172] (0xc000c2a370) (0xc0016a05a0) Stream removed, broadcasting: 1 I0515 22:10:40.265105 6 log.go:172] (0xc000c2a370) (0xc001ae43c0) Stream removed, broadcasting: 3 I0515 22:10:40.265356 6 log.go:172] (0xc000c2a370) (0xc001ae4460) Stream removed, broadcasting: 5 May 15 22:10:40.265: INFO: Exec stderr: "" May 15 22:10:40.265: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3812 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 15 22:10:40.265: INFO: >>> kubeConfig: /root/.kube/config I0515 22:10:40.295925 6 log.go:172] (0xc002cded10) (0xc001ae4aa0) Create stream I0515 22:10:40.295961 6 log.go:172] (0xc002cded10) (0xc001ae4aa0) Stream added, broadcasting: 1 I0515 22:10:40.298132 6 log.go:172] (0xc002cded10) Reply frame received for 1 I0515 22:10:40.298169 6 log.go:172] (0xc002cded10) (0xc001ae4b40) Create stream I0515 22:10:40.298179 6 log.go:172] (0xc002cded10) (0xc001ae4b40) Stream added, broadcasting: 3 I0515 22:10:40.298969 6 log.go:172] (0xc002cded10) Reply frame received for 3 I0515 22:10:40.299002 6 log.go:172] (0xc002cded10) (0xc0023428c0) Create stream I0515 22:10:40.299013 6 log.go:172] 
(0xc002cded10) (0xc0023428c0) Stream added, broadcasting: 5 I0515 22:10:40.299829 6 log.go:172] (0xc002cded10) Reply frame received for 5 I0515 22:10:40.360750 6 log.go:172] (0xc002cded10) Data frame received for 5 I0515 22:10:40.360784 6 log.go:172] (0xc0023428c0) (5) Data frame handling I0515 22:10:40.360804 6 log.go:172] (0xc002cded10) Data frame received for 3 I0515 22:10:40.360813 6 log.go:172] (0xc001ae4b40) (3) Data frame handling I0515 22:10:40.360828 6 log.go:172] (0xc001ae4b40) (3) Data frame sent I0515 22:10:40.360839 6 log.go:172] (0xc002cded10) Data frame received for 3 I0515 22:10:40.360848 6 log.go:172] (0xc001ae4b40) (3) Data frame handling I0515 22:10:40.362828 6 log.go:172] (0xc002cded10) Data frame received for 1 I0515 22:10:40.362860 6 log.go:172] (0xc001ae4aa0) (1) Data frame handling I0515 22:10:40.362875 6 log.go:172] (0xc001ae4aa0) (1) Data frame sent I0515 22:10:40.362893 6 log.go:172] (0xc002cded10) (0xc001ae4aa0) Stream removed, broadcasting: 1 I0515 22:10:40.362995 6 log.go:172] (0xc002cded10) (0xc001ae4aa0) Stream removed, broadcasting: 1 I0515 22:10:40.363013 6 log.go:172] (0xc002cded10) (0xc001ae4b40) Stream removed, broadcasting: 3 I0515 22:10:40.363023 6 log.go:172] (0xc002cded10) (0xc0023428c0) Stream removed, broadcasting: 5 May 15 22:10:40.363: INFO: Exec stderr: "" May 15 22:10:40.363: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3812 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 15 22:10:40.363: INFO: >>> kubeConfig: /root/.kube/config I0515 22:10:40.363149 6 log.go:172] (0xc002cded10) Go away received I0515 22:10:40.395337 6 log.go:172] (0xc0053d6a50) (0xc0011c9f40) Create stream I0515 22:10:40.395368 6 log.go:172] (0xc0053d6a50) (0xc0011c9f40) Stream added, broadcasting: 1 I0515 22:10:40.397369 6 log.go:172] (0xc0053d6a50) Reply frame received for 1 I0515 22:10:40.397411 6 log.go:172] (0xc0053d6a50) (0xc0016a06e0) Create stream I0515 22:10:40.397425 6 log.go:172] (0xc0053d6a50) (0xc0016a06e0) Stream added, broadcasting: 3 I0515 22:10:40.398195 6 log.go:172] (0xc0053d6a50) Reply frame received for 3 I0515 22:10:40.398235 6 log.go:172] (0xc0053d6a50) (0xc001ae4be0) Create stream I0515 22:10:40.398252 6 log.go:172] (0xc0053d6a50) (0xc001ae4be0) Stream added, broadcasting: 5 I0515 22:10:40.398971 6 log.go:172] (0xc0053d6a50) Reply frame received for 5 I0515 22:10:40.474166 6 log.go:172] (0xc0053d6a50) Data frame received for 5 I0515 22:10:40.474193 6 log.go:172] (0xc001ae4be0) (5) Data frame handling I0515 22:10:40.474215 6 log.go:172] (0xc0053d6a50) Data frame received for 3 I0515 22:10:40.474223 6 log.go:172] (0xc0016a06e0) (3) Data frame handling I0515 22:10:40.474233 6 log.go:172] (0xc0016a06e0) (3) Data frame sent I0515 22:10:40.474240 6 log.go:172] (0xc0053d6a50) Data frame received for 3 I0515 22:10:40.474246 6 log.go:172] (0xc0016a06e0) (3) Data frame handling I0515 22:10:40.475487 6 log.go:172] (0xc0053d6a50) Data frame received for 1 I0515 22:10:40.475503 6 log.go:172] (0xc0011c9f40) (1) Data frame handling I0515 22:10:40.475524 6 log.go:172] (0xc0011c9f40) (1) Data frame sent I0515 22:10:40.475537 6 log.go:172] (0xc0053d6a50) (0xc0011c9f40) Stream removed, broadcasting: 1 I0515 22:10:40.475547 6 log.go:172] (0xc0053d6a50) Go away received I0515 22:10:40.475710 6 log.go:172] (0xc0053d6a50) (0xc0011c9f40) Stream removed, broadcasting: 1 I0515 22:10:40.475742 6 log.go:172] (0xc0053d6a50) (0xc0016a06e0) Stream removed, broadcasting: 3 
I0515 22:10:40.475754 6 log.go:172] (0xc0053d6a50) (0xc001ae4be0) Stream removed, broadcasting: 5 May 15 22:10:40.475: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount May 15 22:10:40.475: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3812 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 15 22:10:40.475: INFO: >>> kubeConfig: /root/.kube/config I0515 22:10:40.506432 6 log.go:172] (0xc002cdf340) (0xc001ae4f00) Create stream I0515 22:10:40.506461 6 log.go:172] (0xc002cdf340) (0xc001ae4f00) Stream added, broadcasting: 1 I0515 22:10:40.508512 6 log.go:172] (0xc002cdf340) Reply frame received for 1 I0515 22:10:40.508542 6 log.go:172] (0xc002cdf340) (0xc002342a00) Create stream I0515 22:10:40.508551 6 log.go:172] (0xc002cdf340) (0xc002342a00) Stream added, broadcasting: 3 I0515 22:10:40.509615 6 log.go:172] (0xc002cdf340) Reply frame received for 3 I0515 22:10:40.509643 6 log.go:172] (0xc002cdf340) (0xc002342b40) Create stream I0515 22:10:40.509653 6 log.go:172] (0xc002cdf340) (0xc002342b40) Stream added, broadcasting: 5 I0515 22:10:40.510290 6 log.go:172] (0xc002cdf340) Reply frame received for 5 I0515 22:10:40.574781 6 log.go:172] (0xc002cdf340) Data frame received for 5 I0515 22:10:40.574821 6 log.go:172] (0xc002342b40) (5) Data frame handling I0515 22:10:40.574847 6 log.go:172] (0xc002cdf340) Data frame received for 3 I0515 22:10:40.574860 6 log.go:172] (0xc002342a00) (3) Data frame handling I0515 22:10:40.574878 6 log.go:172] (0xc002342a00) (3) Data frame sent I0515 22:10:40.574889 6 log.go:172] (0xc002cdf340) Data frame received for 3 I0515 22:10:40.574898 6 log.go:172] (0xc002342a00) (3) Data frame handling I0515 22:10:40.576610 6 log.go:172] (0xc002cdf340) Data frame received for 1 I0515 22:10:40.576641 6 log.go:172] (0xc001ae4f00) (1) Data frame handling I0515 22:10:40.576665 6 log.go:172] (0xc001ae4f00) (1) Data frame sent I0515 22:10:40.576687 6 log.go:172] (0xc002cdf340) (0xc001ae4f00) Stream removed, broadcasting: 1 I0515 22:10:40.576699 6 log.go:172] (0xc002cdf340) Go away received I0515 22:10:40.576843 6 log.go:172] (0xc002cdf340) (0xc001ae4f00) Stream removed, broadcasting: 1 I0515 22:10:40.576915 6 log.go:172] (0xc002cdf340) (0xc002342a00) Stream removed, broadcasting: 3 I0515 22:10:40.576960 6 log.go:172] (0xc002cdf340) (0xc002342b40) Stream removed, broadcasting: 5 May 15 22:10:40.576: INFO: Exec stderr: "" May 15 22:10:40.577: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3812 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 15 22:10:40.577: INFO: >>> kubeConfig: /root/.kube/config I0515 22:10:40.604210 6 log.go:172] (0xc002432630) (0xc002343360) Create stream I0515 22:10:40.604239 6 log.go:172] (0xc002432630) (0xc002343360) Stream added, broadcasting: 1 I0515 22:10:40.611571 6 log.go:172] (0xc002432630) Reply frame received for 1 I0515 22:10:40.611627 6 log.go:172] (0xc002432630) (0xc001ae5040) Create stream I0515 22:10:40.611647 6 log.go:172] (0xc002432630) (0xc001ae5040) Stream added, broadcasting: 3 I0515 22:10:40.613038 6 log.go:172] (0xc002432630) Reply frame received for 3 I0515 22:10:40.613095 6 log.go:172] (0xc002432630) (0xc002343400) Create stream I0515 22:10:40.613273 6 log.go:172] (0xc002432630) (0xc002343400) Stream added, broadcasting: 5 I0515 22:10:40.614866 6 log.go:172] 
(0xc002432630) Reply frame received for 5 I0515 22:10:40.691244 6 log.go:172] (0xc002432630) Data frame received for 5 I0515 22:10:40.691266 6 log.go:172] (0xc002343400) (5) Data frame handling I0515 22:10:40.691295 6 log.go:172] (0xc002432630) Data frame received for 3 I0515 22:10:40.691327 6 log.go:172] (0xc001ae5040) (3) Data frame handling I0515 22:10:40.691352 6 log.go:172] (0xc001ae5040) (3) Data frame sent I0515 22:10:40.691363 6 log.go:172] (0xc002432630) Data frame received for 3 I0515 22:10:40.691374 6 log.go:172] (0xc001ae5040) (3) Data frame handling I0515 22:10:40.692657 6 log.go:172] (0xc002432630) Data frame received for 1 I0515 22:10:40.692683 6 log.go:172] (0xc002343360) (1) Data frame handling I0515 22:10:40.692704 6 log.go:172] (0xc002343360) (1) Data frame sent I0515 22:10:40.692719 6 log.go:172] (0xc002432630) (0xc002343360) Stream removed, broadcasting: 1 I0515 22:10:40.692777 6 log.go:172] (0xc002432630) Go away received I0515 22:10:40.692856 6 log.go:172] (0xc002432630) (0xc002343360) Stream removed, broadcasting: 1 I0515 22:10:40.692880 6 log.go:172] (0xc002432630) (0xc001ae5040) Stream removed, broadcasting: 3 I0515 22:10:40.692905 6 log.go:172] (0xc002432630) (0xc002343400) Stream removed, broadcasting: 5 May 15 22:10:40.692: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true May 15 22:10:40.692: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3812 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 15 22:10:40.693: INFO: >>> kubeConfig: /root/.kube/config I0515 22:10:40.724789 6 log.go:172] (0xc002cdf970) (0xc001ae5540) Create stream I0515 22:10:40.724823 6 log.go:172] (0xc002cdf970) (0xc001ae5540) Stream added, broadcasting: 1 I0515 22:10:40.727152 6 log.go:172] (0xc002cdf970) Reply frame received for 1 I0515 22:10:40.727197 6 log.go:172] (0xc002cdf970) (0xc000cdab40) Create stream I0515 22:10:40.727212 6 log.go:172] (0xc002cdf970) (0xc000cdab40) Stream added, broadcasting: 3 I0515 22:10:40.728301 6 log.go:172] (0xc002cdf970) Reply frame received for 3 I0515 22:10:40.728341 6 log.go:172] (0xc002cdf970) (0xc0023434a0) Create stream I0515 22:10:40.728354 6 log.go:172] (0xc002cdf970) (0xc0023434a0) Stream added, broadcasting: 5 I0515 22:10:40.729570 6 log.go:172] (0xc002cdf970) Reply frame received for 5 I0515 22:10:40.797283 6 log.go:172] (0xc002cdf970) Data frame received for 3 I0515 22:10:40.797317 6 log.go:172] (0xc000cdab40) (3) Data frame handling I0515 22:10:40.797329 6 log.go:172] (0xc000cdab40) (3) Data frame sent I0515 22:10:40.797405 6 log.go:172] (0xc002cdf970) Data frame received for 5 I0515 22:10:40.797451 6 log.go:172] (0xc0023434a0) (5) Data frame handling I0515 22:10:40.797465 6 log.go:172] (0xc002cdf970) Data frame received for 3 I0515 22:10:40.797478 6 log.go:172] (0xc000cdab40) (3) Data frame handling I0515 22:10:40.799152 6 log.go:172] (0xc002cdf970) Data frame received for 1 I0515 22:10:40.799196 6 log.go:172] (0xc001ae5540) (1) Data frame handling I0515 22:10:40.799234 6 log.go:172] (0xc001ae5540) (1) Data frame sent I0515 22:10:40.799252 6 log.go:172] (0xc002cdf970) (0xc001ae5540) Stream removed, broadcasting: 1 I0515 22:10:40.799285 6 log.go:172] (0xc002cdf970) Go away received I0515 22:10:40.799375 6 log.go:172] (0xc002cdf970) (0xc001ae5540) Stream removed, broadcasting: 1 I0515 22:10:40.799395 6 log.go:172] (0xc002cdf970) (0xc000cdab40) Stream removed, 
broadcasting: 3 I0515 22:10:40.799406 6 log.go:172] (0xc002cdf970) (0xc0023434a0) Stream removed, broadcasting: 5 May 15 22:10:40.799: INFO: Exec stderr: "" May 15 22:10:40.799: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3812 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 15 22:10:40.799: INFO: >>> kubeConfig: /root/.kube/config I0515 22:10:40.834769 6 log.go:172] (0xc001385e40) (0xc000cdb180) Create stream I0515 22:10:40.834792 6 log.go:172] (0xc001385e40) (0xc000cdb180) Stream added, broadcasting: 1 I0515 22:10:40.836284 6 log.go:172] (0xc001385e40) Reply frame received for 1 I0515 22:10:40.836321 6 log.go:172] (0xc001385e40) (0xc001ae5680) Create stream I0515 22:10:40.836333 6 log.go:172] (0xc001385e40) (0xc001ae5680) Stream added, broadcasting: 3 I0515 22:10:40.836974 6 log.go:172] (0xc001385e40) Reply frame received for 3 I0515 22:10:40.836996 6 log.go:172] (0xc001385e40) (0xc001f30000) Create stream I0515 22:10:40.837004 6 log.go:172] (0xc001385e40) (0xc001f30000) Stream added, broadcasting: 5 I0515 22:10:40.837708 6 log.go:172] (0xc001385e40) Reply frame received for 5 I0515 22:10:40.902481 6 log.go:172] (0xc001385e40) Data frame received for 5 I0515 22:10:40.902513 6 log.go:172] (0xc001f30000) (5) Data frame handling I0515 22:10:40.902540 6 log.go:172] (0xc001385e40) Data frame received for 3 I0515 22:10:40.902551 6 log.go:172] (0xc001ae5680) (3) Data frame handling I0515 22:10:40.902561 6 log.go:172] (0xc001ae5680) (3) Data frame sent I0515 22:10:40.902601 6 log.go:172] (0xc001385e40) Data frame received for 3 I0515 22:10:40.902620 6 log.go:172] (0xc001ae5680) (3) Data frame handling I0515 22:10:40.904279 6 log.go:172] (0xc001385e40) Data frame received for 1 I0515 22:10:40.904297 6 log.go:172] (0xc000cdb180) (1) Data frame handling I0515 22:10:40.904305 6 log.go:172] (0xc000cdb180) (1) Data frame sent I0515 22:10:40.904313 6 log.go:172] (0xc001385e40) (0xc000cdb180) Stream removed, broadcasting: 1 I0515 22:10:40.904321 6 log.go:172] (0xc001385e40) Go away received I0515 22:10:40.904510 6 log.go:172] (0xc001385e40) (0xc000cdb180) Stream removed, broadcasting: 1 I0515 22:10:40.904536 6 log.go:172] (0xc001385e40) (0xc001ae5680) Stream removed, broadcasting: 3 I0515 22:10:40.904554 6 log.go:172] (0xc001385e40) (0xc001f30000) Stream removed, broadcasting: 5 May 15 22:10:40.904: INFO: Exec stderr: "" May 15 22:10:40.904: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3812 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 15 22:10:40.904: INFO: >>> kubeConfig: /root/.kube/config I0515 22:10:40.937685 6 log.go:172] (0xc0053d7080) (0xc001f30780) Create stream I0515 22:10:40.937712 6 log.go:172] (0xc0053d7080) (0xc001f30780) Stream added, broadcasting: 1 I0515 22:10:40.940120 6 log.go:172] (0xc0053d7080) Reply frame received for 1 I0515 22:10:40.940170 6 log.go:172] (0xc0053d7080) (0xc0023435e0) Create stream I0515 22:10:40.940195 6 log.go:172] (0xc0053d7080) (0xc0023435e0) Stream added, broadcasting: 3 I0515 22:10:40.941966 6 log.go:172] (0xc0053d7080) Reply frame received for 3 I0515 22:10:40.942013 6 log.go:172] (0xc0053d7080) (0xc002343680) Create stream I0515 22:10:40.942026 6 log.go:172] (0xc0053d7080) (0xc002343680) Stream added, broadcasting: 5 I0515 22:10:40.943282 6 log.go:172] (0xc0053d7080) Reply frame received for 5 I0515 22:10:41.002563 6 
log.go:172] (0xc0053d7080) Data frame received for 5 I0515 22:10:41.002588 6 log.go:172] (0xc002343680) (5) Data frame handling I0515 22:10:41.002618 6 log.go:172] (0xc0053d7080) Data frame received for 3 I0515 22:10:41.002650 6 log.go:172] (0xc0023435e0) (3) Data frame handling I0515 22:10:41.002672 6 log.go:172] (0xc0023435e0) (3) Data frame sent I0515 22:10:41.002697 6 log.go:172] (0xc0053d7080) Data frame received for 3 I0515 22:10:41.002722 6 log.go:172] (0xc0023435e0) (3) Data frame handling I0515 22:10:41.004154 6 log.go:172] (0xc0053d7080) Data frame received for 1 I0515 22:10:41.004188 6 log.go:172] (0xc001f30780) (1) Data frame handling I0515 22:10:41.004213 6 log.go:172] (0xc001f30780) (1) Data frame sent I0515 22:10:41.004245 6 log.go:172] (0xc0053d7080) (0xc001f30780) Stream removed, broadcasting: 1 I0515 22:10:41.004269 6 log.go:172] (0xc0053d7080) Go away received I0515 22:10:41.004394 6 log.go:172] (0xc0053d7080) (0xc001f30780) Stream removed, broadcasting: 1 I0515 22:10:41.004413 6 log.go:172] (0xc0053d7080) (0xc0023435e0) Stream removed, broadcasting: 3 I0515 22:10:41.004423 6 log.go:172] (0xc0053d7080) (0xc002343680) Stream removed, broadcasting: 5 May 15 22:10:41.004: INFO: Exec stderr: "" May 15 22:10:41.004: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3812 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 15 22:10:41.004: INFO: >>> kubeConfig: /root/.kube/config I0515 22:10:41.030956 6 log.go:172] (0xc002330210) (0xc0011c80a0) Create stream I0515 22:10:41.031014 6 log.go:172] (0xc002330210) (0xc0011c80a0) Stream added, broadcasting: 1 I0515 22:10:41.032692 6 log.go:172] (0xc002330210) Reply frame received for 1 I0515 22:10:41.032725 6 log.go:172] (0xc002330210) (0xc00190c0a0) Create stream I0515 22:10:41.032742 6 log.go:172] (0xc002330210) (0xc00190c0a0) Stream added, broadcasting: 3 I0515 22:10:41.033672 6 log.go:172] (0xc002330210) Reply frame received for 3 I0515 22:10:41.033704 6 log.go:172] (0xc002330210) (0xc00190c140) Create stream I0515 22:10:41.033715 6 log.go:172] (0xc002330210) (0xc00190c140) Stream added, broadcasting: 5 I0515 22:10:41.034537 6 log.go:172] (0xc002330210) Reply frame received for 5 I0515 22:10:41.086667 6 log.go:172] (0xc002330210) Data frame received for 3 I0515 22:10:41.086703 6 log.go:172] (0xc00190c0a0) (3) Data frame handling I0515 22:10:41.086740 6 log.go:172] (0xc00190c0a0) (3) Data frame sent I0515 22:10:41.086757 6 log.go:172] (0xc002330210) Data frame received for 3 I0515 22:10:41.086769 6 log.go:172] (0xc00190c0a0) (3) Data frame handling I0515 22:10:41.086878 6 log.go:172] (0xc002330210) Data frame received for 5 I0515 22:10:41.086904 6 log.go:172] (0xc00190c140) (5) Data frame handling I0515 22:10:41.088345 6 log.go:172] (0xc002330210) Data frame received for 1 I0515 22:10:41.088378 6 log.go:172] (0xc0011c80a0) (1) Data frame handling I0515 22:10:41.088391 6 log.go:172] (0xc0011c80a0) (1) Data frame sent I0515 22:10:41.088405 6 log.go:172] (0xc002330210) (0xc0011c80a0) Stream removed, broadcasting: 1 I0515 22:10:41.088422 6 log.go:172] (0xc002330210) Go away received I0515 22:10:41.088565 6 log.go:172] (0xc002330210) (0xc0011c80a0) Stream removed, broadcasting: 1 I0515 22:10:41.088589 6 log.go:172] (0xc002330210) (0xc00190c0a0) Stream removed, broadcasting: 3 I0515 22:10:41.088609 6 log.go:172] (0xc002330210) (0xc00190c140) Stream removed, broadcasting: 5 May 15 22:10:41.088: INFO: Exec stderr: "" 
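Before the teardown below: the pod layout driving all of the exec checks above is three busybox containers in one pod, where only busybox-3 mounts /etc/hosts itself, so the kubelet manages that file for busybox-1 and busybox-2 but must leave busybox-3's own mount alone (and must not manage /etc/hosts at all in the hostNetwork=true pod). A sketch of the non-hostNetwork pod (image and commands assumed):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// busybox-3 supplies its own /etc/hosts mount, which opts it out of
	// kubelet management of that file.
	hostsMount := corev1.VolumeMount{Name: "host-etc-hosts", MountPath: "/etc/hosts"}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-pod"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "host-etc-hosts",
				VolumeSource: corev1.VolumeSource{
					HostPath: &corev1.HostPathVolumeSource{Path: "/etc/hosts"},
				},
			}},
			Containers: []corev1.Container{
				{Name: "busybox-1", Image: "busybox", Command: []string{"sleep", "600"}},
				{Name: "busybox-2", Image: "busybox", Command: []string{"sleep", "600"}},
				{Name: "busybox-3", Image: "busybox", Command: []string{"sleep", "600"},
					VolumeMounts: []corev1.VolumeMount{hostsMount}},
			},
		},
	}
	fmt.Println(pod.Name, "has", len(pod.Spec.Containers), "containers")
}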
[AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:10:41.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-3812" for this suite. • [SLOW TEST:15.737 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":200,"skipped":3197,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:10:41.098: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override command May 15 22:10:41.190: INFO: Waiting up to 5m0s for pod "client-containers-221dd10a-f55a-4427-a35b-1cf570b1590c" in namespace "containers-256" to be "success or failure" May 15 22:10:41.194: INFO: Pod "client-containers-221dd10a-f55a-4427-a35b-1cf570b1590c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.382458ms May 15 22:10:43.215: INFO: Pod "client-containers-221dd10a-f55a-4427-a35b-1cf570b1590c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024885406s May 15 22:10:45.287: INFO: Pod "client-containers-221dd10a-f55a-4427-a35b-1cf570b1590c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.096990787s STEP: Saw pod success May 15 22:10:45.287: INFO: Pod "client-containers-221dd10a-f55a-4427-a35b-1cf570b1590c" satisfied condition "success or failure" May 15 22:10:45.290: INFO: Trying to get logs from node jerma-worker pod client-containers-221dd10a-f55a-4427-a35b-1cf570b1590c container test-container: STEP: delete the pod May 15 22:10:45.338: INFO: Waiting for pod client-containers-221dd10a-f55a-4427-a35b-1cf570b1590c to disappear May 15 22:10:45.509: INFO: Pod client-containers-221dd10a-f55a-4427-a35b-1cf570b1590c no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:10:45.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-256" for this suite. 
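As the test name above says, Command in the pod API overrides the image's ENTRYPOINT (and Args overrides its CMD); leaving Command unset keeps the image default. A minimal sketch (image and command are illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "client-containers"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				// Replaces the image ENTRYPOINT entirely; the test then
				// checks the container output to confirm the override took.
				Command: []string{"echo", "overridden entrypoint"},
			}},
		},
	}
	fmt.Println(pod.Spec.Containers[0].Command)
}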
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":278,"completed":201,"skipped":3225,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:10:45.519: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 15 22:10:50.478: INFO: Successfully updated pod "pod-update-0ecc1379-1f23-4904-a835-a132917e13b2" STEP: verifying the updated pod is in kubernetes May 15 22:10:50.619: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:10:50.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4466" for this suite. • [SLOW TEST:5.118 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":278,"completed":202,"skipped":3248,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:10:50.637: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready May 15 22:10:51.253: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set May 15 22:10:53.304: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725177451, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725177451, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725177451, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725177451, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} May 15 22:10:55.319: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725177451, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725177451, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725177451, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725177451, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 15 22:10:58.492: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 15 22:10:58.497: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:10:59.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-6987" for this suite. 
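The conversion path exercised above is configured on the CRD itself: a Webhook conversion strategy pointing at the e2e-test-crd-conversion-webhook service, so reading the custom resource as v2 round-trips it through the converter. A hedged sketch of that stanza with apiextensions/v1 types (path and port are assumptions):

package main

import (
	"fmt"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
)

func main() {
	path := "/crdconvert" // hypothetical converter path
	port := int32(9443)
	conversion := &apiextensionsv1.CustomResourceConversion{
		Strategy: apiextensionsv1.WebhookConverter,
		Webhook: &apiextensionsv1.WebhookConversion{
			ClientConfig: &apiextensionsv1.WebhookClientConfig{
				Service: &apiextensionsv1.ServiceReference{
					Namespace: "crd-webhook",
					Name:      "e2e-test-crd-conversion-webhook",
					Path:      &path,
					Port:      &port,
				},
			},
			ConversionReviewVersions: []string{"v1", "v1beta1"},
		},
	}
	fmt.Println("conversion strategy:", conversion.Strategy)
}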
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:9.272 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":278,"completed":203,"skipped":3255,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:10:59.909: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-e0dab724-99e9-4df1-94d0-2dfde87bca81 STEP: Creating a pod to test consume configMaps May 15 22:10:59.963: INFO: Waiting up to 5m0s for pod "pod-configmaps-28b331cf-3b68-4906-8d09-954674823b97" in namespace "configmap-9480" to be "success or failure" May 15 22:11:00.012: INFO: Pod "pod-configmaps-28b331cf-3b68-4906-8d09-954674823b97": Phase="Pending", Reason="", readiness=false. Elapsed: 49.073918ms May 15 22:11:02.016: INFO: Pod "pod-configmaps-28b331cf-3b68-4906-8d09-954674823b97": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053258539s May 15 22:11:04.048: INFO: Pod "pod-configmaps-28b331cf-3b68-4906-8d09-954674823b97": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.085163902s STEP: Saw pod success May 15 22:11:04.048: INFO: Pod "pod-configmaps-28b331cf-3b68-4906-8d09-954674823b97" satisfied condition "success or failure" May 15 22:11:04.051: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-28b331cf-3b68-4906-8d09-954674823b97 container configmap-volume-test: STEP: delete the pod May 15 22:11:04.092: INFO: Waiting for pod pod-configmaps-28b331cf-3b68-4906-8d09-954674823b97 to disappear May 15 22:11:04.098: INFO: Pod pod-configmaps-28b331cf-3b68-4906-8d09-954674823b97 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:11:04.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9480" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":204,"skipped":3269,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:11:04.105: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation May 15 22:11:04.181: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation May 15 22:11:15.543: INFO: >>> kubeConfig: /root/.kube/config May 15 22:11:18.439: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:11:30.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4526" for this suite. 
• [SLOW TEST:26.176 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":278,"completed":205,"skipped":3295,"failed":0} SSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:11:30.281: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod test-webserver-8818d38a-4ed9-426f-9a03-514ec64ed4c2 in namespace container-probe-8212 May 15 22:11:34.364: INFO: Started pod test-webserver-8818d38a-4ed9-426f-9a03-514ec64ed4c2 in namespace container-probe-8212 STEP: checking the pod's current state and verifying that restartCount is present May 15 22:11:34.367: INFO: Initial restart count of pod test-webserver-8818d38a-4ed9-426f-9a03-514ec64ed4c2 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:15:35.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8212" for this suite. 
• [SLOW TEST:245.199 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":206,"skipped":3298,"failed":0} SSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:15:35.480: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token May 15 22:15:36.408: INFO: created pod pod-service-account-defaultsa May 15 22:15:36.408: INFO: pod pod-service-account-defaultsa service account token volume mount: true May 15 22:15:36.501: INFO: created pod pod-service-account-mountsa May 15 22:15:36.501: INFO: pod pod-service-account-mountsa service account token volume mount: true May 15 22:15:36.504: INFO: created pod pod-service-account-nomountsa May 15 22:15:36.504: INFO: pod pod-service-account-nomountsa service account token volume mount: false May 15 22:15:36.547: INFO: created pod pod-service-account-defaultsa-mountspec May 15 22:15:36.547: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true May 15 22:15:36.559: INFO: created pod pod-service-account-mountsa-mountspec May 15 22:15:36.559: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true May 15 22:15:36.655: INFO: created pod pod-service-account-nomountsa-mountspec May 15 22:15:36.655: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true May 15 22:15:36.673: INFO: created pod pod-service-account-defaultsa-nomountspec May 15 22:15:36.673: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false May 15 22:15:36.694: INFO: created pod pod-service-account-mountsa-nomountspec May 15 22:15:36.694: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false May 15 22:15:36.741: INFO: created pod pod-service-account-nomountsa-nomountspec May 15 22:15:36.741: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:15:36.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-6905" for this suite. 
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":278,"completed":207,"skipped":3301,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:15:36.958: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod May 15 22:15:50.568: INFO: Successfully updated pod "labelsupdate5346b700-8c69-489c-be33-83cad3b8136e" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:15:52.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4357" for this suite. • [SLOW TEST:15.970 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":208,"skipped":3361,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:15:52.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test env composition May 15 22:15:53.059: INFO: Waiting up to 5m0s for pod "var-expansion-4e4044e0-8286-4188-9ab3-291b58df52b6" in namespace "var-expansion-4093" to be "success or failure" May 15 22:15:53.074: INFO: Pod "var-expansion-4e4044e0-8286-4188-9ab3-291b58df52b6": Phase="Pending", Reason="", readiness=false. Elapsed: 15.461007ms May 15 22:15:55.171: INFO: Pod "var-expansion-4e4044e0-8286-4188-9ab3-291b58df52b6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.111878166s May 15 22:15:57.175: INFO: Pod "var-expansion-4e4044e0-8286-4188-9ab3-291b58df52b6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.116379712s STEP: Saw pod success May 15 22:15:57.175: INFO: Pod "var-expansion-4e4044e0-8286-4188-9ab3-291b58df52b6" satisfied condition "success or failure" May 15 22:15:57.179: INFO: Trying to get logs from node jerma-worker pod var-expansion-4e4044e0-8286-4188-9ab3-291b58df52b6 container dapi-container: STEP: delete the pod May 15 22:15:57.273: INFO: Waiting for pod var-expansion-4e4044e0-8286-4188-9ab3-291b58df52b6 to disappear May 15 22:15:57.308: INFO: Pod var-expansion-4e4044e0-8286-4188-9ab3-291b58df52b6 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:15:57.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4093" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":278,"completed":209,"skipped":3379,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:15:57.336: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
May 15 22:15:57.531: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 22:15:57.535: INFO: Number of nodes with available pods: 0 May 15 22:15:57.535: INFO: Node jerma-worker is running more than one daemon pod May 15 22:15:58.540: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 22:15:58.544: INFO: Number of nodes with available pods: 0 May 15 22:15:58.544: INFO: Node jerma-worker is running more than one daemon pod May 15 22:15:59.557: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 22:15:59.567: INFO: Number of nodes with available pods: 0 May 15 22:15:59.567: INFO: Node jerma-worker is running more than one daemon pod May 15 22:16:00.540: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 22:16:00.544: INFO: Number of nodes with available pods: 0 May 15 22:16:00.544: INFO: Node jerma-worker is running more than one daemon pod May 15 22:16:01.539: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 22:16:01.554: INFO: Number of nodes with available pods: 0 May 15 22:16:01.554: INFO: Node jerma-worker is running more than one daemon pod May 15 22:16:02.538: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 22:16:02.544: INFO: Number of nodes with available pods: 2 May 15 22:16:02.544: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
May 15 22:16:02.615: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 22:16:02.628: INFO: Number of nodes with available pods: 1 May 15 22:16:02.628: INFO: Node jerma-worker is running more than one daemon pod May 15 22:16:03.640: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 22:16:03.670: INFO: Number of nodes with available pods: 1 May 15 22:16:03.671: INFO: Node jerma-worker is running more than one daemon pod May 15 22:16:04.736: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 22:16:04.739: INFO: Number of nodes with available pods: 1 May 15 22:16:04.739: INFO: Node jerma-worker is running more than one daemon pod May 15 22:16:05.634: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 22:16:05.637: INFO: Number of nodes with available pods: 1 May 15 22:16:05.637: INFO: Node jerma-worker is running more than one daemon pod May 15 22:16:06.634: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 15 22:16:06.637: INFO: Number of nodes with available pods: 2 May 15 22:16:06.637: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9110, will wait for the garbage collector to delete the pods May 15 22:16:06.701: INFO: Deleting DaemonSet.extensions daemon-set took: 6.461383ms May 15 22:16:08.801: INFO: Terminating DaemonSet.extensions daemon-set pods took: 2.100322899s May 15 22:16:11.706: INFO: Number of nodes with available pods: 0 May 15 22:16:11.706: INFO: Number of running nodes: 0, number of available pods: 0 May 15 22:16:11.708: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9110/daemonsets","resourceVersion":"16484492"},"items":null} May 15 22:16:11.711: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9110/pods","resourceVersion":"16484492"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:16:11.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-9110" for this suite. 
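The DaemonSet exercised above is deliberately simple, and the repeated "can't tolerate node jerma-control-plane" lines explain the node count: without a toleration for the master taint, pods land only on the two workers. A sketch of an equivalent manifest; the label and image are assumptions, not the suite's actual test image:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set                   # assumed label
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
        - name: app
          image: busybox                # assumed
          command: ["sleep", "3600"]
      # no toleration for node-role.kubernetes.io/master:NoSchedule,
      # so the control-plane node is skipped, as the log shows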
• [SLOW TEST:14.388 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":278,"completed":210,"skipped":3410,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:16:11.725: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-projected-mjkc STEP: Creating a pod to test atomic-volume-subpath May 15 22:16:11.919: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-mjkc" in namespace "subpath-4544" to be "success or failure" May 15 22:16:11.926: INFO: Pod "pod-subpath-test-projected-mjkc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.623521ms May 15 22:16:13.931: INFO: Pod "pod-subpath-test-projected-mjkc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011326814s May 15 22:16:15.935: INFO: Pod "pod-subpath-test-projected-mjkc": Phase="Running", Reason="", readiness=true. Elapsed: 4.01538354s May 15 22:16:17.962: INFO: Pod "pod-subpath-test-projected-mjkc": Phase="Running", Reason="", readiness=true. Elapsed: 6.04245693s May 15 22:16:19.966: INFO: Pod "pod-subpath-test-projected-mjkc": Phase="Running", Reason="", readiness=true. Elapsed: 8.046759299s May 15 22:16:21.970: INFO: Pod "pod-subpath-test-projected-mjkc": Phase="Running", Reason="", readiness=true. Elapsed: 10.050751832s May 15 22:16:23.974: INFO: Pod "pod-subpath-test-projected-mjkc": Phase="Running", Reason="", readiness=true. Elapsed: 12.054835485s May 15 22:16:25.979: INFO: Pod "pod-subpath-test-projected-mjkc": Phase="Running", Reason="", readiness=true. Elapsed: 14.05963535s May 15 22:16:27.983: INFO: Pod "pod-subpath-test-projected-mjkc": Phase="Running", Reason="", readiness=true. Elapsed: 16.063303168s May 15 22:16:30.082: INFO: Pod "pod-subpath-test-projected-mjkc": Phase="Running", Reason="", readiness=true. Elapsed: 18.162931946s May 15 22:16:32.087: INFO: Pod "pod-subpath-test-projected-mjkc": Phase="Running", Reason="", readiness=true. Elapsed: 20.167119865s May 15 22:16:34.091: INFO: Pod "pod-subpath-test-projected-mjkc": Phase="Running", Reason="", readiness=true. Elapsed: 22.171820611s May 15 22:16:36.096: INFO: Pod "pod-subpath-test-projected-mjkc": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.176124034s STEP: Saw pod success May 15 22:16:36.096: INFO: Pod "pod-subpath-test-projected-mjkc" satisfied condition "success or failure" May 15 22:16:36.098: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-projected-mjkc container test-container-subpath-projected-mjkc: STEP: delete the pod May 15 22:16:36.151: INFO: Waiting for pod pod-subpath-test-projected-mjkc to disappear May 15 22:16:36.195: INFO: Pod pod-subpath-test-projected-mjkc no longer exists STEP: Deleting pod pod-subpath-test-projected-mjkc May 15 22:16:36.195: INFO: Deleting pod "pod-subpath-test-projected-mjkc" in namespace "subpath-4544" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:16:36.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-4544" for this suite. • [SLOW TEST:24.481 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":278,"completed":211,"skipped":3443,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:16:36.207: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Starting the proxy May 15 22:16:36.271: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix150945945/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:16:36.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-586" for this suite. 
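Returning to the Atomic writer spec (pod-subpath-test-projected-mjkc): a subPath mount exposes a single entry of a projected volume rather than the whole directory, and the long Running phase above is the container repeatedly reading it while the test rewrites the source. A minimal sketch of the mechanism only, with volume name, ConfigMap, and key invented for illustration:

apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-projected-example
spec:
  restartPolicy: Never
  containers:
    - name: test-container-subpath-projected
      image: busybox                    # assumed
      command: ["sh", "-c", "cat /test-file"]
      volumeMounts:
        - name: projected-volume
          mountPath: /test-file         # mounts a single file from the volume
          subPath: data-1               # hypothetical key in the projected ConfigMap
  volumes:
    - name: projected-volume
      projected:
        sources:
          - configMap:
              name: my-projected-configmap   # hypothetical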
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":278,"completed":212,"skipped":3483,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:16:36.360: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 15 22:16:36.412: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-5832 I0515 22:16:36.434930 6 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-5832, replica count: 1 I0515 22:16:37.485502 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0515 22:16:38.485733 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0515 22:16:39.485944 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0515 22:16:40.486147 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0515 22:16:41.486373 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 15 22:16:41.614: INFO: Created: latency-svc-rbgsq May 15 22:16:41.631: INFO: Got endpoints: latency-svc-rbgsq [45.070664ms] May 15 22:16:41.714: INFO: Created: latency-svc-zr98h May 15 22:16:41.742: INFO: Got endpoints: latency-svc-zr98h [110.971332ms] May 15 22:16:41.873: INFO: Created: latency-svc-k259w May 15 22:16:41.881: INFO: Got endpoints: latency-svc-k259w [249.625386ms] May 15 22:16:41.912: INFO: Created: latency-svc-gnwxp May 15 22:16:41.937: INFO: Got endpoints: latency-svc-gnwxp [305.740595ms] May 15 22:16:41.960: INFO: Created: latency-svc-2cbwl May 15 22:16:42.004: INFO: Got endpoints: latency-svc-2cbwl [372.203388ms] May 15 22:16:42.016: INFO: Created: latency-svc-q6hpb May 15 22:16:42.032: INFO: Got endpoints: latency-svc-q6hpb [400.832382ms] May 15 22:16:42.064: INFO: Created: latency-svc-tfcfk May 15 22:16:42.075: INFO: Got endpoints: latency-svc-tfcfk [443.281102ms] May 15 22:16:42.100: INFO: Created: latency-svc-2dwtv May 15 22:16:42.154: INFO: Got endpoints: latency-svc-2dwtv [522.423782ms] May 15 22:16:42.191: INFO: Created: latency-svc-h7w8w May 15 22:16:42.204: INFO: Got endpoints: latency-svc-h7w8w [572.839966ms] May 15 22:16:42.238: INFO: Created: latency-svc-c4zxx May 15 22:16:42.286: INFO: Got endpoints: latency-svc-c4zxx [654.060157ms] May 15 22:16:42.298: INFO: Created: latency-svc-rkvpx May 15 22:16:42.309: INFO: Got endpoints: latency-svc-rkvpx [677.900189ms] May 15 22:16:42.333: INFO: Created: latency-svc-8447w May 15 22:16:42.363: INFO: 
Got endpoints: latency-svc-8447w [731.223878ms] May 15 22:16:42.435: INFO: Created: latency-svc-hq69q May 15 22:16:42.438: INFO: Got endpoints: latency-svc-hq69q [806.46412ms] May 15 22:16:42.470: INFO: Created: latency-svc-z8j9m May 15 22:16:42.487: INFO: Got endpoints: latency-svc-z8j9m [855.767131ms] May 15 22:16:42.515: INFO: Created: latency-svc-9jn7t May 15 22:16:42.651: INFO: Got endpoints: latency-svc-9jn7t [1.019770778s] May 15 22:16:42.653: INFO: Created: latency-svc-d8chv May 15 22:16:42.668: INFO: Got endpoints: latency-svc-d8chv [1.036106295s] May 15 22:16:42.719: INFO: Created: latency-svc-tth6s May 15 22:16:42.746: INFO: Got endpoints: latency-svc-tth6s [1.003687422s] May 15 22:16:42.830: INFO: Created: latency-svc-vclkd May 15 22:16:42.854: INFO: Got endpoints: latency-svc-vclkd [973.135394ms] May 15 22:16:42.927: INFO: Created: latency-svc-62tr7 May 15 22:16:43.080: INFO: Got endpoints: latency-svc-62tr7 [1.142389309s] May 15 22:16:43.107: INFO: Created: latency-svc-tdswt May 15 22:16:43.220: INFO: Got endpoints: latency-svc-tdswt [1.215954544s] May 15 22:16:43.246: INFO: Created: latency-svc-7gz2z May 15 22:16:43.289: INFO: Got endpoints: latency-svc-7gz2z [1.256578894s] May 15 22:16:43.376: INFO: Created: latency-svc-296gw May 15 22:16:43.391: INFO: Got endpoints: latency-svc-296gw [1.316120659s] May 15 22:16:43.452: INFO: Created: latency-svc-xf49l May 15 22:16:43.463: INFO: Got endpoints: latency-svc-xf49l [1.309373134s] May 15 22:16:43.563: INFO: Created: latency-svc-86dxx May 15 22:16:43.608: INFO: Got endpoints: latency-svc-86dxx [1.403272866s] May 15 22:16:43.764: INFO: Created: latency-svc-9xvr9 May 15 22:16:43.767: INFO: Got endpoints: latency-svc-9xvr9 [1.481589353s] May 15 22:16:43.824: INFO: Created: latency-svc-rglsh May 15 22:16:43.854: INFO: Got endpoints: latency-svc-rglsh [1.544594617s] May 15 22:16:43.944: INFO: Created: latency-svc-7tw69 May 15 22:16:43.992: INFO: Got endpoints: latency-svc-7tw69 [1.629290405s] May 15 22:16:43.992: INFO: Created: latency-svc-7wpps May 15 22:16:44.010: INFO: Got endpoints: latency-svc-7wpps [1.572070826s] May 15 22:16:44.034: INFO: Created: latency-svc-klvr2 May 15 22:16:44.101: INFO: Got endpoints: latency-svc-klvr2 [1.613293797s] May 15 22:16:44.109: INFO: Created: latency-svc-b8vlm May 15 22:16:44.115: INFO: Got endpoints: latency-svc-b8vlm [1.464200201s] May 15 22:16:44.141: INFO: Created: latency-svc-frlz7 May 15 22:16:44.152: INFO: Got endpoints: latency-svc-frlz7 [1.484740682s] May 15 22:16:44.190: INFO: Created: latency-svc-ccv9b May 15 22:16:44.238: INFO: Got endpoints: latency-svc-ccv9b [1.491511722s] May 15 22:16:44.274: INFO: Created: latency-svc-rvpgt May 15 22:16:44.297: INFO: Got endpoints: latency-svc-rvpgt [1.442955792s] May 15 22:16:44.320: INFO: Created: latency-svc-7b4x8 May 15 22:16:44.333: INFO: Got endpoints: latency-svc-7b4x8 [1.253203094s] May 15 22:16:44.382: INFO: Created: latency-svc-9n6zt May 15 22:16:44.385: INFO: Got endpoints: latency-svc-9n6zt [1.165168511s] May 15 22:16:44.418: INFO: Created: latency-svc-88l88 May 15 22:16:44.448: INFO: Got endpoints: latency-svc-88l88 [1.158775931s] May 15 22:16:44.479: INFO: Created: latency-svc-5pmtv May 15 22:16:44.513: INFO: Got endpoints: latency-svc-5pmtv [1.122327183s] May 15 22:16:44.530: INFO: Created: latency-svc-pwh8c May 15 22:16:44.547: INFO: Got endpoints: latency-svc-pwh8c [1.0833007s] May 15 22:16:44.571: INFO: Created: latency-svc-xx9h9 May 15 22:16:44.589: INFO: Got endpoints: latency-svc-xx9h9 [981.113314ms] May 15 22:16:44.645: INFO: 
Created: latency-svc-p5cv5 May 15 22:16:44.648: INFO: Got endpoints: latency-svc-p5cv5 [881.106148ms] May 15 22:16:44.676: INFO: Created: latency-svc-5vbf6 May 15 22:16:44.715: INFO: Got endpoints: latency-svc-5vbf6 [861.370551ms] May 15 22:16:44.741: INFO: Created: latency-svc-q9nnm May 15 22:16:44.776: INFO: Got endpoints: latency-svc-q9nnm [784.124814ms] May 15 22:16:44.815: INFO: Created: latency-svc-fqzcz May 15 22:16:44.815: INFO: Got endpoints: latency-svc-fqzcz [804.410117ms] May 15 22:16:44.824: INFO: Created: latency-svc-55kjq May 15 22:16:44.842: INFO: Got endpoints: latency-svc-55kjq [741.241962ms] May 15 22:16:44.868: INFO: Created: latency-svc-28vqq May 15 22:16:44.944: INFO: Got endpoints: latency-svc-28vqq [828.904064ms] May 15 22:16:44.975: INFO: Created: latency-svc-mspn9 May 15 22:16:44.984: INFO: Got endpoints: latency-svc-mspn9 [831.210079ms] May 15 22:16:45.010: INFO: Created: latency-svc-dbxfk May 15 22:16:45.030: INFO: Got endpoints: latency-svc-dbxfk [792.175894ms] May 15 22:16:45.119: INFO: Created: latency-svc-2rrbs May 15 22:16:45.122: INFO: Got endpoints: latency-svc-2rrbs [825.200897ms] May 15 22:16:45.196: INFO: Created: latency-svc-bnsqm May 15 22:16:45.206: INFO: Got endpoints: latency-svc-bnsqm [872.959936ms] May 15 22:16:45.256: INFO: Created: latency-svc-jvt9l May 15 22:16:45.267: INFO: Got endpoints: latency-svc-jvt9l [881.781581ms] May 15 22:16:45.287: INFO: Created: latency-svc-5h4k7 May 15 22:16:45.302: INFO: Got endpoints: latency-svc-5h4k7 [854.577862ms] May 15 22:16:45.324: INFO: Created: latency-svc-cjblw May 15 22:16:45.339: INFO: Got endpoints: latency-svc-cjblw [825.609693ms] May 15 22:16:45.394: INFO: Created: latency-svc-87twt May 15 22:16:45.412: INFO: Got endpoints: latency-svc-87twt [864.813157ms] May 15 22:16:45.451: INFO: Created: latency-svc-gp49q May 15 22:16:45.492: INFO: Got endpoints: latency-svc-gp49q [902.720027ms] May 15 22:16:45.584: INFO: Created: latency-svc-ltswf May 15 22:16:45.607: INFO: Got endpoints: latency-svc-ltswf [958.57243ms] May 15 22:16:45.629: INFO: Created: latency-svc-44pjq May 15 22:16:45.642: INFO: Got endpoints: latency-svc-44pjq [927.015713ms] May 15 22:16:45.699: INFO: Created: latency-svc-4jwp7 May 15 22:16:45.703: INFO: Got endpoints: latency-svc-4jwp7 [926.220066ms] May 15 22:16:45.732: INFO: Created: latency-svc-rl5rd May 15 22:16:45.745: INFO: Got endpoints: latency-svc-rl5rd [930.200304ms] May 15 22:16:45.780: INFO: Created: latency-svc-m7mtk May 15 22:16:45.793: INFO: Got endpoints: latency-svc-m7mtk [951.338914ms] May 15 22:16:45.849: INFO: Created: latency-svc-ccwdd May 15 22:16:45.853: INFO: Got endpoints: latency-svc-ccwdd [908.711179ms] May 15 22:16:45.918: INFO: Created: latency-svc-lcjnr May 15 22:16:45.944: INFO: Got endpoints: latency-svc-lcjnr [960.022464ms] May 15 22:16:45.992: INFO: Created: latency-svc-5hj2n May 15 22:16:46.010: INFO: Got endpoints: latency-svc-5hj2n [980.173574ms] May 15 22:16:46.035: INFO: Created: latency-svc-q5fmd May 15 22:16:46.066: INFO: Got endpoints: latency-svc-q5fmd [943.07923ms] May 15 22:16:46.131: INFO: Created: latency-svc-lxwf5 May 15 22:16:46.151: INFO: Created: latency-svc-fsmjf May 15 22:16:46.152: INFO: Got endpoints: latency-svc-lxwf5 [945.751658ms] May 15 22:16:46.161: INFO: Got endpoints: latency-svc-fsmjf [893.854154ms] May 15 22:16:46.195: INFO: Created: latency-svc-28nmq May 15 22:16:46.203: INFO: Got endpoints: latency-svc-28nmq [900.178241ms] May 15 22:16:46.298: INFO: Created: latency-svc-jsrgd May 15 22:16:46.303: INFO: Got endpoints: 
latency-svc-jsrgd [963.846058ms] May 15 22:16:46.351: INFO: Created: latency-svc-9rq2b May 15 22:16:46.359: INFO: Got endpoints: latency-svc-9rq2b [947.475738ms] May 15 22:16:46.388: INFO: Created: latency-svc-np6dk May 15 22:16:46.429: INFO: Got endpoints: latency-svc-np6dk [937.79606ms] May 15 22:16:46.474: INFO: Created: latency-svc-7gzjp May 15 22:16:46.492: INFO: Got endpoints: latency-svc-7gzjp [884.946128ms] May 15 22:16:46.518: INFO: Created: latency-svc-w54mp May 15 22:16:46.586: INFO: Got endpoints: latency-svc-w54mp [943.364273ms] May 15 22:16:46.589: INFO: Created: latency-svc-qbkww May 15 22:16:46.594: INFO: Got endpoints: latency-svc-qbkww [891.588847ms] May 15 22:16:46.660: INFO: Created: latency-svc-4gr8b May 15 22:16:46.672: INFO: Got endpoints: latency-svc-4gr8b [927.491428ms] May 15 22:16:46.717: INFO: Created: latency-svc-2bxzb May 15 22:16:46.732: INFO: Got endpoints: latency-svc-2bxzb [938.216569ms] May 15 22:16:46.765: INFO: Created: latency-svc-6728m May 15 22:16:46.781: INFO: Got endpoints: latency-svc-6728m [928.155355ms] May 15 22:16:46.805: INFO: Created: latency-svc-ld5b5 May 15 22:16:46.855: INFO: Got endpoints: latency-svc-ld5b5 [911.014001ms] May 15 22:16:46.871: INFO: Created: latency-svc-pv68n May 15 22:16:46.885: INFO: Got endpoints: latency-svc-pv68n [874.538497ms] May 15 22:16:46.912: INFO: Created: latency-svc-xq9pt May 15 22:16:46.926: INFO: Got endpoints: latency-svc-xq9pt [860.711757ms] May 15 22:16:47.012: INFO: Created: latency-svc-ltkc4 May 15 22:16:47.034: INFO: Got endpoints: latency-svc-ltkc4 [882.037254ms] May 15 22:16:47.076: INFO: Created: latency-svc-gznlp May 15 22:16:47.107: INFO: Got endpoints: latency-svc-gznlp [945.894632ms] May 15 22:16:47.160: INFO: Created: latency-svc-6sdd2 May 15 22:16:47.179: INFO: Got endpoints: latency-svc-6sdd2 [975.979834ms] May 15 22:16:47.207: INFO: Created: latency-svc-7mwlt May 15 22:16:47.221: INFO: Got endpoints: latency-svc-7mwlt [918.059885ms] May 15 22:16:47.245: INFO: Created: latency-svc-bk5nf May 15 22:16:47.257: INFO: Got endpoints: latency-svc-bk5nf [897.947026ms] May 15 22:16:47.310: INFO: Created: latency-svc-58jm9 May 15 22:16:47.317: INFO: Got endpoints: latency-svc-58jm9 [887.577492ms] May 15 22:16:47.349: INFO: Created: latency-svc-bjczv May 15 22:16:47.366: INFO: Got endpoints: latency-svc-bjczv [874.14996ms] May 15 22:16:47.385: INFO: Created: latency-svc-9ckvg May 15 22:16:47.402: INFO: Got endpoints: latency-svc-9ckvg [816.119321ms] May 15 22:16:47.454: INFO: Created: latency-svc-vwbxn May 15 22:16:47.457: INFO: Got endpoints: latency-svc-vwbxn [862.922327ms] May 15 22:16:47.483: INFO: Created: latency-svc-tpthd May 15 22:16:47.505: INFO: Got endpoints: latency-svc-tpthd [832.074229ms] May 15 22:16:47.535: INFO: Created: latency-svc-mkxln May 15 22:16:47.597: INFO: Got endpoints: latency-svc-mkxln [865.692805ms] May 15 22:16:47.609: INFO: Created: latency-svc-p6nsq May 15 22:16:47.626: INFO: Got endpoints: latency-svc-p6nsq [844.103814ms] May 15 22:16:47.652: INFO: Created: latency-svc-z4hvj May 15 22:16:47.667: INFO: Got endpoints: latency-svc-z4hvj [812.385531ms] May 15 22:16:47.688: INFO: Created: latency-svc-hfk6j May 15 22:16:47.759: INFO: Got endpoints: latency-svc-hfk6j [874.876665ms] May 15 22:16:47.762: INFO: Created: latency-svc-4jk9m May 15 22:16:47.770: INFO: Got endpoints: latency-svc-4jk9m [843.513193ms] May 15 22:16:47.796: INFO: Created: latency-svc-5flkv May 15 22:16:47.812: INFO: Got endpoints: latency-svc-5flkv [777.868174ms] May 15 22:16:47.839: INFO: Created: 
latency-svc-z2cvz May 15 22:16:47.891: INFO: Got endpoints: latency-svc-z2cvz [784.294097ms] May 15 22:16:47.907: INFO: Created: latency-svc-57q2c May 15 22:16:47.933: INFO: Got endpoints: latency-svc-57q2c [754.118981ms] May 15 22:16:47.956: INFO: Created: latency-svc-gh6tn May 15 22:16:47.969: INFO: Got endpoints: latency-svc-gh6tn [747.729771ms] May 15 22:16:48.047: INFO: Created: latency-svc-jb65p May 15 22:16:48.049: INFO: Got endpoints: latency-svc-jb65p [791.421386ms] May 15 22:16:48.077: INFO: Created: latency-svc-972s8 May 15 22:16:48.089: INFO: Got endpoints: latency-svc-972s8 [772.276498ms] May 15 22:16:48.113: INFO: Created: latency-svc-25ndq May 15 22:16:48.125: INFO: Got endpoints: latency-svc-25ndq [759.144221ms] May 15 22:16:48.184: INFO: Created: latency-svc-vcmfb May 15 22:16:48.207: INFO: Created: latency-svc-kq2gr May 15 22:16:48.208: INFO: Got endpoints: latency-svc-vcmfb [805.583324ms] May 15 22:16:48.222: INFO: Got endpoints: latency-svc-kq2gr [764.94577ms] May 15 22:16:48.245: INFO: Created: latency-svc-xrdjq May 15 22:16:48.266: INFO: Got endpoints: latency-svc-xrdjq [761.540965ms] May 15 22:16:48.328: INFO: Created: latency-svc-5kn5d May 15 22:16:48.363: INFO: Got endpoints: latency-svc-5kn5d [765.542386ms] May 15 22:16:48.364: INFO: Created: latency-svc-7f27c May 15 22:16:48.407: INFO: Got endpoints: latency-svc-7f27c [781.446138ms] May 15 22:16:48.466: INFO: Created: latency-svc-b9n4d May 15 22:16:48.468: INFO: Got endpoints: latency-svc-b9n4d [801.11416ms] May 15 22:16:48.510: INFO: Created: latency-svc-lbdlj May 15 22:16:48.523: INFO: Got endpoints: latency-svc-lbdlj [116.332695ms] May 15 22:16:48.545: INFO: Created: latency-svc-n9j75 May 15 22:16:48.633: INFO: Got endpoints: latency-svc-n9j75 [873.73727ms] May 15 22:16:48.636: INFO: Created: latency-svc-hhvcn May 15 22:16:48.644: INFO: Got endpoints: latency-svc-hhvcn [873.73113ms] May 15 22:16:48.669: INFO: Created: latency-svc-z6w9l May 15 22:16:48.686: INFO: Got endpoints: latency-svc-z6w9l [874.242234ms] May 15 22:16:48.707: INFO: Created: latency-svc-vcpdv May 15 22:16:48.723: INFO: Got endpoints: latency-svc-vcpdv [831.635345ms] May 15 22:16:48.771: INFO: Created: latency-svc-7g8rm May 15 22:16:48.773: INFO: Got endpoints: latency-svc-7g8rm [840.351544ms] May 15 22:16:48.804: INFO: Created: latency-svc-bpsmr May 15 22:16:48.813: INFO: Got endpoints: latency-svc-bpsmr [843.949617ms] May 15 22:16:48.837: INFO: Created: latency-svc-wjwmn May 15 22:16:48.855: INFO: Got endpoints: latency-svc-wjwmn [806.776905ms] May 15 22:16:48.927: INFO: Created: latency-svc-qdz2f May 15 22:16:48.941: INFO: Got endpoints: latency-svc-qdz2f [851.257178ms] May 15 22:16:48.972: INFO: Created: latency-svc-jqwc2 May 15 22:16:48.989: INFO: Got endpoints: latency-svc-jqwc2 [863.033965ms] May 15 22:16:49.013: INFO: Created: latency-svc-ptndn May 15 22:16:49.024: INFO: Got endpoints: latency-svc-ptndn [816.619496ms] May 15 22:16:49.076: INFO: Created: latency-svc-kz4xl May 15 22:16:49.090: INFO: Got endpoints: latency-svc-kz4xl [867.979971ms] May 15 22:16:49.131: INFO: Created: latency-svc-l77f8 May 15 22:16:49.157: INFO: Got endpoints: latency-svc-l77f8 [891.123205ms] May 15 22:16:49.232: INFO: Created: latency-svc-xh9b7 May 15 22:16:49.257: INFO: Got endpoints: latency-svc-xh9b7 [894.270642ms] May 15 22:16:49.258: INFO: Created: latency-svc-lx8pl May 15 22:16:49.271: INFO: Got endpoints: latency-svc-lx8pl [802.425277ms] May 15 22:16:49.293: INFO: Created: latency-svc-csx84 May 15 22:16:49.314: INFO: Got endpoints: 
latency-svc-csx84 [790.188785ms] May 15 22:16:49.383: INFO: Created: latency-svc-8h8dz May 15 22:16:49.384: INFO: Got endpoints: latency-svc-8h8dz [750.928608ms] May 15 22:16:49.421: INFO: Created: latency-svc-65rqx May 15 22:16:49.434: INFO: Got endpoints: latency-svc-65rqx [790.266836ms] May 15 22:16:49.467: INFO: Created: latency-svc-b4ks2 May 15 22:16:49.537: INFO: Got endpoints: latency-svc-b4ks2 [851.247358ms] May 15 22:16:49.545: INFO: Created: latency-svc-4znhr May 15 22:16:49.595: INFO: Got endpoints: latency-svc-4znhr [872.414732ms] May 15 22:16:49.681: INFO: Created: latency-svc-95czq May 15 22:16:49.692: INFO: Got endpoints: latency-svc-95czq [918.860266ms] May 15 22:16:49.725: INFO: Created: latency-svc-m72qx May 15 22:16:49.741: INFO: Got endpoints: latency-svc-m72qx [928.14864ms] May 15 22:16:49.842: INFO: Created: latency-svc-25klq May 15 22:16:49.847: INFO: Got endpoints: latency-svc-25klq [992.016857ms] May 15 22:16:49.887: INFO: Created: latency-svc-wmmj9 May 15 22:16:49.901: INFO: Got endpoints: latency-svc-wmmj9 [960.589066ms] May 15 22:16:49.943: INFO: Created: latency-svc-vrvtn May 15 22:16:49.978: INFO: Got endpoints: latency-svc-vrvtn [989.202208ms] May 15 22:16:50.003: INFO: Created: latency-svc-pvmjm May 15 22:16:50.027: INFO: Got endpoints: latency-svc-pvmjm [1.003005065s] May 15 22:16:50.055: INFO: Created: latency-svc-k5djz May 15 22:16:50.073: INFO: Got endpoints: latency-svc-k5djz [983.111387ms] May 15 22:16:50.144: INFO: Created: latency-svc-dlk56 May 15 22:16:50.159: INFO: Got endpoints: latency-svc-dlk56 [1.001456574s] May 15 22:16:50.201: INFO: Created: latency-svc-tqmj7 May 15 22:16:50.220: INFO: Got endpoints: latency-svc-tqmj7 [962.46218ms] May 15 22:16:50.292: INFO: Created: latency-svc-6z4zh May 15 22:16:50.304: INFO: Got endpoints: latency-svc-6z4zh [1.033174805s] May 15 22:16:50.325: INFO: Created: latency-svc-8jnrd May 15 22:16:50.334: INFO: Got endpoints: latency-svc-8jnrd [1.020190539s] May 15 22:16:50.357: INFO: Created: latency-svc-4fdxv May 15 22:16:50.383: INFO: Got endpoints: latency-svc-4fdxv [998.974314ms] May 15 22:16:50.435: INFO: Created: latency-svc-2bz6z May 15 22:16:50.442: INFO: Got endpoints: latency-svc-2bz6z [1.008504637s] May 15 22:16:50.463: INFO: Created: latency-svc-tf7m4 May 15 22:16:50.479: INFO: Got endpoints: latency-svc-tf7m4 [941.615116ms] May 15 22:16:50.511: INFO: Created: latency-svc-w48jj May 15 22:16:50.573: INFO: Got endpoints: latency-svc-w48jj [978.047478ms] May 15 22:16:50.597: INFO: Created: latency-svc-phkxf May 15 22:16:50.611: INFO: Got endpoints: latency-svc-phkxf [919.112548ms] May 15 22:16:50.633: INFO: Created: latency-svc-mtc7r May 15 22:16:50.648: INFO: Got endpoints: latency-svc-mtc7r [906.53441ms] May 15 22:16:50.669: INFO: Created: latency-svc-bjnz7 May 15 22:16:50.711: INFO: Got endpoints: latency-svc-bjnz7 [863.294884ms] May 15 22:16:50.715: INFO: Created: latency-svc-94ngc May 15 22:16:50.745: INFO: Got endpoints: latency-svc-94ngc [843.505124ms] May 15 22:16:50.771: INFO: Created: latency-svc-7mkvw May 15 22:16:50.787: INFO: Got endpoints: latency-svc-7mkvw [808.717399ms] May 15 22:16:50.808: INFO: Created: latency-svc-ng55n May 15 22:16:50.849: INFO: Got endpoints: latency-svc-ng55n [821.76253ms] May 15 22:16:50.862: INFO: Created: latency-svc-6fcth May 15 22:16:50.878: INFO: Got endpoints: latency-svc-6fcth [804.162939ms] May 15 22:16:50.901: INFO: Created: latency-svc-7n9p4 May 15 22:16:50.913: INFO: Got endpoints: latency-svc-7n9p4 [754.598081ms] May 15 22:16:50.937: INFO: Created: 
latency-svc-j84cd May 15 22:16:51.016: INFO: Got endpoints: latency-svc-j84cd [796.001677ms] May 15 22:16:51.018: INFO: Created: latency-svc-p6kcg May 15 22:16:51.028: INFO: Got endpoints: latency-svc-p6kcg [723.841188ms] May 15 22:16:51.053: INFO: Created: latency-svc-sq7ls May 15 22:16:51.105: INFO: Got endpoints: latency-svc-sq7ls [771.441543ms] May 15 22:16:51.160: INFO: Created: latency-svc-wfnz4 May 15 22:16:51.178: INFO: Got endpoints: latency-svc-wfnz4 [795.125591ms] May 15 22:16:51.227: INFO: Created: latency-svc-rw78h May 15 22:16:51.298: INFO: Got endpoints: latency-svc-rw78h [855.235876ms] May 15 22:16:51.300: INFO: Created: latency-svc-nz6g4 May 15 22:16:51.311: INFO: Got endpoints: latency-svc-nz6g4 [831.439637ms] May 15 22:16:51.339: INFO: Created: latency-svc-hpt9f May 15 22:16:51.353: INFO: Got endpoints: latency-svc-hpt9f [779.719021ms] May 15 22:16:51.375: INFO: Created: latency-svc-65qt2 May 15 22:16:51.383: INFO: Got endpoints: latency-svc-65qt2 [771.779877ms] May 15 22:16:51.465: INFO: Created: latency-svc-mmkzz May 15 22:16:51.468: INFO: Got endpoints: latency-svc-mmkzz [820.454841ms] May 15 22:16:51.503: INFO: Created: latency-svc-qw9jw May 15 22:16:51.527: INFO: Got endpoints: latency-svc-qw9jw [816.277421ms] May 15 22:16:51.557: INFO: Created: latency-svc-rrnlz May 15 22:16:51.605: INFO: Got endpoints: latency-svc-rrnlz [859.622788ms] May 15 22:16:51.659: INFO: Created: latency-svc-kncvn May 15 22:16:51.672: INFO: Got endpoints: latency-svc-kncvn [885.402287ms] May 15 22:16:51.695: INFO: Created: latency-svc-crkcs May 15 22:16:51.741: INFO: Got endpoints: latency-svc-crkcs [891.425981ms] May 15 22:16:51.755: INFO: Created: latency-svc-hqvtl May 15 22:16:51.768: INFO: Got endpoints: latency-svc-hqvtl [890.705104ms] May 15 22:16:51.807: INFO: Created: latency-svc-blkbf May 15 22:16:51.829: INFO: Got endpoints: latency-svc-blkbf [915.625804ms] May 15 22:16:51.872: INFO: Created: latency-svc-bbbk7 May 15 22:16:51.875: INFO: Got endpoints: latency-svc-bbbk7 [858.971261ms] May 15 22:16:51.941: INFO: Created: latency-svc-7pwdk May 15 22:16:51.962: INFO: Got endpoints: latency-svc-7pwdk [933.504497ms] May 15 22:16:52.083: INFO: Created: latency-svc-v74hx May 15 22:16:52.106: INFO: Got endpoints: latency-svc-v74hx [1.000849121s] May 15 22:16:52.174: INFO: Created: latency-svc-2pqkj May 15 22:16:52.274: INFO: Got endpoints: latency-svc-2pqkj [1.095616388s] May 15 22:16:52.294: INFO: Created: latency-svc-rmvtr May 15 22:16:52.454: INFO: Got endpoints: latency-svc-rmvtr [1.156187794s] May 15 22:16:52.535: INFO: Created: latency-svc-7rccs May 15 22:16:52.627: INFO: Got endpoints: latency-svc-7rccs [1.316284714s] May 15 22:16:52.631: INFO: Created: latency-svc-gsltc May 15 22:16:52.716: INFO: Got endpoints: latency-svc-gsltc [1.362719607s] May 15 22:16:52.780: INFO: Created: latency-svc-rqj79 May 15 22:16:52.796: INFO: Got endpoints: latency-svc-rqj79 [1.412840772s] May 15 22:16:52.816: INFO: Created: latency-svc-m42pp May 15 22:16:52.826: INFO: Got endpoints: latency-svc-m42pp [1.357964725s] May 15 22:16:52.860: INFO: Created: latency-svc-pwqxd May 15 22:16:52.957: INFO: Got endpoints: latency-svc-pwqxd [1.429471786s] May 15 22:16:53.008: INFO: Created: latency-svc-z7k8c May 15 22:16:53.050: INFO: Got endpoints: latency-svc-z7k8c [1.445617181s] May 15 22:16:53.135: INFO: Created: latency-svc-tcczh May 15 22:16:53.169: INFO: Got endpoints: latency-svc-tcczh [1.49671629s] May 15 22:16:53.201: INFO: Created: latency-svc-5jcdt May 15 22:16:53.262: INFO: Got endpoints: 
latency-svc-5jcdt [1.521263666s] May 15 22:16:53.264: INFO: Created: latency-svc-gl72k May 15 22:16:53.278: INFO: Got endpoints: latency-svc-gl72k [1.509671119s] May 15 22:16:53.298: INFO: Created: latency-svc-kvzfz May 15 22:16:53.319: INFO: Got endpoints: latency-svc-kvzfz [1.490225022s] May 15 22:16:53.340: INFO: Created: latency-svc-z7wnc May 15 22:16:53.356: INFO: Got endpoints: latency-svc-z7wnc [1.481307685s] May 15 22:16:53.412: INFO: Created: latency-svc-4hbpz May 15 22:16:53.434: INFO: Got endpoints: latency-svc-4hbpz [1.472290532s] May 15 22:16:53.470: INFO: Created: latency-svc-s99xj May 15 22:16:53.508: INFO: Got endpoints: latency-svc-s99xj [1.401854868s] May 15 22:16:53.615: INFO: Created: latency-svc-8t8bj May 15 22:16:53.653: INFO: Got endpoints: latency-svc-8t8bj [1.378954569s] May 15 22:16:53.823: INFO: Created: latency-svc-cxttw May 15 22:16:53.824: INFO: Got endpoints: latency-svc-cxttw [1.369890298s] May 15 22:16:53.861: INFO: Created: latency-svc-8747f May 15 22:16:53.893: INFO: Got endpoints: latency-svc-8747f [1.266335214s] May 15 22:16:53.981: INFO: Created: latency-svc-vg2mk May 15 22:16:54.018: INFO: Got endpoints: latency-svc-vg2mk [1.3016616s] May 15 22:16:54.019: INFO: Created: latency-svc-5w7rh May 15 22:16:54.032: INFO: Got endpoints: latency-svc-5w7rh [1.236180802s] May 15 22:16:54.052: INFO: Created: latency-svc-pwx4r May 15 22:16:54.124: INFO: Got endpoints: latency-svc-pwx4r [1.29741872s] May 15 22:16:54.138: INFO: Created: latency-svc-n2nc9 May 15 22:16:54.152: INFO: Got endpoints: latency-svc-n2nc9 [1.195182926s] May 15 22:16:54.175: INFO: Created: latency-svc-kbhpf May 15 22:16:54.188: INFO: Got endpoints: latency-svc-kbhpf [1.137911969s] May 15 22:16:54.215: INFO: Created: latency-svc-7r4q7 May 15 22:16:54.273: INFO: Got endpoints: latency-svc-7r4q7 [1.104485267s] May 15 22:16:54.280: INFO: Created: latency-svc-7qd22 May 15 22:16:54.297: INFO: Got endpoints: latency-svc-7qd22 [1.035381115s] May 15 22:16:54.324: INFO: Created: latency-svc-skk5q May 15 22:16:54.333: INFO: Got endpoints: latency-svc-skk5q [1.055123041s] May 15 22:16:54.355: INFO: Created: latency-svc-hmh9z May 15 22:16:54.364: INFO: Got endpoints: latency-svc-hmh9z [1.044318852s] May 15 22:16:54.423: INFO: Created: latency-svc-tdbrh May 15 22:16:54.426: INFO: Got endpoints: latency-svc-tdbrh [1.069655826s] May 15 22:16:54.454: INFO: Created: latency-svc-jtbnf May 15 22:16:54.472: INFO: Got endpoints: latency-svc-jtbnf [1.038206533s] May 15 22:16:54.508: INFO: Created: latency-svc-pv8qt May 15 22:16:54.573: INFO: Got endpoints: latency-svc-pv8qt [1.064846538s] May 15 22:16:54.574: INFO: Created: latency-svc-s85qv May 15 22:16:54.581: INFO: Got endpoints: latency-svc-s85qv [927.326297ms] May 15 22:16:54.610: INFO: Created: latency-svc-fzjc8 May 15 22:16:54.623: INFO: Got endpoints: latency-svc-fzjc8 [799.483628ms] May 15 22:16:54.648: INFO: Created: latency-svc-f9xqx May 15 22:16:54.660: INFO: Got endpoints: latency-svc-f9xqx [766.524339ms] May 15 22:16:54.704: INFO: Created: latency-svc-p22lr May 15 22:16:54.720: INFO: Got endpoints: latency-svc-p22lr [702.465762ms] May 15 22:16:54.720: INFO: Latencies: [110.971332ms 116.332695ms 249.625386ms 305.740595ms 372.203388ms 400.832382ms 443.281102ms 522.423782ms 572.839966ms 654.060157ms 677.900189ms 702.465762ms 723.841188ms 731.223878ms 741.241962ms 747.729771ms 750.928608ms 754.118981ms 754.598081ms 759.144221ms 761.540965ms 764.94577ms 765.542386ms 766.524339ms 771.441543ms 771.779877ms 772.276498ms 777.868174ms 779.719021ms 781.446138ms 
784.124814ms 784.294097ms 790.188785ms 790.266836ms 791.421386ms 792.175894ms 795.125591ms 796.001677ms 799.483628ms 801.11416ms 802.425277ms 804.162939ms 804.410117ms 805.583324ms 806.46412ms 806.776905ms 808.717399ms 812.385531ms 816.119321ms 816.277421ms 816.619496ms 820.454841ms 821.76253ms 825.200897ms 825.609693ms 828.904064ms 831.210079ms 831.439637ms 831.635345ms 832.074229ms 840.351544ms 843.505124ms 843.513193ms 843.949617ms 844.103814ms 851.247358ms 851.257178ms 854.577862ms 855.235876ms 855.767131ms 858.971261ms 859.622788ms 860.711757ms 861.370551ms 862.922327ms 863.033965ms 863.294884ms 864.813157ms 865.692805ms 867.979971ms 872.414732ms 872.959936ms 873.73113ms 873.73727ms 874.14996ms 874.242234ms 874.538497ms 874.876665ms 881.106148ms 881.781581ms 882.037254ms 884.946128ms 885.402287ms 887.577492ms 890.705104ms 891.123205ms 891.425981ms 891.588847ms 893.854154ms 894.270642ms 897.947026ms 900.178241ms 902.720027ms 906.53441ms 908.711179ms 911.014001ms 915.625804ms 918.059885ms 918.860266ms 919.112548ms 926.220066ms 927.015713ms 927.326297ms 927.491428ms 928.14864ms 928.155355ms 930.200304ms 933.504497ms 937.79606ms 938.216569ms 941.615116ms 943.07923ms 943.364273ms 945.751658ms 945.894632ms 947.475738ms 951.338914ms 958.57243ms 960.022464ms 960.589066ms 962.46218ms 963.846058ms 973.135394ms 975.979834ms 978.047478ms 980.173574ms 981.113314ms 983.111387ms 989.202208ms 992.016857ms 998.974314ms 1.000849121s 1.001456574s 1.003005065s 1.003687422s 1.008504637s 1.019770778s 1.020190539s 1.033174805s 1.035381115s 1.036106295s 1.038206533s 1.044318852s 1.055123041s 1.064846538s 1.069655826s 1.0833007s 1.095616388s 1.104485267s 1.122327183s 1.137911969s 1.142389309s 1.156187794s 1.158775931s 1.165168511s 1.195182926s 1.215954544s 1.236180802s 1.253203094s 1.256578894s 1.266335214s 1.29741872s 1.3016616s 1.309373134s 1.316120659s 1.316284714s 1.357964725s 1.362719607s 1.369890298s 1.378954569s 1.401854868s 1.403272866s 1.412840772s 1.429471786s 1.442955792s 1.445617181s 1.464200201s 1.472290532s 1.481307685s 1.481589353s 1.484740682s 1.490225022s 1.491511722s 1.49671629s 1.509671119s 1.521263666s 1.544594617s 1.572070826s 1.613293797s 1.629290405s] May 15 22:16:54.720: INFO: 50 %ile: 897.947026ms May 15 22:16:54.720: INFO: 90 %ile: 1.401854868s May 15 22:16:54.720: INFO: 99 %ile: 1.613293797s May 15 22:16:54.720: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:16:54.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-5832" for this suite. 
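Each Created/Got endpoints pair above times the interval from creating a Service to its Endpoints object being populated with the svc-latency-rc pod; the 50/90/99 percentiles are then computed over the 200 samples. A sketch of one such Service, with selector and ports assumed to match the replication controller's pod template:

apiVersion: v1
kind: Service
metadata:
  name: latency-svc-example             # the test generates ~200 names like latency-svc-rbgsq
spec:
  selector:
    name: svc-latency-rc                # assumed; must match the RC's pod labels
  ports:
    - port: 80
      targetPort: 8080                  # assumed ports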
• [SLOW TEST:18.378 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":278,"completed":213,"skipped":3494,"failed":0} S ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:16:54.738: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:16:54.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-7032" for this suite. •{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":278,"completed":214,"skipped":3495,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:16:54.843: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:17:12.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-77" for this suite. STEP: Destroying namespace "nsdeletetest-7145" for this suite. May 15 22:17:12.216: INFO: Namespace nsdeletetest-7145 was already deleted STEP: Destroying namespace "nsdeletetest-7547" for this suite. 
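The Lease check earlier in this block only requires that the coordination.k8s.io/v1 API serves create/get/update/delete on Lease objects. For reference, a minimal Lease; all field values here are illustrative:

apiVersion: coordination.k8s.io/v1
kind: Lease
metadata:
  name: example-lease                   # illustrative
  namespace: default
spec:
  holderIdentity: example-holder        # who currently holds the lease
  leaseDurationSeconds: 30
  leaseTransitions: 0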
• [SLOW TEST:17.401 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":278,"completed":215,"skipped":3503,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:17:12.245: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-9096 STEP: creating a selector STEP: Creating the service pods in kubernetes May 15 22:17:12.397: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 15 22:17:36.606: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.154:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-9096 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 15 22:17:36.606: INFO: >>> kubeConfig: /root/.kube/config I0515 22:17:36.639545 6 log.go:172] (0xc000c2a2c0) (0xc000b22640) Create stream I0515 22:17:36.639570 6 log.go:172] (0xc000c2a2c0) (0xc000b22640) Stream added, broadcasting: 1 I0515 22:17:36.640848 6 log.go:172] (0xc000c2a2c0) Reply frame received for 1 I0515 22:17:36.640869 6 log.go:172] (0xc000c2a2c0) (0xc001458000) Create stream I0515 22:17:36.640876 6 log.go:172] (0xc000c2a2c0) (0xc001458000) Stream added, broadcasting: 3 I0515 22:17:36.641614 6 log.go:172] (0xc000c2a2c0) Reply frame received for 3 I0515 22:17:36.641646 6 log.go:172] (0xc000c2a2c0) (0xc000b22780) Create stream I0515 22:17:36.641655 6 log.go:172] (0xc000c2a2c0) (0xc000b22780) Stream added, broadcasting: 5 I0515 22:17:36.642187 6 log.go:172] (0xc000c2a2c0) Reply frame received for 5 I0515 22:17:36.751707 6 log.go:172] (0xc000c2a2c0) Data frame received for 5 I0515 22:17:36.751743 6 log.go:172] (0xc000b22780) (5) Data frame handling I0515 22:17:36.751764 6 log.go:172] (0xc000c2a2c0) Data frame received for 3 I0515 22:17:36.751776 6 log.go:172] (0xc001458000) (3) Data frame handling I0515 22:17:36.751787 6 log.go:172] (0xc001458000) (3) Data frame sent I0515 22:17:36.751799 6 log.go:172] (0xc000c2a2c0) Data frame received for 3 I0515 22:17:36.751810 6 log.go:172] (0xc001458000) (3) Data frame handling I0515 22:17:36.753860 6 log.go:172] (0xc000c2a2c0) Data frame received for 1 I0515 22:17:36.753882 6 log.go:172] (0xc000b22640) (1) Data frame handling I0515 22:17:36.753893 6 
log.go:172] (0xc000b22640) (1) Data frame sent I0515 22:17:36.753905 6 log.go:172] (0xc000c2a2c0) (0xc000b22640) Stream removed, broadcasting: 1 I0515 22:17:36.753927 6 log.go:172] (0xc000c2a2c0) Go away received I0515 22:17:36.753980 6 log.go:172] (0xc000c2a2c0) (0xc000b22640) Stream removed, broadcasting: 1 I0515 22:17:36.753996 6 log.go:172] (0xc000c2a2c0) (0xc001458000) Stream removed, broadcasting: 3 I0515 22:17:36.754003 6 log.go:172] (0xc000c2a2c0) (0xc000b22780) Stream removed, broadcasting: 5 May 15 22:17:36.754: INFO: Found all expected endpoints: [netserver-0] May 15 22:17:36.922: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.226:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-9096 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 15 22:17:36.922: INFO: >>> kubeConfig: /root/.kube/config I0515 22:17:36.952225 6 log.go:172] (0xc001385970) (0xc000cad5e0) Create stream I0515 22:17:36.952252 6 log.go:172] (0xc001385970) (0xc000cad5e0) Stream added, broadcasting: 1 I0515 22:17:36.954142 6 log.go:172] (0xc001385970) Reply frame received for 1 I0515 22:17:36.954181 6 log.go:172] (0xc001385970) (0xc001458280) Create stream I0515 22:17:36.954202 6 log.go:172] (0xc001385970) (0xc001458280) Stream added, broadcasting: 3 I0515 22:17:36.955207 6 log.go:172] (0xc001385970) Reply frame received for 3 I0515 22:17:36.955243 6 log.go:172] (0xc001385970) (0xc00052dd60) Create stream I0515 22:17:36.955257 6 log.go:172] (0xc001385970) (0xc00052dd60) Stream added, broadcasting: 5 I0515 22:17:36.956166 6 log.go:172] (0xc001385970) Reply frame received for 5 I0515 22:17:37.021483 6 log.go:172] (0xc001385970) Data frame received for 3 I0515 22:17:37.021518 6 log.go:172] (0xc001458280) (3) Data frame handling I0515 22:17:37.021547 6 log.go:172] (0xc001458280) (3) Data frame sent I0515 22:17:37.021573 6 log.go:172] (0xc001385970) Data frame received for 3 I0515 22:17:37.021660 6 log.go:172] (0xc001458280) (3) Data frame handling I0515 22:17:37.021689 6 log.go:172] (0xc001385970) Data frame received for 5 I0515 22:17:37.021707 6 log.go:172] (0xc00052dd60) (5) Data frame handling I0515 22:17:37.022581 6 log.go:172] (0xc001385970) Data frame received for 1 I0515 22:17:37.022597 6 log.go:172] (0xc000cad5e0) (1) Data frame handling I0515 22:17:37.022604 6 log.go:172] (0xc000cad5e0) (1) Data frame sent I0515 22:17:37.022656 6 log.go:172] (0xc001385970) (0xc000cad5e0) Stream removed, broadcasting: 1 I0515 22:17:37.022710 6 log.go:172] (0xc001385970) (0xc000cad5e0) Stream removed, broadcasting: 1 I0515 22:17:37.022719 6 log.go:172] (0xc001385970) (0xc001458280) Stream removed, broadcasting: 3 I0515 22:17:37.022723 6 log.go:172] (0xc001385970) (0xc00052dd60) Stream removed, broadcasting: 5 May 15 22:17:37.022: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:17:37.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready I0515 22:17:37.022931 6 log.go:172] (0xc001385970) Go away received STEP: Destroying namespace "pod-network-test-9096" for this suite. 
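The connectivity check above runs `curl http://<podIP>:8080/hostName` from a helper pod; the agnhost netserver answers with its hostname, which is how the test maps responses back to netserver-0 and netserver-1. A standalone Go equivalent of that probe, with the same 15-second budget as the curl flags; the pod IP is copied from the log and is only meaningful inside that cluster.

    package main

    import (
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // main fetches the /hostName endpoint a netserver pod exposes and
    // prints the hostname it reports.
    func main() {
        client := &http.Client{Timeout: 15 * time.Second}
        resp, err := client.Get("http://10.244.1.154:8080/hostName") // pod IP from the log
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, err := io.ReadAll(resp.Body)
        if err != nil {
            panic(err)
        }
        fmt.Printf("endpoint answered as %q\n", body)
    }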
• [SLOW TEST:24.866 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":216,"skipped":3524,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:17:37.111: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 15 22:17:37.638: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 15 22:17:39.652: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725177857, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725177857, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725177857, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725177857, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 15 22:17:42.684: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 15 22:17:43.090: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied 
STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:17:44.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9859" for this suite. STEP: Destroying namespace "webhook-9859-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.570 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":278,"completed":217,"skipped":3528,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:17:44.682: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update May 15 22:17:44.799: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-6340 /api/v1/namespaces/watch-6340/configmaps/e2e-watch-test-resource-version 2e39e518-0f34-4555-9a2e-caaec303771b 16486248 0 2020-05-15 22:17:44 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 15 22:17:44.799: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-6340 /api/v1/namespaces/watch-6340/configmaps/e2e-watch-test-resource-version 2e39e518-0f34-4555-9a2e-caaec303771b 16486249 0 2020-05-15 22:17:44 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:17:44.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6340" for this suite. 
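The watch test above relies on resourceVersion semantics: a watch started from the version returned by the first update replays only events after that point, which is why exactly one MODIFIED and one DELETED event arrive even though the ConfigMap was modified twice. A client-go sketch, assuming a clientset built elsewhere:

    package example

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // watchConfigMapsFrom starts a ConfigMap watch at a specific
    // resourceVersion; events at or before rv are not replayed.
    func watchConfigMapsFrom(ctx context.Context, cs kubernetes.Interface, ns, rv string) error {
        w, err := cs.CoreV1().ConfigMaps(ns).Watch(ctx, metav1.ListOptions{ResourceVersion: rv})
        if err != nil {
            return err
        }
        defer w.Stop()
        for ev := range w.ResultChan() {
            fmt.Printf("Got : %s %v\n", ev.Type, ev.Object)
        }
        return nil
    }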
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":278,"completed":218,"skipped":3536,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:17:44.810: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC May 15 22:17:44.855: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1333' May 15 22:17:48.635: INFO: stderr: "" May 15 22:17:48.635: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. May 15 22:17:49.748: INFO: Selector matched 1 pods for map[app:agnhost] May 15 22:17:49.748: INFO: Found 0 / 1 May 15 22:17:50.649: INFO: Selector matched 1 pods for map[app:agnhost] May 15 22:17:50.649: INFO: Found 0 / 1 May 15 22:17:51.640: INFO: Selector matched 1 pods for map[app:agnhost] May 15 22:17:51.640: INFO: Found 0 / 1 May 15 22:17:52.640: INFO: Selector matched 1 pods for map[app:agnhost] May 15 22:17:52.640: INFO: Found 1 / 1 May 15 22:17:52.640: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods May 15 22:17:52.643: INFO: Selector matched 1 pods for map[app:agnhost] May 15 22:17:52.643: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 15 22:17:52.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-qzrgm --namespace=kubectl-1333 -p {"metadata":{"annotations":{"x":"y"}}}' May 15 22:17:52.746: INFO: stderr: "" May 15 22:17:52.746: INFO: stdout: "pod/agnhost-master-qzrgm patched\n" STEP: checking annotations May 15 22:17:52.760: INFO: Selector matched 1 pods for map[app:agnhost] May 15 22:17:52.760: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:17:52.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1333" for this suite. 
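`kubectl patch` with a bare JSON body, as run above, issues a strategic merge patch, so the annotation is merged into the existing metadata rather than replacing it. The same call through client-go might look like this sketch (clientset construction assumed elsewhere):

    package example

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
    )

    // annotatePod applies the same patch body the test sends via
    // `kubectl patch pod ... -p {"metadata":{"annotations":{"x":"y"}}}`.
    func annotatePod(ctx context.Context, cs kubernetes.Interface, ns, pod string) error {
        patch := []byte(`{"metadata":{"annotations":{"x":"y"}}}`)
        _, err := cs.CoreV1().Pods(ns).Patch(ctx, pod, types.StrategicMergePatchType, patch, metav1.PatchOptions{})
        return err
    }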
• [SLOW TEST:7.980 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1432 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":278,"completed":219,"skipped":3584,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:17:52.790: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with configMap that has name projected-configmap-test-upd-b8dae4b7-4da8-498e-b3f6-c35827998885 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-b8dae4b7-4da8-498e-b3f6-c35827998885 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:19:05.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3962" for this suite. 
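The long "waiting to observe update in volume" phase above reflects how projected ConfigMap volumes propagate: the API update is immediate, but the kubelet rewrites the volume contents on its periodic sync, which can take a minute or more depending on the cluster's cache configuration (an assumption about defaults; watch-based change detection shortens this). The update step itself is a plain read-modify-write, sketched here with an assumed clientset:

    package example

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // bumpConfigMap performs the "Updating configmap ..." step; the pod
    // eventually sees the new value appear in its mounted file.
    func bumpConfigMap(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
        cm, err := cs.CoreV1().ConfigMaps(ns).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return err
        }
        if cm.Data == nil {
            cm.Data = map[string]string{}
        }
        cm.Data["mutation"] = "2" // key/value mirror the log's mutation counter
        _, err = cs.CoreV1().ConfigMaps(ns).Update(ctx, cm, metav1.UpdateOptions{})
        return err
    }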
• [SLOW TEST:72.483 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":220,"skipped":3593,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:19:05.274: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:19:23.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-9279" for this suite. • [SLOW TEST:18.111 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":278,"completed":221,"skipped":3645,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:19:23.386: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 15 22:19:31.528: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 15 22:19:31.538: INFO: Pod pod-with-poststart-http-hook still exists May 15 22:19:33.538: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 15 22:19:33.542: INFO: Pod pod-with-poststart-http-hook still exists May 15 22:19:35.539: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 15 22:19:35.543: INFO: Pod pod-with-poststart-http-hook still exists May 15 22:19:37.538: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 15 22:19:37.543: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:19:37.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-2782" for this suite. • [SLOW TEST:14.166 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":278,"completed":222,"skipped":3663,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:19:37.552: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test hostPath mode May 15 22:19:37.600: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-4784" to be "success or failure" May 15 22:19:37.617: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 16.951906ms May 15 22:19:39.622: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021709377s May 15 22:19:41.642: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041414493s May 15 22:19:43.647: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.047285529s STEP: Saw pod success May 15 22:19:43.647: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" May 15 22:19:43.650: INFO: Trying to get logs from node jerma-worker pod pod-host-path-test container test-container-1: STEP: delete the pod May 15 22:19:43.787: INFO: Waiting for pod pod-host-path-test to disappear May 15 22:19:43.804: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:19:43.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-4784" for this suite. • [SLOW TEST:6.259 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":223,"skipped":3673,"failed":0} S ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:19:43.811: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-b075af80-49aa-42bd-8e5a-8c41927693b0 STEP: Creating a pod to test consume secrets May 15 22:19:43.900: INFO: Waiting up to 5m0s for pod "pod-secrets-92e60904-a59c-44e1-9eb1-e1d012310ab1" in namespace "secrets-3717" to be "success or failure" May 15 22:19:44.129: INFO: Pod "pod-secrets-92e60904-a59c-44e1-9eb1-e1d012310ab1": Phase="Pending", Reason="", readiness=false. Elapsed: 228.863841ms May 15 22:19:46.132: INFO: Pod "pod-secrets-92e60904-a59c-44e1-9eb1-e1d012310ab1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.231543118s May 15 22:19:48.175: INFO: Pod "pod-secrets-92e60904-a59c-44e1-9eb1-e1d012310ab1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.274758371s May 15 22:19:50.179: INFO: Pod "pod-secrets-92e60904-a59c-44e1-9eb1-e1d012310ab1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.279091843s STEP: Saw pod success May 15 22:19:50.179: INFO: Pod "pod-secrets-92e60904-a59c-44e1-9eb1-e1d012310ab1" satisfied condition "success or failure" May 15 22:19:50.183: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-92e60904-a59c-44e1-9eb1-e1d012310ab1 container secret-volume-test: STEP: delete the pod May 15 22:19:50.200: INFO: Waiting for pod pod-secrets-92e60904-a59c-44e1-9eb1-e1d012310ab1 to disappear May 15 22:19:50.206: INFO: Pod pod-secrets-92e60904-a59c-44e1-9eb1-e1d012310ab1 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:19:50.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3717" for this suite. • [SLOW TEST:6.402 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":224,"skipped":3674,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:19:50.213: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 15 22:19:50.350: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5dbf9feb-12f1-47cb-b476-03383e1b5cf1" in namespace "projected-3176" to be "success or failure" May 15 22:19:50.368: INFO: Pod "downwardapi-volume-5dbf9feb-12f1-47cb-b476-03383e1b5cf1": Phase="Pending", Reason="", readiness=false. Elapsed: 17.855141ms May 15 22:19:52.372: INFO: Pod "downwardapi-volume-5dbf9feb-12f1-47cb-b476-03383e1b5cf1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021967344s May 15 22:19:54.376: INFO: Pod "downwardapi-volume-5dbf9feb-12f1-47cb-b476-03383e1b5cf1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.025802836s STEP: Saw pod success May 15 22:19:54.376: INFO: Pod "downwardapi-volume-5dbf9feb-12f1-47cb-b476-03383e1b5cf1" satisfied condition "success or failure" May 15 22:19:54.378: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-5dbf9feb-12f1-47cb-b476-03383e1b5cf1 container client-container: STEP: delete the pod May 15 22:19:54.582: INFO: Waiting for pod downwardapi-volume-5dbf9feb-12f1-47cb-b476-03383e1b5cf1 to disappear May 15 22:19:54.605: INFO: Pod downwardapi-volume-5dbf9feb-12f1-47cb-b476-03383e1b5cf1 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:19:54.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3176" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":225,"skipped":3683,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:19:54.615: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating cluster-info May 15 22:19:54.781: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' May 15 22:19:54.872: INFO: stderr: "" May 15 22:19:54.872: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32770\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32770/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:19:54.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4398" for this suite. 
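The cluster-info check above is a thin wrapper over kubectl; the \x1b[0;32m sequences captured in its stdout are ANSI color codes. A standalone sketch that strips them before asserting on the text; the kubeconfig path mirrors the log, and this is only one way to run the check, not the suite's implementation.

    package main

    import (
        "fmt"
        "os/exec"
        "regexp"
    )

    // main runs `kubectl cluster-info`, removes ANSI color escapes, and
    // verifies the master endpoint line is present.
    func main() {
        out, err := exec.Command("kubectl", "--kubeconfig", "/root/.kube/config", "cluster-info").Output()
        if err != nil {
            panic(err)
        }
        plain := regexp.MustCompile(`\x1b\[[0-9;]*m`).ReplaceAll(out, nil)
        fmt.Println(string(plain))
        if !regexp.MustCompile(`Kubernetes master`).Match(plain) {
            panic("cluster-info did not report a master endpoint")
        }
    }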
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":278,"completed":226,"skipped":3701,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:19:54.879: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 15 22:19:55.008: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a969b3cf-30fb-48fa-be90-fdcc0a4e50af" in namespace "downward-api-2734" to be "success or failure" May 15 22:19:55.031: INFO: Pod "downwardapi-volume-a969b3cf-30fb-48fa-be90-fdcc0a4e50af": Phase="Pending", Reason="", readiness=false. Elapsed: 22.911196ms May 15 22:19:57.035: INFO: Pod "downwardapi-volume-a969b3cf-30fb-48fa-be90-fdcc0a4e50af": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026774493s May 15 22:19:59.038: INFO: Pod "downwardapi-volume-a969b3cf-30fb-48fa-be90-fdcc0a4e50af": Phase="Running", Reason="", readiness=true. Elapsed: 4.030496062s May 15 22:20:01.049: INFO: Pod "downwardapi-volume-a969b3cf-30fb-48fa-be90-fdcc0a4e50af": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.040617362s STEP: Saw pod success May 15 22:20:01.049: INFO: Pod "downwardapi-volume-a969b3cf-30fb-48fa-be90-fdcc0a4e50af" satisfied condition "success or failure" May 15 22:20:01.051: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-a969b3cf-30fb-48fa-be90-fdcc0a4e50af container client-container: STEP: delete the pod May 15 22:20:01.110: INFO: Waiting for pod downwardapi-volume-a969b3cf-30fb-48fa-be90-fdcc0a4e50af to disappear May 15 22:20:01.115: INFO: Pod downwardapi-volume-a969b3cf-30fb-48fa-be90-fdcc0a4e50af no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:20:01.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2734" for this suite. 
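The downward API volume above renders the container's own memory request into a file that the client-container then reads back. A sketch of such a pod in Go types; the image, command, file path, and request size are stand-ins, not the test's exact spec.

    package example

    import (
        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // downwardAPIPod builds a pod whose downwardAPI volume exposes the
    // container's requests.memory as /etc/podinfo/memory_request.
    func downwardAPIPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "client-container",
                    Image:   "gcr.io/kubernetes-e2e-test-images/agnhost:2.8",
                    Command: []string{"sh", "-c", "cat /etc/podinfo/memory_request"},
                    Resources: corev1.ResourceRequirements{
                        Requests: corev1.ResourceList{
                            corev1.ResourceMemory: resource.MustParse("32Mi"),
                        },
                    },
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "podinfo",
                        MountPath: "/etc/podinfo",
                    }},
                }},
                Volumes: []corev1.Volume{{
                    Name: "podinfo",
                    VolumeSource: corev1.VolumeSource{
                        DownwardAPI: &corev1.DownwardAPIVolumeSource{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                Path: "memory_request",
                                ResourceFieldRef: &corev1.ResourceFieldSelector{
                                    ContainerName: "client-container",
                                    Resource:      "requests.memory",
                                },
                            }},
                        },
                    },
                }},
            },
        }
    }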
• [SLOW TEST:6.272 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":227,"skipped":3725,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:20:01.151: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-599 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating statefulset ss in namespace statefulset-599 May 15 22:20:01.282: INFO: Found 0 stateful pods, waiting for 1 May 15 22:20:11.285: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 15 22:20:11.302: INFO: Deleting all statefulset in ns statefulset-599 May 15 22:20:11.308: INFO: Scaling statefulset ss to 0 May 15 22:20:21.412: INFO: Waiting for statefulset status.replicas updated to 0 May 15 22:20:21.415: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:20:21.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-599" for this suite. 
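The scale subresource exercised above lets a client read and write only the replica count, without touching the rest of the StatefulSet spec. With a typed clientset this is a GetScale/UpdateScale pair, sketched here with the clientset assumed:

    package example

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // scaleStatefulSet reads the /scale subresource, modifies
    // .spec.replicas, and writes it back.
    func scaleStatefulSet(ctx context.Context, cs kubernetes.Interface, ns, name string, replicas int32) error {
        scale, err := cs.AppsV1().StatefulSets(ns).GetScale(ctx, name, metav1.GetOptions{})
        if err != nil {
            return err
        }
        scale.Spec.Replicas = replicas
        _, err = cs.AppsV1().StatefulSets(ns).UpdateScale(ctx, name, scale, metav1.UpdateOptions{})
        return err
    }

Because the subresource is a plain autoscaling/v1 Scale object, scaling permission can be granted separately from full write access to the StatefulSet, which is part of why the subresource exists.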
• [SLOW TEST:20.328 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":278,"completed":228,"skipped":3739,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:20:21.479: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 15 22:20:29.684: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 15 22:20:29.704: INFO: Pod pod-with-prestop-exec-hook still exists May 15 22:20:31.704: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 15 22:20:31.708: INFO: Pod pod-with-prestop-exec-hook still exists May 15 22:20:33.704: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 15 22:20:33.708: INFO: Pod pod-with-prestop-exec-hook still exists May 15 22:20:35.704: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 15 22:20:35.707: INFO: Pod pod-with-prestop-exec-hook still exists May 15 22:20:37.704: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 15 22:20:37.707: INFO: Pod pod-with-prestop-exec-hook still exists May 15 22:20:39.704: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 15 22:20:39.708: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:20:39.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-2094" for this suite. 
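The preStop flow above works because the kubelet runs the hook, and waits for it to complete, before stopping the container; the test then observes the hook's side effect on the handler pod while watching the hooked pod disappear. A sketch of such a pod in Go types, assuming client-go >= v0.22 (older releases name the handler type corev1.Handler); the image and callback URL are stand-ins.

    package example

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // podWithPreStopHook builds a pod whose preStop exec handler calls
    // back to a handler pod before the container is stopped.
    func podWithPreStopHook() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-exec-hook"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:  "pod-with-prestop-exec-hook",
                    Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.8",
                    Lifecycle: &corev1.Lifecycle{
                        PreStop: &corev1.LifecycleHandler{
                            Exec: &corev1.ExecAction{
                                Command: []string{"sh", "-c", "curl http://handler-pod:8080/echo?msg=prestop"},
                            },
                        },
                    },
                }},
            },
        }
    }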
• [SLOW TEST:18.244 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":278,"completed":229,"skipped":3755,"failed":0} SSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:20:39.724: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 15 22:20:39.823: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:20:43.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4033" for this suite. •{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":278,"completed":230,"skipped":3762,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:20:44.008: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 15 22:20:44.075: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:20:49.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-4539" for this suite. 
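CustomResourceDefinitions live in the apiextensions.k8s.io group, so the listing above goes through the apiextensions clientset rather than the core one. A minimal sketch assuming the v1 API (available since Kubernetes 1.16; the log does not show which version the test used on this 1.17 cluster):

    package example

    import (
        "context"
        "fmt"

        apiextensionsclientset "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // listCRDs lists the cluster-scoped CustomResourceDefinition objects
    // and prints their names.
    func listCRDs(ctx context.Context, cs apiextensionsclientset.Interface) error {
        crds, err := cs.ApiextensionsV1().CustomResourceDefinitions().List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, crd := range crds.Items {
            fmt.Println(crd.Name)
        }
        return nil
    }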
• [SLOW TEST:5.945 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":278,"completed":231,"skipped":3764,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:20:49.953: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-projected-all-test-volume-40a71bb4-8ca8-43c9-b6ca-5fcc69cf85dd STEP: Creating secret with name secret-projected-all-test-volume-12d006ab-ee02-46e3-a769-8561169c5461 STEP: Creating a pod to test Check all projections for projected volume plugin May 15 22:20:50.239: INFO: Waiting up to 5m0s for pod "projected-volume-f345f6c3-a27c-450f-9993-fe9992bcabdc" in namespace "projected-4898" to be "success or failure" May 15 22:20:50.355: INFO: Pod "projected-volume-f345f6c3-a27c-450f-9993-fe9992bcabdc": Phase="Pending", Reason="", readiness=false. Elapsed: 115.167478ms May 15 22:20:52.358: INFO: Pod "projected-volume-f345f6c3-a27c-450f-9993-fe9992bcabdc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.118895911s May 15 22:20:54.362: INFO: Pod "projected-volume-f345f6c3-a27c-450f-9993-fe9992bcabdc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.122808199s STEP: Saw pod success May 15 22:20:54.362: INFO: Pod "projected-volume-f345f6c3-a27c-450f-9993-fe9992bcabdc" satisfied condition "success or failure" May 15 22:20:54.365: INFO: Trying to get logs from node jerma-worker pod projected-volume-f345f6c3-a27c-450f-9993-fe9992bcabdc container projected-all-volume-test: STEP: delete the pod May 15 22:20:54.382: INFO: Waiting for pod projected-volume-f345f6c3-a27c-450f-9993-fe9992bcabdc to disappear May 15 22:20:54.393: INFO: Pod projected-volume-f345f6c3-a27c-450f-9993-fe9992bcabdc no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:20:54.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4898" for this suite. 
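The "all projections" test above mounts a ConfigMap, a Secret, and downward API metadata through one projected volume, so all three surface under a single mount point. The volume portion of such a spec in Go types; the names are parameters and the item paths are illustrative, not the test's exact layout.

    package example

    import (
        corev1 "k8s.io/api/core/v1"
    )

    // projectedAllVolume merges ConfigMap, Secret, and downward API
    // sources into a single projected volume.
    func projectedAllVolume(configMapName, secretName string) corev1.Volume {
        return corev1.Volume{
            Name: "projected-volume",
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{
                        {ConfigMap: &corev1.ConfigMapProjection{
                            LocalObjectReference: corev1.LocalObjectReference{Name: configMapName},
                        }},
                        {Secret: &corev1.SecretProjection{
                            LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
                        }},
                        {DownwardAPI: &corev1.DownwardAPIProjection{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                Path:     "podname",
                                FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
                            }},
                        }},
                    },
                },
            },
        }
    }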
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":278,"completed":232,"skipped":3778,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:20:54.400: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-b948c88d-e79f-49fb-b9e1-2b5fdba9d3a1 STEP: Creating a pod to test consume secrets May 15 22:20:54.893: INFO: Waiting up to 5m0s for pod "pod-secrets-992daa3c-1c35-446f-8ca5-eb4b79c0daf7" in namespace "secrets-3470" to be "success or failure" May 15 22:20:54.902: INFO: Pod "pod-secrets-992daa3c-1c35-446f-8ca5-eb4b79c0daf7": Phase="Pending", Reason="", readiness=false. Elapsed: 9.412055ms May 15 22:20:56.922: INFO: Pod "pod-secrets-992daa3c-1c35-446f-8ca5-eb4b79c0daf7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029357851s May 15 22:20:58.926: INFO: Pod "pod-secrets-992daa3c-1c35-446f-8ca5-eb4b79c0daf7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033896027s STEP: Saw pod success May 15 22:20:58.927: INFO: Pod "pod-secrets-992daa3c-1c35-446f-8ca5-eb4b79c0daf7" satisfied condition "success or failure" May 15 22:20:58.930: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-992daa3c-1c35-446f-8ca5-eb4b79c0daf7 container secret-volume-test: STEP: delete the pod May 15 22:20:58.978: INFO: Waiting for pod pod-secrets-992daa3c-1c35-446f-8ca5-eb4b79c0daf7 to disappear May 15 22:20:58.981: INFO: Pod pod-secrets-992daa3c-1c35-446f-8ca5-eb4b79c0daf7 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:20:58.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3470" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":233,"skipped":3808,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:20:59.016: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 15 22:20:59.067: INFO: Creating deployment "test-recreate-deployment" May 15 22:20:59.070: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 May 15 22:20:59.160: INFO: deployment "test-recreate-deployment" doesn't have the required revision set May 15 22:21:01.242: INFO: Waiting deployment "test-recreate-deployment" to complete May 15 22:21:01.244: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725178059, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725178059, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725178059, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725178059, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} May 15 22:21:03.247: INFO: Triggering a new rollout for deployment "test-recreate-deployment" May 15 22:21:03.301: INFO: Updating deployment test-recreate-deployment May 15 22:21:03.301: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 15 22:21:03.964: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-7042 /apis/apps/v1/namespaces/deployment-7042/deployments/test-recreate-deployment 878eadae-92d0-40b5-b2be-dcba0537dd86 16487458 2 2020-05-15 22:20:59 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil 
nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0027eed68 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-05-15 22:21:03 +0000 UTC,LastTransitionTime:2020-05-15 22:21:03 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-05-15 22:21:03 +0000 UTC,LastTransitionTime:2020-05-15 22:20:59 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} May 15 22:21:04.015: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-7042 /apis/apps/v1/namespaces/deployment-7042/replicasets/test-recreate-deployment-5f94c574ff 60806343-1676-4abf-96b6-34b059126759 16487455 1 2020-05-15 22:21:03 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 878eadae-92d0-40b5-b2be-dcba0537dd86 0xc0027ef417 0xc0027ef418}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0027ef478 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 15 22:21:04.016: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": May 15 22:21:04.016: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856 deployment-7042 /apis/apps/v1/namespaces/deployment-7042/replicasets/test-recreate-deployment-799c574856 30d15ba7-5226-4ce8-86c1-30e1519d65f1 16487447 2 2020-05-15 22:20:59 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] 
map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 878eadae-92d0-40b5-b2be-dcba0537dd86 0xc0027ef4e7 0xc0027ef4e8}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0027ef558 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 15 22:21:04.021: INFO: Pod "test-recreate-deployment-5f94c574ff-vgdrd" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-vgdrd test-recreate-deployment-5f94c574ff- deployment-7042 /api/v1/namespaces/deployment-7042/pods/test-recreate-deployment-5f94c574ff-vgdrd dbdd4313-e956-4ab0-9158-ee004ceb5203 16487459 0 2020-05-15 22:21:03 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff 60806343-1676-4abf-96b6-34b059126759 0xc0027efdc7 0xc0027efdc8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2hjnm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2hjnm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2hjnm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 22:21:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 22:21:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 22:21:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-15 22:21:03 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-15 22:21:03 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:21:04.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7042" for this suite. • [SLOW TEST:5.014 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":234,"skipped":3829,"failed":0} SS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:21:04.030: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:21:07.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3660" for this suite. 
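The watch spec above fans several watchers out from the same starting resourceVersions and asserts that every one of them reports the events in one identical order. A rough shell analogue of that guarantee, outside the e2e framework (namespace, names, and the diff-based check are illustrative, and far coarser than the suite's resourceVersion comparison):

    # Two watchers over the same resources must see events in the same order.
    NS=watch-demo
    kubectl create namespace "$NS"
    kubectl -n "$NS" get configmaps --watch-only > /tmp/watch-a.log & W1=$!
    kubectl -n "$NS" get configmaps --watch-only > /tmp/watch-b.log & W2=$!
    sleep 2    # let both watches establish before producing events
    for i in 1 2 3 4 5; do
      kubectl -n "$NS" create configmap "cm-$i" --from-literal=key=value
    done
    sleep 2
    kill "$W1" "$W2"
    # Identical files mean both watchers observed the creations in the same order.
    diff /tmp/watch-a.log /tmp/watch-b.log && echo "watch order matches"
    kubectl delete namespace "$NS"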
•{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":278,"completed":235,"skipped":3831,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:21:07.988: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 15 22:21:08.045: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 15 22:21:09.984: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9373 create -f -' May 15 22:21:13.498: INFO: stderr: "" May 15 22:21:13.498: INFO: stdout: "e2e-test-crd-publish-openapi-4669-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" May 15 22:21:13.498: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9373 delete e2e-test-crd-publish-openapi-4669-crds test-cr' May 15 22:21:13.602: INFO: stderr: "" May 15 22:21:13.602: INFO: stdout: "e2e-test-crd-publish-openapi-4669-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" May 15 22:21:13.602: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9373 apply -f -' May 15 22:21:13.875: INFO: stderr: "" May 15 22:21:13.875: INFO: stdout: "e2e-test-crd-publish-openapi-4669-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" May 15 22:21:13.876: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9373 delete e2e-test-crd-publish-openapi-4669-crds test-cr' May 15 22:21:13.990: INFO: stderr: "" May 15 22:21:13.990: INFO: stdout: "e2e-test-crd-publish-openapi-4669-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR May 15 22:21:13.990: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4669-crds' May 15 22:21:14.220: INFO: stderr: "" May 15 22:21:14.220: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4669-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:21:17.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9373" for this suite. 
• [SLOW TEST:9.122 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":278,"completed":236,"skipped":3865,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:21:17.110: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 15 22:21:17.214: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a71b6670-2f15-4ae1-ba85-d825ebfb64da" in namespace "projected-1918" to be "success or failure" May 15 22:21:17.228: INFO: Pod "downwardapi-volume-a71b6670-2f15-4ae1-ba85-d825ebfb64da": Phase="Pending", Reason="", readiness=false. Elapsed: 13.470876ms May 15 22:21:19.232: INFO: Pod "downwardapi-volume-a71b6670-2f15-4ae1-ba85-d825ebfb64da": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017811663s May 15 22:21:21.236: INFO: Pod "downwardapi-volume-a71b6670-2f15-4ae1-ba85-d825ebfb64da": Phase="Running", Reason="", readiness=true. Elapsed: 4.021967166s May 15 22:21:23.240: INFO: Pod "downwardapi-volume-a71b6670-2f15-4ae1-ba85-d825ebfb64da": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.026101135s STEP: Saw pod success May 15 22:21:23.240: INFO: Pod "downwardapi-volume-a71b6670-2f15-4ae1-ba85-d825ebfb64da" satisfied condition "success or failure" May 15 22:21:23.243: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-a71b6670-2f15-4ae1-ba85-d825ebfb64da container client-container: STEP: delete the pod May 15 22:21:23.261: INFO: Waiting for pod downwardapi-volume-a71b6670-2f15-4ae1-ba85-d825ebfb64da to disappear May 15 22:21:23.313: INFO: Pod downwardapi-volume-a71b6670-2f15-4ae1-ba85-d825ebfb64da no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:21:23.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1918" for this suite. 
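The pod in the projected downwardAPI spec above carries a per-item mode, and the test reads the resulting permission bits back out of the container. A pod of roughly that shape, as a sketch (names, image, and the 0400 mode are illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: downwardapi-mode-demo
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox:1.31
        # Print the projected file's mode so it can be read from the pod logs.
        command: ["sh", "-c", "stat -c '%a' /etc/podinfo/podname"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        projected:
          sources:
          - downwardAPI:
              items:
              - path: podname
                fieldRef:
                  fieldPath: metadata.name
                mode: 0400   # the per-item mode under test
    EOF
    # once the pod completes: kubectl logs downwardapi-mode-demo   -> 400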
• [SLOW TEST:6.211 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":237,"skipped":3899,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:21:23.322: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:46 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:21:23.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-1931" for this suite. 
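Table rendering is negotiated through the Accept header, and the spec above expects HTTP 406 Not Acceptable from a backend that cannot produce the meta.k8s.io Table form. The negotiation is easy to observe by hand through kubectl proxy (port and resource path are illustrative):

    kubectl proxy --port=8001 & P=$!
    sleep 1
    # Ask the apiserver to render pods as a meta.k8s.io/v1 Table:
    curl -s -H 'Accept: application/json;as=Table;v=v1;g=meta.k8s.io' \
      http://127.0.0.1:8001/api/v1/namespaces/default/pods | head
    # A backend that does not implement this conversion should answer 406.
    kill "$P"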
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":278,"completed":238,"skipped":3925,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:21:23.380: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 15 22:21:24.250: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 15 22:21:26.262: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725178084, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725178084, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725178084, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725178084, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 15 22:21:29.326: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:21:29.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4999" for this suite. STEP: Destroying namespace "webhook-4999-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.093 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":278,"completed":239,"skipped":3938,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:21:29.473: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs May 15 22:21:29.543: INFO: Waiting up to 5m0s for pod "pod-a697c1b5-f0b0-4aaa-9568-2979cac1ef68" in namespace "emptydir-2446" to be "success or failure" May 15 22:21:29.572: INFO: Pod "pod-a697c1b5-f0b0-4aaa-9568-2979cac1ef68": Phase="Pending", Reason="", readiness=false. Elapsed: 28.778884ms May 15 22:21:31.576: INFO: Pod "pod-a697c1b5-f0b0-4aaa-9568-2979cac1ef68": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033241176s May 15 22:21:33.581: INFO: Pod "pod-a697c1b5-f0b0-4aaa-9568-2979cac1ef68": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038320211s STEP: Saw pod success May 15 22:21:33.581: INFO: Pod "pod-a697c1b5-f0b0-4aaa-9568-2979cac1ef68" satisfied condition "success or failure" May 15 22:21:33.585: INFO: Trying to get logs from node jerma-worker pod pod-a697c1b5-f0b0-4aaa-9568-2979cac1ef68 container test-container: STEP: delete the pod May 15 22:21:33.794: INFO: Waiting for pod pod-a697c1b5-f0b0-4aaa-9568-2979cac1ef68 to disappear May 15 22:21:33.820: INFO: Pod pod-a697c1b5-f0b0-4aaa-9568-2979cac1ef68 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:21:33.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2446" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":240,"skipped":3981,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:21:33.828: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:21:49.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2354" for this suite. • [SLOW TEST:16.170 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":278,"completed":241,"skipped":4008,"failed":0} SSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:21:49.998: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token STEP: reading a file in the container May 15 22:21:54.654: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4608 pod-service-account-7fe94f24-013c-4f06-a108-2c9ecc8d1bf8 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container May 15 22:21:54.959: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4608 pod-service-account-7fe94f24-013c-4f06-a108-2c9ecc8d1bf8 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container May 15 22:21:55.167: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4608 pod-service-account-7fe94f24-013c-4f06-a108-2c9ecc8d1bf8 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:21:55.370: INFO: Waiting up to 3m0s for all 
(but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-4608" for this suite. • [SLOW TEST:5.379 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":278,"completed":242,"skipped":4014,"failed":0} S ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:21:55.377: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 15 22:21:55.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' May 15 22:21:55.663: INFO: stderr: "" May 15 22:21:55.663: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.4\", GitCommit:\"8d8aa39598534325ad77120c120a22b3a990b5ea\", GitTreeState:\"clean\", BuildDate:\"2020-05-06T19:23:43Z\", GoVersion:\"go1.13.10\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.2\", GitCommit:\"59603c6e503c87169aea6106f57b9f242f64df89\", GitTreeState:\"clean\", BuildDate:\"2020-02-07T01:05:17Z\", GoVersion:\"go1.13.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:21:55.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2310" for this suite. 
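The kubectl version spec only asserts that both the client and the server stanzas appear in the command output. The same check is easy to script by hand (file path is illustrative):

    kubectl version -o json > /tmp/version.json
    # Both blocks must be present for the check to pass.
    grep -q '"clientVersion"' /tmp/version.json && \
      grep -q '"serverVersion"' /tmp/version.json && echo "all data is printed"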
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":278,"completed":243,"skipped":4015,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:21:55.672: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 15 22:21:55.759: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b3dc54f5-1955-4f08-8421-8d0e3ab8cf26" in namespace "downward-api-2503" to be "success or failure" May 15 22:21:55.770: INFO: Pod "downwardapi-volume-b3dc54f5-1955-4f08-8421-8d0e3ab8cf26": Phase="Pending", Reason="", readiness=false. Elapsed: 11.440934ms May 15 22:21:57.774: INFO: Pod "downwardapi-volume-b3dc54f5-1955-4f08-8421-8d0e3ab8cf26": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015444681s May 15 22:21:59.777: INFO: Pod "downwardapi-volume-b3dc54f5-1955-4f08-8421-8d0e3ab8cf26": Phase="Running", Reason="", readiness=true. Elapsed: 4.018778101s May 15 22:22:01.782: INFO: Pod "downwardapi-volume-b3dc54f5-1955-4f08-8421-8d0e3ab8cf26": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.02300021s STEP: Saw pod success May 15 22:22:01.782: INFO: Pod "downwardapi-volume-b3dc54f5-1955-4f08-8421-8d0e3ab8cf26" satisfied condition "success or failure" May 15 22:22:01.785: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-b3dc54f5-1955-4f08-8421-8d0e3ab8cf26 container client-container: STEP: delete the pod May 15 22:22:01.804: INFO: Waiting for pod downwardapi-volume-b3dc54f5-1955-4f08-8421-8d0e3ab8cf26 to disappear May 15 22:22:01.847: INFO: Pod downwardapi-volume-b3dc54f5-1955-4f08-8421-8d0e3ab8cf26 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:22:01.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2503" for this suite. 
• [SLOW TEST:6.183 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":244,"skipped":4027,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:22:01.855: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 15 22:22:01.919: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3d0b4292-d0da-480f-86d0-2b7626e66857" in namespace "downward-api-3770" to be "success or failure" May 15 22:22:01.929: INFO: Pod "downwardapi-volume-3d0b4292-d0da-480f-86d0-2b7626e66857": Phase="Pending", Reason="", readiness=false. Elapsed: 9.96635ms May 15 22:22:03.932: INFO: Pod "downwardapi-volume-3d0b4292-d0da-480f-86d0-2b7626e66857": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013700203s May 15 22:22:05.967: INFO: Pod "downwardapi-volume-3d0b4292-d0da-480f-86d0-2b7626e66857": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.048399552s STEP: Saw pod success May 15 22:22:05.967: INFO: Pod "downwardapi-volume-3d0b4292-d0da-480f-86d0-2b7626e66857" satisfied condition "success or failure" May 15 22:22:05.970: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-3d0b4292-d0da-480f-86d0-2b7626e66857 container client-container: STEP: delete the pod May 15 22:22:05.995: INFO: Waiting for pod downwardapi-volume-3d0b4292-d0da-480f-86d0-2b7626e66857 to disappear May 15 22:22:06.042: INFO: Pod downwardapi-volume-3d0b4292-d0da-480f-86d0-2b7626e66857 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:22:06.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3770" for this suite. 
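When a container declares no CPU limit, the downward API resolves limits.cpu to the node's allocatable CPU, which is what the spec above reads back out of the projected file. A sketch of that shape (names, image, and divisor are illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: downwardapi-cpulimit-demo
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox:1.31
        # No resources.limits here, so limits.cpu falls back to the node's
        # allocatable CPU.
        command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
              divisor: 1m   # report the value in millicores
    EOF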
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":245,"skipped":4091,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:22:06.050: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:22:06.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8991" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":278,"completed":246,"skipped":4107,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:22:06.435: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 15 22:22:06.637: INFO: Waiting up to 5m0s for pod "downwardapi-volume-755a728b-9522-4ffd-8795-dbb284fd9c6e" in namespace "projected-2828" to be "success or failure" May 15 22:22:06.715: INFO: Pod "downwardapi-volume-755a728b-9522-4ffd-8795-dbb284fd9c6e": Phase="Pending", Reason="", readiness=false. Elapsed: 77.966114ms May 15 22:22:08.720: INFO: Pod "downwardapi-volume-755a728b-9522-4ffd-8795-dbb284fd9c6e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.082264144s May 15 22:22:10.724: INFO: Pod "downwardapi-volume-755a728b-9522-4ffd-8795-dbb284fd9c6e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.086635137s STEP: Saw pod success May 15 22:22:10.724: INFO: Pod "downwardapi-volume-755a728b-9522-4ffd-8795-dbb284fd9c6e" satisfied condition "success or failure" May 15 22:22:10.727: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-755a728b-9522-4ffd-8795-dbb284fd9c6e container client-container: STEP: delete the pod May 15 22:22:10.949: INFO: Waiting for pod downwardapi-volume-755a728b-9522-4ffd-8795-dbb284fd9c6e to disappear May 15 22:22:10.952: INFO: Pod downwardapi-volume-755a728b-9522-4ffd-8795-dbb284fd9c6e no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:22:10.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2828" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":247,"skipped":4131,"failed":0} SS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:22:10.959: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4261.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-4261.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 15 22:22:17.045: INFO: DNS probes using dns-4261/dns-test-2f8f992b-6319-4532-abcb-d12ca874c850 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:22:17.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4261" for this suite. • [SLOW TEST:6.183 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":278,"completed":248,"skipped":4133,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:22:17.142: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 15 22:22:22.094: INFO: Successfully updated pod "pod-update-activedeadlineseconds-b89a0332-ae92-41af-a441-784df5eda637" May 15 22:22:22.094: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-b89a0332-ae92-41af-a441-784df5eda637" in namespace "pods-6827" to be "terminated due to deadline exceeded" May 15 22:22:22.097: INFO: Pod "pod-update-activedeadlineseconds-b89a0332-ae92-41af-a441-784df5eda637": Phase="Running", Reason="", readiness=true. Elapsed: 3.0534ms May 15 22:22:24.101: INFO: Pod "pod-update-activedeadlineseconds-b89a0332-ae92-41af-a441-784df5eda637": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.007097723s May 15 22:22:24.101: INFO: Pod "pod-update-activedeadlineseconds-b89a0332-ae92-41af-a441-784df5eda637" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:22:24.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6827" for this suite. 
• [SLOW TEST:6.968 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":278,"completed":249,"skipped":4145,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:22:24.111: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-m2n2 STEP: Creating a pod to test atomic-volume-subpath May 15 22:22:24.492: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-m2n2" in namespace "subpath-3289" to be "success or failure" May 15 22:22:24.534: INFO: Pod "pod-subpath-test-configmap-m2n2": Phase="Pending", Reason="", readiness=false. Elapsed: 41.667837ms May 15 22:22:26.570: INFO: Pod "pod-subpath-test-configmap-m2n2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078009014s May 15 22:22:28.574: INFO: Pod "pod-subpath-test-configmap-m2n2": Phase="Running", Reason="", readiness=true. Elapsed: 4.081843956s May 15 22:22:30.578: INFO: Pod "pod-subpath-test-configmap-m2n2": Phase="Running", Reason="", readiness=true. Elapsed: 6.085569815s May 15 22:22:32.582: INFO: Pod "pod-subpath-test-configmap-m2n2": Phase="Running", Reason="", readiness=true. Elapsed: 8.089817779s May 15 22:22:34.587: INFO: Pod "pod-subpath-test-configmap-m2n2": Phase="Running", Reason="", readiness=true. Elapsed: 10.094020481s May 15 22:22:36.591: INFO: Pod "pod-subpath-test-configmap-m2n2": Phase="Running", Reason="", readiness=true. Elapsed: 12.098806893s May 15 22:22:38.596: INFO: Pod "pod-subpath-test-configmap-m2n2": Phase="Running", Reason="", readiness=true. Elapsed: 14.103291s May 15 22:22:40.600: INFO: Pod "pod-subpath-test-configmap-m2n2": Phase="Running", Reason="", readiness=true. Elapsed: 16.107090228s May 15 22:22:42.626: INFO: Pod "pod-subpath-test-configmap-m2n2": Phase="Running", Reason="", readiness=true. Elapsed: 18.133674385s May 15 22:22:44.638: INFO: Pod "pod-subpath-test-configmap-m2n2": Phase="Running", Reason="", readiness=true. Elapsed: 20.145799522s May 15 22:22:46.643: INFO: Pod "pod-subpath-test-configmap-m2n2": Phase="Running", Reason="", readiness=true. Elapsed: 22.150296s May 15 22:22:48.647: INFO: Pod "pod-subpath-test-configmap-m2n2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.154857732s STEP: Saw pod success May 15 22:22:48.647: INFO: Pod "pod-subpath-test-configmap-m2n2" satisfied condition "success or failure" May 15 22:22:48.650: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-configmap-m2n2 container test-container-subpath-configmap-m2n2: STEP: delete the pod May 15 22:22:48.670: INFO: Waiting for pod pod-subpath-test-configmap-m2n2 to disappear May 15 22:22:48.681: INFO: Pod pod-subpath-test-configmap-m2n2 no longer exists STEP: Deleting pod pod-subpath-test-configmap-m2n2 May 15 22:22:48.681: INFO: Deleting pod "pod-subpath-test-configmap-m2n2" in namespace "subpath-3289" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:22:48.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3289" for this suite. • [SLOW TEST:24.620 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":278,"completed":250,"skipped":4177,"failed":0} S ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:22:48.731: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating secret secrets-5578/secret-test-ec749331-4632-4e2f-929a-b2b5a857c3fc STEP: Creating a pod to test consume secrets May 15 22:22:48.791: INFO: Waiting up to 5m0s for pod "pod-configmaps-fe564a3d-4984-449f-8454-39a12273f380" in namespace "secrets-5578" to be "success or failure" May 15 22:22:48.795: INFO: Pod "pod-configmaps-fe564a3d-4984-449f-8454-39a12273f380": Phase="Pending", Reason="", readiness=false. Elapsed: 3.632389ms May 15 22:22:50.799: INFO: Pod "pod-configmaps-fe564a3d-4984-449f-8454-39a12273f380": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007673041s May 15 22:22:52.807: INFO: Pod "pod-configmaps-fe564a3d-4984-449f-8454-39a12273f380": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.0159511s STEP: Saw pod success May 15 22:22:52.807: INFO: Pod "pod-configmaps-fe564a3d-4984-449f-8454-39a12273f380" satisfied condition "success or failure" May 15 22:22:52.810: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-fe564a3d-4984-449f-8454-39a12273f380 container env-test: STEP: delete the pod May 15 22:22:53.221: INFO: Waiting for pod pod-configmaps-fe564a3d-4984-449f-8454-39a12273f380 to disappear May 15 22:22:53.223: INFO: Pod pod-configmaps-fe564a3d-4984-449f-8454-39a12273f380 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:22:53.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5578" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":251,"skipped":4178,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:22:53.228: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 15 22:22:53.301: INFO: Waiting up to 5m0s for pod "downward-api-1ab3df25-9afe-4523-a7c0-1ca4e130713b" in namespace "downward-api-8379" to be "success or failure" May 15 22:22:53.305: INFO: Pod "downward-api-1ab3df25-9afe-4523-a7c0-1ca4e130713b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.581919ms May 15 22:22:55.307: INFO: Pod "downward-api-1ab3df25-9afe-4523-a7c0-1ca4e130713b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006487258s May 15 22:22:57.321: INFO: Pod "downward-api-1ab3df25-9afe-4523-a7c0-1ca4e130713b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020375159s STEP: Saw pod success May 15 22:22:57.321: INFO: Pod "downward-api-1ab3df25-9afe-4523-a7c0-1ca4e130713b" satisfied condition "success or failure" May 15 22:22:57.325: INFO: Trying to get logs from node jerma-worker2 pod downward-api-1ab3df25-9afe-4523-a7c0-1ca4e130713b container dapi-container: STEP: delete the pod May 15 22:22:57.472: INFO: Waiting for pod downward-api-1ab3df25-9afe-4523-a7c0-1ca4e130713b to disappear May 15 22:22:57.530: INFO: Pod downward-api-1ab3df25-9afe-4523-a7c0-1ca4e130713b no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:22:57.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8379" for this suite. 
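The same node-allocatable fallback applies to downward API environment variables: with no limits declared on the container, resourceFieldRef for limits.cpu and limits.memory resolve to the node's allocatable values. A sketch (names and image are illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: downward-env-demo
    spec:
      restartPolicy: Never
      containers:
      - name: dapi-container
        image: busybox:1.31
        command: ["sh", "-c", "echo CPU_LIMIT=$CPU_LIMIT MEMORY_LIMIT=$MEMORY_LIMIT"]
        # No resources.limits set, so both values mirror node allocatable.
        env:
        - name: CPU_LIMIT
          valueFrom:
            resourceFieldRef:
              resource: limits.cpu
        - name: MEMORY_LIMIT
          valueFrom:
            resourceFieldRef:
              resource: limits.memory
    EOF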
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":278,"completed":252,"skipped":4200,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:22:57.535: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container May 15 22:23:03.867: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-4136 PodName:pod-sharedvolume-23a21e01-5265-40e3-aac9-c79910574881 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 15 22:23:03.867: INFO: >>> kubeConfig: /root/.kube/config I0515 22:23:03.901367 6 log.go:172] (0xc0011a0f20) (0xc002951c20) Create stream I0515 22:23:03.901409 6 log.go:172] (0xc0011a0f20) (0xc002951c20) Stream added, broadcasting: 1 I0515 22:23:03.903236 6 log.go:172] (0xc0011a0f20) Reply frame received for 1 I0515 22:23:03.903277 6 log.go:172] (0xc0011a0f20) (0xc00276cfa0) Create stream I0515 22:23:03.903294 6 log.go:172] (0xc0011a0f20) (0xc00276cfa0) Stream added, broadcasting: 3 I0515 22:23:03.904288 6 log.go:172] (0xc0011a0f20) Reply frame received for 3 I0515 22:23:03.904334 6 log.go:172] (0xc0011a0f20) (0xc00276d040) Create stream I0515 22:23:03.904343 6 log.go:172] (0xc0011a0f20) (0xc00276d040) Stream added, broadcasting: 5 I0515 22:23:03.905660 6 log.go:172] (0xc0011a0f20) Reply frame received for 5 I0515 22:23:03.993905 6 log.go:172] (0xc0011a0f20) Data frame received for 3 I0515 22:23:03.993973 6 log.go:172] (0xc00276cfa0) (3) Data frame handling I0515 22:23:03.994012 6 log.go:172] (0xc00276cfa0) (3) Data frame sent I0515 22:23:03.994031 6 log.go:172] (0xc0011a0f20) Data frame received for 3 I0515 22:23:03.994046 6 log.go:172] (0xc00276cfa0) (3) Data frame handling I0515 22:23:03.994185 6 log.go:172] (0xc0011a0f20) Data frame received for 5 I0515 22:23:03.994213 6 log.go:172] (0xc00276d040) (5) Data frame handling I0515 22:23:03.995868 6 log.go:172] (0xc0011a0f20) Data frame received for 1 I0515 22:23:03.995884 6 log.go:172] (0xc002951c20) (1) Data frame handling I0515 22:23:03.995903 6 log.go:172] (0xc002951c20) (1) Data frame sent I0515 22:23:03.995917 6 log.go:172] (0xc0011a0f20) (0xc002951c20) Stream removed, broadcasting: 1 I0515 22:23:03.995970 6 log.go:172] (0xc0011a0f20) Go away received I0515 22:23:03.996000 6 log.go:172] (0xc0011a0f20) (0xc002951c20) Stream removed, broadcasting: 1 I0515 22:23:03.996015 6 log.go:172] (0xc0011a0f20) (0xc00276cfa0) Stream removed, broadcasting: 3 I0515 22:23:03.996029 6 log.go:172] (0xc0011a0f20) (0xc00276d040) Stream removed, broadcasting: 5 May 15 22:23:03.996: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:23:03.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4136" for this suite. • [SLOW TEST:6.469 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":278,"completed":253,"skipped":4212,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:23:04.005: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 15 22:23:04.560: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 15 22:23:06.572: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725178184, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725178184, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725178184, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725178184, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 15 22:23:09.639: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to 
a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypasses the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:23:19.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8749" for this suite. STEP: Destroying namespace "webhook-8749-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:15.897 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":278,"completed":254,"skipped":4221,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:23:19.903: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 May 15 22:23:19.946: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 15 22:23:19.980: INFO: Waiting for terminating namespaces to be deleted...
May 15 22:23:19.982: INFO: Logging pods the kubelet thinks are on node jerma-worker before test May 15 22:23:20.043: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 15 22:23:20.044: INFO: Container kindnet-cni ready: true, restart count 0 May 15 22:23:20.044: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 15 22:23:20.044: INFO: Container kube-proxy ready: true, restart count 0 May 15 22:23:20.044: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test May 15 22:23:20.048: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 15 22:23:20.048: INFO: Container kindnet-cni ready: true, restart count 0 May 15 22:23:20.048: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container status recorded) May 15 22:23:20.048: INFO: Container kube-bench ready: false, restart count 0 May 15 22:23:20.048: INFO: sample-webhook-deployment-5f65f8c764-7w9v7 from webhook-8749 started at 2020-05-15 22:23:04 +0000 UTC (1 container status recorded) May 15 22:23:20.048: INFO: Container sample-webhook ready: true, restart count 0 May 15 22:23:20.048: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 15 22:23:20.048: INFO: Container kube-proxy ready: true, restart count 0 May 15 22:23:20.048: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container status recorded) May 15 22:23:20.048: INFO: Container kube-hunter ready: false, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-3ab620d3-0486-4a0b-a3bb-ffac828c6348 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-3ab620d3-0486-4a0b-a3bb-ffac828c6348 off the node jerma-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-3ab620d3-0486-4a0b-a3bb-ffac828c6348 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:23:36.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3653" for this suite.
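------------------------------
The predicate exercised here treats a host port binding as the triple (hostIP, protocol, hostPort); pods conflict only when all three match. A sketch of the three pod specs from the steps above (pod1 127.0.0.1/TCP, pod2 127.0.0.2/TCP, pod3 127.0.0.2/UDP); the container name and image are assumptions:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "sigs.k8s.io/yaml"
    )

    // makePod binds hostPort 54321 with the given hostIP and protocol. The
    // scheduler only counts a conflict when hostIP, protocol, and hostPort
    // all collide, so all three pods below can land on the same node.
    func makePod(name, hostIP string, proto corev1.Protocol) corev1.Pod {
        return corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: name},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:  "agnhost",                                       // assumption
                    Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.8", // assumption
                    Ports: []corev1.ContainerPort{{
                        ContainerPort: 54321,
                        HostPort:      54321,
                        HostIP:        hostIP,
                        Protocol:      proto,
                    }},
                }},
            },
        }
    }

    func main() {
        pods := []corev1.Pod{
            makePod("pod1", "127.0.0.1", corev1.ProtocolTCP),
            makePod("pod2", "127.0.0.2", corev1.ProtocolTCP),
            makePod("pod3", "127.0.0.2", corev1.ProtocolUDP),
        }
        for _, p := range pods {
            out, err := yaml.Marshal(p)
            if err != nil {
                panic(err)
            }
            fmt.Printf("---\n%s", out)
        }
    }

------------------------------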
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:16.504 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":278,"completed":255,"skipped":4270,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:23:36.407: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 15 22:23:36.464: INFO: Waiting up to 5m0s for pod "downwardapi-volume-86f131a5-4045-4594-b338-fe30703a1516" in namespace "downward-api-4509" to be "success or failure" May 15 22:23:36.479: INFO: Pod "downwardapi-volume-86f131a5-4045-4594-b338-fe30703a1516": Phase="Pending", Reason="", readiness=false. Elapsed: 14.631302ms May 15 22:23:38.484: INFO: Pod "downwardapi-volume-86f131a5-4045-4594-b338-fe30703a1516": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019575623s May 15 22:23:40.488: INFO: Pod "downwardapi-volume-86f131a5-4045-4594-b338-fe30703a1516": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02405101s STEP: Saw pod success May 15 22:23:40.488: INFO: Pod "downwardapi-volume-86f131a5-4045-4594-b338-fe30703a1516" satisfied condition "success or failure" May 15 22:23:40.491: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-86f131a5-4045-4594-b338-fe30703a1516 container client-container: STEP: delete the pod May 15 22:23:40.516: INFO: Waiting for pod downwardapi-volume-86f131a5-4045-4594-b338-fe30703a1516 to disappear May 15 22:23:40.521: INFO: Pod downwardapi-volume-86f131a5-4045-4594-b338-fe30703a1516 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:23:40.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4509" for this suite. 
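------------------------------
The "podname only" pod projects metadata.name into a file through a downwardAPI volume, and the test reads the container's output to confirm the file's contents. A hedged reconstruction; the mount path, image, and command are guesses, only the fieldPath is fixed by the feature:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "sigs.k8s.io/yaml"
    )

    func main() {
        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-podname"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "client-container",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "cat /etc/podinfo/podname"},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "podinfo",
                        MountPath: "/etc/podinfo",
                    }},
                }},
                Volumes: []corev1.Volume{{
                    Name: "podinfo",
                    VolumeSource: corev1.VolumeSource{
                        DownwardAPI: &corev1.DownwardAPIVolumeSource{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                Path: "podname",
                                // fieldRef projects the pod's own name into the file.
                                FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
                            }},
                        },
                    },
                }},
            },
        }
        out, err := yaml.Marshal(pod)
        if err != nil {
            panic(err)
        }
        fmt.Print(string(out))
    }

------------------------------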
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":278,"completed":256,"skipped":4283,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:23:40.530: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-976 STEP: creating a selector STEP: Creating the service pods in kubernetes May 15 22:23:40.590: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 15 22:24:06.747: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.246:8080/dial?request=hostname&protocol=http&host=10.244.1.181&port=8080&tries=1'] Namespace:pod-network-test-976 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 15 22:24:06.748: INFO: >>> kubeConfig: /root/.kube/config I0515 22:24:06.797780 6 log.go:172] (0xc002cde000) (0xc00058b9a0) Create stream I0515 22:24:06.797806 6 log.go:172] (0xc002cde000) (0xc00058b9a0) Stream added, broadcasting: 1 I0515 22:24:06.799416 6 log.go:172] (0xc002cde000) Reply frame received for 1 I0515 22:24:06.799458 6 log.go:172] (0xc002cde000) (0xc001459900) Create stream I0515 22:24:06.799472 6 log.go:172] (0xc002cde000) (0xc001459900) Stream added, broadcasting: 3 I0515 22:24:06.800412 6 log.go:172] (0xc002cde000) Reply frame received for 3 I0515 22:24:06.800437 6 log.go:172] (0xc002cde000) (0xc001459a40) Create stream I0515 22:24:06.800443 6 log.go:172] (0xc002cde000) (0xc001459a40) Stream added, broadcasting: 5 I0515 22:24:06.801795 6 log.go:172] (0xc002cde000) Reply frame received for 5 I0515 22:24:06.872190 6 log.go:172] (0xc002cde000) Data frame received for 3 I0515 22:24:06.872230 6 log.go:172] (0xc001459900) (3) Data frame handling I0515 22:24:06.872265 6 log.go:172] (0xc001459900) (3) Data frame sent I0515 22:24:06.872591 6 log.go:172] (0xc002cde000) Data frame received for 3 I0515 22:24:06.872610 6 log.go:172] (0xc001459900) (3) Data frame handling I0515 22:24:06.872696 6 log.go:172] (0xc002cde000) Data frame received for 5 I0515 22:24:06.872713 6 log.go:172] (0xc001459a40) (5) Data frame handling I0515 22:24:06.874536 6 log.go:172] (0xc002cde000) Data frame received for 1 I0515 22:24:06.874570 6 log.go:172] (0xc00058b9a0) (1) Data frame handling I0515 22:24:06.874598 6 log.go:172] (0xc00058b9a0) (1) Data frame sent I0515 22:24:06.874622 6 log.go:172] (0xc002cde000) (0xc00058b9a0) Stream removed, broadcasting: 1 I0515 22:24:06.874645 6 log.go:172] (0xc002cde000) Go away received I0515 22:24:06.874921 6 log.go:172] (0xc002cde000) (0xc00058b9a0) Stream removed, broadcasting: 1 I0515 22:24:06.874953 6 log.go:172] (0xc002cde000) (0xc001459900) Stream removed, broadcasting: 
3 I0515 22:24:06.874981 6 log.go:172] (0xc002cde000) (0xc001459a40) Stream removed, broadcasting: 5 May 15 22:24:06.875: INFO: Waiting for responses: map[] May 15 22:24:06.891: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.246:8080/dial?request=hostname&protocol=http&host=10.244.2.245&port=8080&tries=1'] Namespace:pod-network-test-976 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 15 22:24:06.891: INFO: >>> kubeConfig: /root/.kube/config I0515 22:24:06.920797 6 log.go:172] (0xc002cde6e0) (0xc000ffe460) Create stream I0515 22:24:06.920822 6 log.go:172] (0xc002cde6e0) (0xc000ffe460) Stream added, broadcasting: 1 I0515 22:24:06.925939 6 log.go:172] (0xc002cde6e0) Reply frame received for 1 I0515 22:24:06.925991 6 log.go:172] (0xc002cde6e0) (0xc001a78140) Create stream I0515 22:24:06.926006 6 log.go:172] (0xc002cde6e0) (0xc001a78140) Stream added, broadcasting: 3 I0515 22:24:06.927716 6 log.go:172] (0xc002cde6e0) Reply frame received for 3 I0515 22:24:06.927740 6 log.go:172] (0xc002cde6e0) (0xc001a78640) Create stream I0515 22:24:06.927757 6 log.go:172] (0xc002cde6e0) (0xc001a78640) Stream added, broadcasting: 5 I0515 22:24:06.931446 6 log.go:172] (0xc002cde6e0) Reply frame received for 5 I0515 22:24:06.986320 6 log.go:172] (0xc002cde6e0) Data frame received for 3 I0515 22:24:06.986340 6 log.go:172] (0xc001a78140) (3) Data frame handling I0515 22:24:06.986352 6 log.go:172] (0xc001a78140) (3) Data frame sent I0515 22:24:06.986996 6 log.go:172] (0xc002cde6e0) Data frame received for 5 I0515 22:24:06.987024 6 log.go:172] (0xc001a78640) (5) Data frame handling I0515 22:24:06.987045 6 log.go:172] (0xc002cde6e0) Data frame received for 3 I0515 22:24:06.987056 6 log.go:172] (0xc001a78140) (3) Data frame handling I0515 22:24:06.988355 6 log.go:172] (0xc002cde6e0) Data frame received for 1 I0515 22:24:06.988400 6 log.go:172] (0xc000ffe460) (1) Data frame handling I0515 22:24:06.988430 6 log.go:172] (0xc000ffe460) (1) Data frame sent I0515 22:24:06.988441 6 log.go:172] (0xc002cde6e0) (0xc000ffe460) Stream removed, broadcasting: 1 I0515 22:24:06.988455 6 log.go:172] (0xc002cde6e0) Go away received I0515 22:24:06.988612 6 log.go:172] (0xc002cde6e0) (0xc000ffe460) Stream removed, broadcasting: 1 I0515 22:24:06.988636 6 log.go:172] (0xc002cde6e0) (0xc001a78140) Stream removed, broadcasting: 3 I0515 22:24:06.988652 6 log.go:172] (0xc002cde6e0) (0xc001a78640) Stream removed, broadcasting: 5 May 15 22:24:06.988: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:24:06.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-976" for this suite. 
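------------------------------
The curl commands above hit the agnhost "dial" endpoint on the host-network test pod, which in turn dials each target pod and reports which hostname answered. Roughly the same probe in plain Go; the JSON field name `responses` follows the framework's "Waiting for responses" parsing, so treat the response shape as an approximation:

    package main

    import (
        "encoding/json"
        "fmt"
        "net/http"
        "net/url"
    )

    // dialResponse approximates the JSON the dial endpoint returns,
    // e.g. {"responses":["pod-hostname"]}.
    type dialResponse struct {
        Responses []string `json:"responses"`
    }

    // probe asks the test pod at hostPod to dial target:8080 over HTTP and
    // report which hostname answered, mirroring the curl in the log.
    func probe(hostPod, target string) ([]string, error) {
        u := url.URL{
            Scheme: "http",
            Host:   hostPod + ":8080",
            Path:   "/dial",
            RawQuery: url.Values{
                "request":  {"hostname"},
                "protocol": {"http"},
                "host":     {target},
                "port":     {"8080"},
                "tries":    {"1"},
            }.Encode(),
        }
        resp, err := http.Get(u.String())
        if err != nil {
            return nil, err
        }
        defer resp.Body.Close()
        var dr dialResponse
        if err := json.NewDecoder(resp.Body).Decode(&dr); err != nil {
            return nil, err
        }
        return dr.Responses, nil
    }

    func main() {
        // Pod IPs taken from the log; they are only reachable in-cluster.
        responses, err := probe("10.244.2.246", "10.244.1.181")
        if err != nil {
            panic(err)
        }
        fmt.Println("responses:", responses)
    }

------------------------------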
• [SLOW TEST:26.466 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":257,"skipped":4301,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:24:06.996: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 15 22:24:07.068: INFO: Waiting up to 5m0s for pod "downward-api-f7648f65-b18b-4b1d-94ba-0dab182f49f2" in namespace "downward-api-2400" to be "success or failure" May 15 22:24:07.086: INFO: Pod "downward-api-f7648f65-b18b-4b1d-94ba-0dab182f49f2": Phase="Pending", Reason="", readiness=false. Elapsed: 17.761597ms May 15 22:24:09.090: INFO: Pod "downward-api-f7648f65-b18b-4b1d-94ba-0dab182f49f2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021923647s May 15 22:24:11.095: INFO: Pod "downward-api-f7648f65-b18b-4b1d-94ba-0dab182f49f2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02647872s STEP: Saw pod success May 15 22:24:11.095: INFO: Pod "downward-api-f7648f65-b18b-4b1d-94ba-0dab182f49f2" satisfied condition "success or failure" May 15 22:24:11.097: INFO: Trying to get logs from node jerma-worker2 pod downward-api-f7648f65-b18b-4b1d-94ba-0dab182f49f2 container dapi-container: STEP: delete the pod May 15 22:24:11.227: INFO: Waiting for pod downward-api-f7648f65-b18b-4b1d-94ba-0dab182f49f2 to disappear May 15 22:24:11.271: INFO: Pod downward-api-f7648f65-b18b-4b1d-94ba-0dab182f49f2 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:24:11.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2400" for this suite. 
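------------------------------
Each of these 'Waiting up to 5m0s for pod ... to be "success or failure"' lines is a poll of the pod's phase until it reaches a terminal state. A sketch of that loop with client-go, assuming a recent client version whose typed calls take a context (the v1.17-era client in this log omits the context argument):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForSuccessOrFailure polls until the pod reaches a terminal phase,
    // mirroring the framework's "success or failure" condition in the log.
    func waitForSuccessOrFailure(cs kubernetes.Interface, ns, name string) error {
        return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
            pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, err
            }
            switch pod.Status.Phase {
            case corev1.PodSucceeded:
                return true, nil
            case corev1.PodFailed:
                return false, fmt.Errorf("pod %s/%s failed", ns, name)
            default:
                return false, nil // still Pending/Running, keep polling
            }
        })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Namespace and pod name taken from the log lines above.
        if err := waitForSuccessOrFailure(cs, "downward-api-2400",
            "downward-api-f7648f65-b18b-4b1d-94ba-0dab182f49f2"); err != nil {
            panic(err)
        }
        fmt.Println("pod succeeded")
    }

------------------------------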
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":278,"completed":258,"skipped":4324,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:24:11.403: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:46 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes May 15 22:24:17.573: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice May 15 22:24:32.665: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:24:32.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2136" for this suite. 
• [SLOW TEST:21.273 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]","total":278,"completed":259,"skipped":4350,"failed":0} SSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:24:32.676: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod May 15 22:24:32.744: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:24:39.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-2382" for this suite. 
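------------------------------
The pod this test builds has a failing init container and restartPolicy Never, so the kubelet marks the whole pod Failed without ever starting the app container. A minimal reconstruction; names, image, and commands are assumed:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "sigs.k8s.io/yaml"
    )

    func main() {
        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-init-fail"},
            Spec: corev1.PodSpec{
                // With RestartPolicy Never, a failing init container is not
                // retried: the pod goes straight to phase Failed and the app
                // container below never starts.
                RestartPolicy: corev1.RestartPolicyNever,
                InitContainers: []corev1.Container{{
                    Name:    "init-fail",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "exit 1"},
                }},
                Containers: []corev1.Container{{
                    Name:    "app",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "echo should never run"},
                }},
            },
        }
        out, err := yaml.Marshal(pod)
        if err != nil {
            panic(err)
        }
        fmt.Print(string(out))
    }

------------------------------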
• [SLOW TEST:6.430 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":278,"completed":260,"skipped":4355,"failed":0} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:24:39.107: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1489 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 15 22:24:39.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-7835' May 15 22:24:39.252: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 15 22:24:39.252: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created [AfterEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1495 May 15 22:24:41.517: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-7835' May 15 22:24:41.675: INFO: stderr: "" May 15 22:24:41.675: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:24:41.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7835" for this suite. 
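------------------------------
The deprecation warning in the stderr above is why `kubectl run --generator=deployment/apps.v1` was later removed; the API-level equivalent is creating an apps/v1 Deployment directly. A sketch with client-go (context-taking signatures assume a recent client; the `run` label mirrors what the generator produced, but the exact labels are an assumption):

    package main

    import (
        "context"
        "fmt"

        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func int32Ptr(i int32) *int32 { return &i }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        labels := map[string]string{"run": "e2e-test-httpd-deployment"}
        dep := &appsv1.Deployment{
            ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-httpd-deployment"},
            Spec: appsv1.DeploymentSpec{
                Replicas: int32Ptr(1),
                Selector: &metav1.LabelSelector{MatchLabels: labels},
                Template: corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec: corev1.PodSpec{
                        Containers: []corev1.Container{{
                            Name:  "e2e-test-httpd-deployment",
                            Image: "docker.io/library/httpd:2.4.38-alpine",
                        }},
                    },
                },
            },
        }
        created, err := cs.AppsV1().Deployments("kubectl-7835").Create(
            context.TODO(), dep, metav1.CreateOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("created deployment", created.Name)
    }

------------------------------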
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance]","total":278,"completed":261,"skipped":4363,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:24:41.682: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 15 22:24:42.801: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 15 22:24:44.886: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725178282, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725178282, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725178282, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725178282, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 15 22:24:47.940: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:24:48.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5663" for this suite. STEP: Destroying namespace "webhook-5663-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.552 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":278,"completed":262,"skipped":4375,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:24:48.234: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes May 15 22:24:48.348: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:24:59.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-165" for this suite. 
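------------------------------
The "setting up watch" / "verifying pod deletion was observed" steps boil down to a field-selector watch on the single pod and consuming Added/Modified/Deleted events from its channel. Sketch with recent client-go signatures; the pod name is an assumption, the namespace is from the log:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/watch"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Watch a single pod by name; creation and graceful deletion show up
        // as Added and Deleted events on the result channel.
        w, err := cs.CoreV1().Pods("pods-165").Watch(context.TODO(), metav1.ListOptions{
            FieldSelector: "metadata.name=pod-submit-remove", // hypothetical pod name
        })
        if err != nil {
            panic(err)
        }
        defer w.Stop()
        for ev := range w.ResultChan() {
            fmt.Println("event:", ev.Type)
            if ev.Type == watch.Deleted {
                return // deletion observed; done
            }
        }
    }

------------------------------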
• [SLOW TEST:11.019 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":278,"completed":263,"skipped":4389,"failed":0} [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:24:59.253: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-a8caebcc-0841-4083-9bf2-7b8897529616 STEP: Creating a pod to test consume configMaps May 15 22:24:59.356: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3e0c3aab-9600-4a72-a6ee-840047d1cd32" in namespace "projected-1519" to be "success or failure" May 15 22:24:59.366: INFO: Pod "pod-projected-configmaps-3e0c3aab-9600-4a72-a6ee-840047d1cd32": Phase="Pending", Reason="", readiness=false. Elapsed: 10.229307ms May 15 22:25:01.370: INFO: Pod "pod-projected-configmaps-3e0c3aab-9600-4a72-a6ee-840047d1cd32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014539273s May 15 22:25:03.374: INFO: Pod "pod-projected-configmaps-3e0c3aab-9600-4a72-a6ee-840047d1cd32": Phase="Running", Reason="", readiness=true. Elapsed: 4.018201999s May 15 22:25:05.378: INFO: Pod "pod-projected-configmaps-3e0c3aab-9600-4a72-a6ee-840047d1cd32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.0216373s STEP: Saw pod success May 15 22:25:05.378: INFO: Pod "pod-projected-configmaps-3e0c3aab-9600-4a72-a6ee-840047d1cd32" satisfied condition "success or failure" May 15 22:25:05.380: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-3e0c3aab-9600-4a72-a6ee-840047d1cd32 container projected-configmap-volume-test: STEP: delete the pod May 15 22:25:05.428: INFO: Waiting for pod pod-projected-configmaps-3e0c3aab-9600-4a72-a6ee-840047d1cd32 to disappear May 15 22:25:05.544: INFO: Pod pod-projected-configmaps-3e0c3aab-9600-4a72-a6ee-840047d1cd32 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:25:05.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1519" for this suite. 
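------------------------------
A projected configMap volume with defaultMode looks like the spec below; defaultMode sets the permission bits on every projected file, which is what the [LinuxOnly] tag is guarding. The mode value, names, image, and command here are assumptions, since the log does not show the manifest:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "sigs.k8s.io/yaml"
    )

    func int32Ptr(i int32) *int32 { return &i }

    func main() {
        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "projected-configmap-volume-test",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "ls -l /etc/projected"},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "projected-volume",
                        MountPath: "/etc/projected",
                    }},
                }},
                Volumes: []corev1.Volume{{
                    Name: "projected-volume",
                    VolumeSource: corev1.VolumeSource{
                        Projected: &corev1.ProjectedVolumeSource{
                            // 0400 makes every projected file owner-read-only;
                            // the exact mode the e2e test uses is not in the log.
                            DefaultMode: int32Ptr(0400),
                            Sources: []corev1.VolumeProjection{{
                                ConfigMap: &corev1.ConfigMapProjection{
                                    LocalObjectReference: corev1.LocalObjectReference{
                                        Name: "projected-configmap-test-volume",
                                    },
                                },
                            }},
                        },
                    },
                }},
            },
        }
        out, err := yaml.Marshal(pod)
        if err != nil {
            panic(err)
        }
        fmt.Print(string(out))
    }

------------------------------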
• [SLOW TEST:6.298 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":264,"skipped":4389,"failed":0} SS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:25:05.551: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating replication controller my-hostname-basic-163c30db-aecb-4e7d-a9c9-664f601e1d6c May 15 22:25:05.682: INFO: Pod name my-hostname-basic-163c30db-aecb-4e7d-a9c9-664f601e1d6c: Found 0 pods out of 1 May 15 22:25:10.685: INFO: Pod name my-hostname-basic-163c30db-aecb-4e7d-a9c9-664f601e1d6c: Found 1 pods out of 1 May 15 22:25:10.685: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-163c30db-aecb-4e7d-a9c9-664f601e1d6c" are running May 15 22:25:10.691: INFO: Pod "my-hostname-basic-163c30db-aecb-4e7d-a9c9-664f601e1d6c-5vnps" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-15 22:25:05 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-15 22:25:09 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-15 22:25:09 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-15 22:25:05 +0000 UTC Reason: Message:}]) May 15 22:25:10.691: INFO: Trying to dial the pod May 15 22:25:15.703: INFO: Controller my-hostname-basic-163c30db-aecb-4e7d-a9c9-664f601e1d6c: Got expected result from replica 1 [my-hostname-basic-163c30db-aecb-4e7d-a9c9-664f601e1d6c-5vnps]: "my-hostname-basic-163c30db-aecb-4e7d-a9c9-664f601e1d6c-5vnps", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:25:15.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-4724" for this suite. 
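------------------------------
The ReplicationController under test is the classic selector-plus-template shape: one replica serving its hostname, which the test then dials ("Trying to dial the pod"). Sketch below; the image, args, and port are assumptions, since the suite uses its own hostname-serving test image:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "sigs.k8s.io/yaml"
    )

    func int32Ptr(i int32) *int32 { return &i }

    func main() {
        name := "my-hostname-basic" // the test suffixes this with a UUID
        labels := map[string]string{"name": name}
        rc := corev1.ReplicationController{
            ObjectMeta: metav1.ObjectMeta{Name: name},
            Spec: corev1.ReplicationControllerSpec{
                Replicas: int32Ptr(1),
                // Selector must match the template labels, or the RC would
                // never count its own pod as a replica.
                Selector: labels,
                Template: &corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec: corev1.PodSpec{
                        Containers: []corev1.Container{{
                            Name:  name,
                            Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.8", // assumption
                            Args:  []string{"serve-hostname"},                      // assumption
                            Ports: []corev1.ContainerPort{{ContainerPort: 9376}},   // assumption
                        }},
                    },
                },
            },
        }
        out, err := yaml.Marshal(rc)
        if err != nil {
            panic(err)
        }
        fmt.Print(string(out))
    }

------------------------------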
• [SLOW TEST:10.159 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":265,"skipped":4391,"failed":0} SSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:25:15.710: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-8000 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet May 15 22:25:15.798: INFO: Found 0 stateful pods, waiting for 3 May 15 22:25:25.804: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 15 22:25:25.804: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 15 22:25:25.804: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 15 22:25:35.804: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 15 22:25:35.804: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 15 22:25:35.804: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true May 15 22:25:35.813: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8000 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 15 22:25:36.059: INFO: stderr: "I0515 22:25:35.950344 3955 log.go:172] (0xc0000f5600) (0xc0006a7ae0) Create stream\nI0515 22:25:35.950406 3955 log.go:172] (0xc0000f5600) (0xc0006a7ae0) Stream added, broadcasting: 1\nI0515 22:25:35.952994 3955 log.go:172] (0xc0000f5600) Reply frame received for 1\nI0515 22:25:35.953056 3955 log.go:172] (0xc0000f5600) (0xc000a62000) Create stream\nI0515 22:25:35.953073 3955 log.go:172] (0xc0000f5600) (0xc000a62000) Stream added, broadcasting: 3\nI0515 22:25:35.954264 3955 log.go:172] (0xc0000f5600) Reply frame received for 3\nI0515 22:25:35.954296 3955 log.go:172] (0xc0000f5600) (0xc0006a7cc0) Create stream\nI0515 22:25:35.954305 3955 log.go:172] (0xc0000f5600) (0xc0006a7cc0) Stream added, broadcasting: 5\nI0515 22:25:35.955299 3955 log.go:172] (0xc0000f5600) Reply frame received for 
5\nI0515 22:25:36.021709 3955 log.go:172] (0xc0000f5600) Data frame received for 5\nI0515 22:25:36.021748 3955 log.go:172] (0xc0006a7cc0) (5) Data frame handling\nI0515 22:25:36.021766 3955 log.go:172] (0xc0006a7cc0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0515 22:25:36.051592 3955 log.go:172] (0xc0000f5600) Data frame received for 5\nI0515 22:25:36.051638 3955 log.go:172] (0xc0006a7cc0) (5) Data frame handling\nI0515 22:25:36.051670 3955 log.go:172] (0xc0000f5600) Data frame received for 3\nI0515 22:25:36.051759 3955 log.go:172] (0xc000a62000) (3) Data frame handling\nI0515 22:25:36.051809 3955 log.go:172] (0xc000a62000) (3) Data frame sent\nI0515 22:25:36.052035 3955 log.go:172] (0xc0000f5600) Data frame received for 3\nI0515 22:25:36.052052 3955 log.go:172] (0xc000a62000) (3) Data frame handling\nI0515 22:25:36.054219 3955 log.go:172] (0xc0000f5600) Data frame received for 1\nI0515 22:25:36.054277 3955 log.go:172] (0xc0006a7ae0) (1) Data frame handling\nI0515 22:25:36.054314 3955 log.go:172] (0xc0006a7ae0) (1) Data frame sent\nI0515 22:25:36.054354 3955 log.go:172] (0xc0000f5600) (0xc0006a7ae0) Stream removed, broadcasting: 1\nI0515 22:25:36.054521 3955 log.go:172] (0xc0000f5600) Go away received\nI0515 22:25:36.054914 3955 log.go:172] (0xc0000f5600) (0xc0006a7ae0) Stream removed, broadcasting: 1\nI0515 22:25:36.054939 3955 log.go:172] (0xc0000f5600) (0xc000a62000) Stream removed, broadcasting: 3\nI0515 22:25:36.054951 3955 log.go:172] (0xc0000f5600) (0xc0006a7cc0) Stream removed, broadcasting: 5\n" May 15 22:25:36.059: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 15 22:25:36.059: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine May 15 22:25:46.092: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order May 15 22:25:56.231: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8000 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 15 22:25:56.472: INFO: stderr: "I0515 22:25:56.366173 3977 log.go:172] (0xc00020ed10) (0xc000924280) Create stream\nI0515 22:25:56.366231 3977 log.go:172] (0xc00020ed10) (0xc000924280) Stream added, broadcasting: 1\nI0515 22:25:56.368840 3977 log.go:172] (0xc00020ed10) Reply frame received for 1\nI0515 22:25:56.368899 3977 log.go:172] (0xc00020ed10) (0xc0009243c0) Create stream\nI0515 22:25:56.368917 3977 log.go:172] (0xc00020ed10) (0xc0009243c0) Stream added, broadcasting: 3\nI0515 22:25:56.370012 3977 log.go:172] (0xc00020ed10) Reply frame received for 3\nI0515 22:25:56.370080 3977 log.go:172] (0xc00020ed10) (0xc000450820) Create stream\nI0515 22:25:56.370108 3977 log.go:172] (0xc00020ed10) (0xc000450820) Stream added, broadcasting: 5\nI0515 22:25:56.370974 3977 log.go:172] (0xc00020ed10) Reply frame received for 5\nI0515 22:25:56.464224 3977 log.go:172] (0xc00020ed10) Data frame received for 5\nI0515 22:25:56.464257 3977 log.go:172] (0xc000450820) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0515 22:25:56.464295 3977 log.go:172] (0xc00020ed10) Data frame received for 3\nI0515 22:25:56.464350 3977 log.go:172] (0xc0009243c0) (3) Data frame handling\nI0515 22:25:56.464401 3977 log.go:172] (0xc0009243c0) 
(3) Data frame sent\nI0515 22:25:56.464430 3977 log.go:172] (0xc00020ed10) Data frame received for 3\nI0515 22:25:56.464447 3977 log.go:172] (0xc0009243c0) (3) Data frame handling\nI0515 22:25:56.464477 3977 log.go:172] (0xc000450820) (5) Data frame sent\nI0515 22:25:56.464496 3977 log.go:172] (0xc00020ed10) Data frame received for 5\nI0515 22:25:56.464507 3977 log.go:172] (0xc000450820) (5) Data frame handling\nI0515 22:25:56.465952 3977 log.go:172] (0xc00020ed10) Data frame received for 1\nI0515 22:25:56.466062 3977 log.go:172] (0xc000924280) (1) Data frame handling\nI0515 22:25:56.466143 3977 log.go:172] (0xc000924280) (1) Data frame sent\nI0515 22:25:56.466179 3977 log.go:172] (0xc00020ed10) (0xc000924280) Stream removed, broadcasting: 1\nI0515 22:25:56.466204 3977 log.go:172] (0xc00020ed10) Go away received\nI0515 22:25:56.466684 3977 log.go:172] (0xc00020ed10) (0xc000924280) Stream removed, broadcasting: 1\nI0515 22:25:56.466724 3977 log.go:172] (0xc00020ed10) (0xc0009243c0) Stream removed, broadcasting: 3\nI0515 22:25:56.466749 3977 log.go:172] (0xc00020ed10) (0xc000450820) Stream removed, broadcasting: 5\n" May 15 22:25:56.472: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 15 22:25:56.472: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 15 22:26:06.489: INFO: Waiting for StatefulSet statefulset-8000/ss2 to complete update May 15 22:26:06.489: INFO: Waiting for Pod statefulset-8000/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 15 22:26:06.489: INFO: Waiting for Pod statefulset-8000/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 15 22:26:06.489: INFO: Waiting for Pod statefulset-8000/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 15 22:26:16.495: INFO: Waiting for StatefulSet statefulset-8000/ss2 to complete update May 15 22:26:16.495: INFO: Waiting for Pod statefulset-8000/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Rolling back to a previous revision May 15 22:26:26.498: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8000 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 15 22:26:26.782: INFO: stderr: "I0515 22:26:26.636335 3999 log.go:172] (0xc000996000) (0xc0006b0460) Create stream\nI0515 22:26:26.636399 3999 log.go:172] (0xc000996000) (0xc0006b0460) Stream added, broadcasting: 1\nI0515 22:26:26.638377 3999 log.go:172] (0xc000996000) Reply frame received for 1\nI0515 22:26:26.638668 3999 log.go:172] (0xc000996000) (0xc00088a000) Create stream\nI0515 22:26:26.638748 3999 log.go:172] (0xc000996000) (0xc00088a000) Stream added, broadcasting: 3\nI0515 22:26:26.639911 3999 log.go:172] (0xc000996000) Reply frame received for 3\nI0515 22:26:26.639977 3999 log.go:172] (0xc000996000) (0xc000986000) Create stream\nI0515 22:26:26.640313 3999 log.go:172] (0xc000996000) (0xc000986000) Stream added, broadcasting: 5\nI0515 22:26:26.641087 3999 log.go:172] (0xc000996000) Reply frame received for 5\nI0515 22:26:26.739963 3999 log.go:172] (0xc000996000) Data frame received for 5\nI0515 22:26:26.739985 3999 log.go:172] (0xc000986000) (5) Data frame handling\nI0515 22:26:26.739999 3999 log.go:172] (0xc000986000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0515 22:26:26.774314 3999 log.go:172] (0xc000996000) Data frame received for 3\nI0515 
22:26:26.774356 3999 log.go:172] (0xc00088a000) (3) Data frame handling\nI0515 22:26:26.774394 3999 log.go:172] (0xc00088a000) (3) Data frame sent\nI0515 22:26:26.774674 3999 log.go:172] (0xc000996000) Data frame received for 5\nI0515 22:26:26.774690 3999 log.go:172] (0xc000986000) (5) Data frame handling\nI0515 22:26:26.775161 3999 log.go:172] (0xc000996000) Data frame received for 3\nI0515 22:26:26.775193 3999 log.go:172] (0xc00088a000) (3) Data frame handling\nI0515 22:26:26.776887 3999 log.go:172] (0xc000996000) Data frame received for 1\nI0515 22:26:26.776922 3999 log.go:172] (0xc0006b0460) (1) Data frame handling\nI0515 22:26:26.776944 3999 log.go:172] (0xc0006b0460) (1) Data frame sent\nI0515 22:26:26.776971 3999 log.go:172] (0xc000996000) (0xc0006b0460) Stream removed, broadcasting: 1\nI0515 22:26:26.777010 3999 log.go:172] (0xc000996000) Go away received\nI0515 22:26:26.777449 3999 log.go:172] (0xc000996000) (0xc0006b0460) Stream removed, broadcasting: 1\nI0515 22:26:26.777474 3999 log.go:172] (0xc000996000) (0xc00088a000) Stream removed, broadcasting: 3\nI0515 22:26:26.777487 3999 log.go:172] (0xc000996000) (0xc000986000) Stream removed, broadcasting: 5\n" May 15 22:26:26.782: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 15 22:26:26.782: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 15 22:26:36.825: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order May 15 22:26:46.905: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8000 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 15 22:26:47.153: INFO: stderr: "I0515 22:26:47.059187 4020 log.go:172] (0xc00063a2c0) (0xc00023b400) Create stream\nI0515 22:26:47.059250 4020 log.go:172] (0xc00063a2c0) (0xc00023b400) Stream added, broadcasting: 1\nI0515 22:26:47.061895 4020 log.go:172] (0xc00063a2c0) Reply frame received for 1\nI0515 22:26:47.061940 4020 log.go:172] (0xc00063a2c0) (0xc000aa4000) Create stream\nI0515 22:26:47.061959 4020 log.go:172] (0xc00063a2c0) (0xc000aa4000) Stream added, broadcasting: 3\nI0515 22:26:47.063232 4020 log.go:172] (0xc00063a2c0) Reply frame received for 3\nI0515 22:26:47.063290 4020 log.go:172] (0xc00063a2c0) (0xc000998000) Create stream\nI0515 22:26:47.063311 4020 log.go:172] (0xc00063a2c0) (0xc000998000) Stream added, broadcasting: 5\nI0515 22:26:47.064236 4020 log.go:172] (0xc00063a2c0) Reply frame received for 5\nI0515 22:26:47.147141 4020 log.go:172] (0xc00063a2c0) Data frame received for 5\nI0515 22:26:47.147174 4020 log.go:172] (0xc000998000) (5) Data frame handling\nI0515 22:26:47.147188 4020 log.go:172] (0xc000998000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0515 22:26:47.147200 4020 log.go:172] (0xc00063a2c0) Data frame received for 5\nI0515 22:26:47.147242 4020 log.go:172] (0xc000998000) (5) Data frame handling\nI0515 22:26:47.147279 4020 log.go:172] (0xc00063a2c0) Data frame received for 3\nI0515 22:26:47.147319 4020 log.go:172] (0xc000aa4000) (3) Data frame handling\nI0515 22:26:47.147334 4020 log.go:172] (0xc000aa4000) (3) Data frame sent\nI0515 22:26:47.147344 4020 log.go:172] (0xc00063a2c0) Data frame received for 3\nI0515 22:26:47.147351 4020 log.go:172] (0xc000aa4000) (3) Data frame handling\nI0515 22:26:47.148527 4020 log.go:172] (0xc00063a2c0) Data frame received for 1\nI0515 22:26:47.148551 4020 log.go:172] 
(0xc00023b400) (1) Data frame handling\nI0515 22:26:47.148564 4020 log.go:172] (0xc00023b400) (1) Data frame sent\nI0515 22:26:47.148585 4020 log.go:172] (0xc00063a2c0) (0xc00023b400) Stream removed, broadcasting: 1\nI0515 22:26:47.148608 4020 log.go:172] (0xc00063a2c0) Go away received\nI0515 22:26:47.148969 4020 log.go:172] (0xc00063a2c0) (0xc00023b400) Stream removed, broadcasting: 1\nI0515 22:26:47.148988 4020 log.go:172] (0xc00063a2c0) (0xc000aa4000) Stream removed, broadcasting: 3\nI0515 22:26:47.148996 4020 log.go:172] (0xc00063a2c0) (0xc000998000) Stream removed, broadcasting: 5\n" May 15 22:26:47.153: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 15 22:26:47.154: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 15 22:26:57.174: INFO: Waiting for StatefulSet statefulset-8000/ss2 to complete update May 15 22:26:57.174: INFO: Waiting for Pod statefulset-8000/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 15 22:26:57.174: INFO: Waiting for Pod statefulset-8000/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 15 22:26:57.174: INFO: Waiting for Pod statefulset-8000/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 15 22:27:07.188: INFO: Waiting for StatefulSet statefulset-8000/ss2 to complete update May 15 22:27:07.188: INFO: Waiting for Pod statefulset-8000/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 15 22:27:17.182: INFO: Waiting for StatefulSet statefulset-8000/ss2 to complete update May 15 22:27:17.182: INFO: Waiting for Pod statefulset-8000/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 15 22:27:27.183: INFO: Deleting all statefulset in ns statefulset-8000 May 15 22:27:27.187: INFO: Scaling statefulset ss2 to 0 May 15 22:27:47.214: INFO: Waiting for statefulset status.replicas updated to 0 May 15 22:27:47.217: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:27:47.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8000" for this suite. 
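------------------------------
Both the roll-forward (2.4.38-alpine to 2.4.39-alpine) and the rollback above are just template edits: each one mints a new controller revision (the ss2-65c7964b94/ss2-84f9d6bf57 hashes in the log) and the controller replaces pods in reverse ordinal order. A sketch of driving that with client-go, using RetryOnConflict around the read-modify-write; signatures assume a recent client:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/util/retry"
    )

    // setImage updates the first container's image in the StatefulSet
    // template. RetryOnConflict re-reads and re-applies the change if another
    // writer updated the object between our Get and Update.
    func setImage(cs kubernetes.Interface, ns, name, image string) error {
        return retry.RetryOnConflict(retry.DefaultRetry, func() error {
            ss, err := cs.AppsV1().StatefulSets(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return err
            }
            ss.Spec.Template.Spec.Containers[0].Image = image
            _, err = cs.AppsV1().StatefulSets(ns).Update(context.TODO(), ss, metav1.UpdateOptions{})
            return err
        })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ns, name := "statefulset-8000", "ss2" // from the log
        // Roll forward, then "roll back" by restoring the previous image;
        // each edit creates a new controller revision.
        if err := setImage(cs, ns, name, "docker.io/library/httpd:2.4.39-alpine"); err != nil {
            panic(err)
        }
        if err := setImage(cs, ns, name, "docker.io/library/httpd:2.4.38-alpine"); err != nil {
            panic(err)
        }
        fmt.Println("update and rollback requested")
    }

------------------------------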
• [SLOW TEST:151.529 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":278,"completed":266,"skipped":4394,"failed":0} S ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:27:47.240: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller May 15 22:27:47.340: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3188' May 15 22:27:47.659: INFO: stderr: "" May 15 22:27:47.659: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 15 22:27:47.659: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3188' May 15 22:27:47.777: INFO: stderr: "" May 15 22:27:47.777: INFO: stdout: "update-demo-nautilus-jh8fm update-demo-nautilus-wskpt " May 15 22:27:47.777: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jh8fm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3188' May 15 22:27:47.863: INFO: stderr: "" May 15 22:27:47.863: INFO: stdout: "" May 15 22:27:47.863: INFO: update-demo-nautilus-jh8fm is created but not running May 15 22:27:52.863: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3188' May 15 22:27:52.996: INFO: stderr: "" May 15 22:27:52.996: INFO: stdout: "update-demo-nautilus-jh8fm update-demo-nautilus-wskpt " May 15 22:27:52.996: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jh8fm -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3188' May 15 22:27:53.110: INFO: stderr: "" May 15 22:27:53.110: INFO: stdout: "true" May 15 22:27:53.111: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jh8fm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3188' May 15 22:27:53.230: INFO: stderr: "" May 15 22:27:53.230: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 15 22:27:53.230: INFO: validating pod update-demo-nautilus-jh8fm May 15 22:27:53.240: INFO: got data: { "image": "nautilus.jpg" } May 15 22:27:53.240: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 15 22:27:53.240: INFO: update-demo-nautilus-jh8fm is verified up and running May 15 22:27:53.240: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wskpt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3188' May 15 22:27:53.350: INFO: stderr: "" May 15 22:27:53.350: INFO: stdout: "true" May 15 22:27:53.350: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wskpt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3188' May 15 22:27:53.441: INFO: stderr: "" May 15 22:27:53.441: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 15 22:27:53.441: INFO: validating pod update-demo-nautilus-wskpt May 15 22:27:53.445: INFO: got data: { "image": "nautilus.jpg" } May 15 22:27:53.445: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 15 22:27:53.445: INFO: update-demo-nautilus-wskpt is verified up and running STEP: using delete to clean up resources May 15 22:27:53.445: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3188' May 15 22:27:53.570: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n"
May 15 22:27:53.570: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
May 15 22:27:53.570: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3188'
May 15 22:27:53.734: INFO: stderr: "No resources found in kubectl-3188 namespace.\n"
May 15 22:27:53.734: INFO: stdout: ""
May 15 22:27:53.734: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3188 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
May 15 22:27:53.836: INFO: stderr: ""
May 15 22:27:53.836: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 15 22:27:53.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3188" for this suite.
• [SLOW TEST:6.604 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":278,"completed":267,"skipped":4395,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 15 22:27:53.844: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating all guestbook components
May 15 22:27:54.122: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend
May 15 22:27:54.122: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1953'
May 15 22:27:54.943: INFO: stderr: ""
May 15 22:27:54.943: INFO: stdout: "service/agnhost-slave created\n"
May 15 22:27:54.943: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend
May 15 22:27:54.943: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1953'
May 15 22:27:55.242: INFO: stderr: ""
May 15 22:27:55.242: INFO: stdout: "service/agnhost-master created\n"
May 15 22:27:55.243: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
May 15 22:27:55.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1953'
May 15 22:27:55.647: INFO: stderr: ""
May 15 22:27:55.647: INFO: stdout: "service/frontend created\n"
May 15 22:27:55.648: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
May 15 22:27:55.648: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1953'
May 15 22:27:55.868: INFO: stderr: ""
May 15 22:27:55.868: INFO: stdout: "deployment.apps/frontend created\n"
May 15 22:27:55.868: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
May 15 22:27:55.868: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1953'
May 15 22:27:56.228: INFO: stderr: ""
May 15 22:27:56.228: INFO: stdout: "deployment.apps/agnhost-master created\n"
May 15 22:27:56.229: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
May 15 22:27:56.229: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1953'
May 15 22:27:56.480: INFO: stderr: ""
May 15 22:27:56.480: INFO: stdout: "deployment.apps/agnhost-slave created\n"
STEP: validating guestbook app
May 15 22:27:56.480: INFO: Waiting for all frontend pods to be Running.
May 15 22:28:06.531: INFO: Waiting for frontend to serve content.
May 15 22:28:06.541: INFO: Trying to add a new entry to the guestbook.
May 15 22:28:06.550: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
May 15 22:28:06.558: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1953'
May 15 22:28:06.727: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated.
The resource may continue to run on the cluster indefinitely.\n" May 15 22:28:06.727: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources May 15 22:28:06.728: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1953' May 15 22:28:06.949: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 15 22:28:06.949: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources May 15 22:28:06.949: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1953' May 15 22:28:07.140: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 15 22:28:07.141: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources May 15 22:28:07.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1953' May 15 22:28:07.277: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 15 22:28:07.277: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources May 15 22:28:07.278: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1953' May 15 22:28:07.369: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 15 22:28:07.369: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources May 15 22:28:07.369: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1953' May 15 22:28:07.466: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 15 22:28:07.466: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:28:07.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1953" for this suite. 
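After creating the six guestbook objects, the test waits for the three frontend replicas to run, writes an entry through the frontend, reads it back, and then force-deletes everything; the six repeated "Immediate deletion" warnings come from --grace-period=0 --force. Outside the harness, the readiness part of that validation can be approximated with standard kubectl, assuming the manifests logged above:

kubectl --namespace=kubectl-1953 rollout status deployment/frontend
kubectl --namespace=kubectl-1953 get pods -l app=guestbook,tier=frontend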
• [SLOW TEST:13.629 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:380 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":278,"completed":268,"skipped":4418,"failed":0} SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:28:07.473: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs May 15 22:28:07.599: INFO: Waiting up to 5m0s for pod "pod-3e3f8668-7034-4b3a-9a27-392e17aa360c" in namespace "emptydir-5409" to be "success or failure" May 15 22:28:07.642: INFO: Pod "pod-3e3f8668-7034-4b3a-9a27-392e17aa360c": Phase="Pending", Reason="", readiness=false. Elapsed: 42.472331ms May 15 22:28:09.805: INFO: Pod "pod-3e3f8668-7034-4b3a-9a27-392e17aa360c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.205401675s May 15 22:28:11.809: INFO: Pod "pod-3e3f8668-7034-4b3a-9a27-392e17aa360c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.209660772s May 15 22:28:13.813: INFO: Pod "pod-3e3f8668-7034-4b3a-9a27-392e17aa360c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.213390369s STEP: Saw pod success May 15 22:28:13.813: INFO: Pod "pod-3e3f8668-7034-4b3a-9a27-392e17aa360c" satisfied condition "success or failure" May 15 22:28:13.815: INFO: Trying to get logs from node jerma-worker2 pod pod-3e3f8668-7034-4b3a-9a27-392e17aa360c container test-container: STEP: delete the pod May 15 22:28:13.857: INFO: Waiting for pod pod-3e3f8668-7034-4b3a-9a27-392e17aa360c to disappear May 15 22:28:13.871: INFO: Pod pod-3e3f8668-7034-4b3a-9a27-392e17aa360c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:28:13.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5409" for this suite. 
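The "(root,0666,tmpfs)" in the test name encodes the parameters: run as root, create a file with mode 0666, on a tmpfs-backed emptyDir (medium: Memory). The real fixture uses the e2e mounttest container; a minimal stand-alone sketch with busybox (all names assumed):

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo          # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    # Prove the mount is tmpfs and that a 0666 file can be created on it as root.
    command: ["sh", "-c", "mount | grep /data && touch /data/f && chmod 0666 /data/f && stat -c '%a' /data/f"]
    volumeMounts:
    - name: scratch
      mountPath: /data
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory                 # the "tmpfs" in the test name

The pod reaching Succeeded, which is what the "success or failure" wait above checks, means every command exited 0.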
• [SLOW TEST:6.403 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":269,"skipped":4426,"failed":0} S ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:28:13.876: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-99e86d6e-01f4-4aab-ad04-76dfc471e3ab STEP: Creating configMap with name cm-test-opt-upd-be73508a-122f-4c5a-82a4-cacbafd028c9 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-99e86d6e-01f4-4aab-ad04-76dfc471e3ab STEP: Updating configmap cm-test-opt-upd-be73508a-122f-4c5a-82a4-cacbafd028c9 STEP: Creating configMap with name cm-test-opt-create-4e8836e9-9565-4cf4-811a-b0aaf1aa1333 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:29:40.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1174" for this suite. 
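This test mounts two configMaps into one pod through a projected volume with optional sources, then deletes one, updates the other, and creates a third, expecting the kubelet to fold all three changes into the mounted files; the ~86 s runtime is largely waiting on the kubelet's periodic volume refresh. A minimal sketch of the volume shape (shortened names; the real test uses generated UUID suffixes):

apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo            # hypothetical name
spec:
  containers:
  - name: viewer
    image: busybox:1.29
    command: ["sh", "-c", "while true; do cat /etc/cm/* 2>/dev/null; sleep 5; done"]
    volumeMounts:
    - name: cm-vol
      mountPath: /etc/cm
  volumes:
  - name: cm-vol
    projected:
      sources:
      - configMap:
          name: cm-test-opt-del      # deleted mid-test; optional keeps the volume healthy
          optional: true
      - configMap:
          name: cm-test-opt-upd      # updated mid-test; new data appears under /etc/cm
          optional: true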
• [SLOW TEST:86.568 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":270,"skipped":4427,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:29:40.445: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium May 15 22:29:40.543: INFO: Waiting up to 5m0s for pod "pod-a042736a-7f4e-458f-bebe-212d15664dad" in namespace "emptydir-1094" to be "success or failure" May 15 22:29:40.561: INFO: Pod "pod-a042736a-7f4e-458f-bebe-212d15664dad": Phase="Pending", Reason="", readiness=false. Elapsed: 18.545514ms May 15 22:29:42.695: INFO: Pod "pod-a042736a-7f4e-458f-bebe-212d15664dad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.152443631s May 15 22:29:44.699: INFO: Pod "pod-a042736a-7f4e-458f-bebe-212d15664dad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.156515476s STEP: Saw pod success May 15 22:29:44.699: INFO: Pod "pod-a042736a-7f4e-458f-bebe-212d15664dad" satisfied condition "success or failure" May 15 22:29:44.702: INFO: Trying to get logs from node jerma-worker2 pod pod-a042736a-7f4e-458f-bebe-212d15664dad container test-container: STEP: delete the pod May 15 22:29:44.961: INFO: Waiting for pod pod-a042736a-7f4e-458f-bebe-212d15664dad to disappear May 15 22:29:45.106: INFO: Pod pod-a042736a-7f4e-458f-bebe-212d15664dad no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:29:45.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1094" for this suite. 
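The "(root,0777,default)" variant differs from the tmpfs case above only in the medium: leaving emptyDir.medium unset backs the volume with node-local storage instead of RAM. Sketch under the same assumptions as before:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-default-demo        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["sh", "-c", "touch /data/f && chmod 0777 /data/f && stat -c '%a' /data/f"]
    volumeMounts:
    - name: scratch
      mountPath: /data
  volumes:
  - name: scratch
    emptyDir: {}                     # no medium set: the node's default storage, not tmpfs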
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":271,"skipped":4459,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:29:45.114: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 15 22:29:45.849: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 15 22:29:47.907: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725178585, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725178585, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725178586, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725178585, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 15 22:29:49.910: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725178585, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725178585, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725178586, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725178585, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 15 22:29:52.941: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 15 22:29:52.945: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-1385-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:29:54.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8778" for this suite. STEP: Destroying namespace "webhook-8778-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.100 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":278,"completed":272,"skipped":4481,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:29:54.214: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 15 22:29:54.292: INFO: Waiting up to 5m0s for pod "downwardapi-volume-79dccba5-c9c6-4338-8991-71d9bc75b44b" in namespace "downward-api-2085" to be "success or failure" May 15 22:29:54.296: INFO: Pod "downwardapi-volume-79dccba5-c9c6-4338-8991-71d9bc75b44b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043165ms May 15 22:29:56.303: INFO: Pod "downwardapi-volume-79dccba5-c9c6-4338-8991-71d9bc75b44b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010441201s May 15 22:29:58.307: INFO: Pod "downwardapi-volume-79dccba5-c9c6-4338-8991-71d9bc75b44b": Phase="Running", Reason="", readiness=true. Elapsed: 4.014524136s May 15 22:30:00.311: INFO: Pod "downwardapi-volume-79dccba5-c9c6-4338-8991-71d9bc75b44b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.018947481s STEP: Saw pod success May 15 22:30:00.311: INFO: Pod "downwardapi-volume-79dccba5-c9c6-4338-8991-71d9bc75b44b" satisfied condition "success or failure" May 15 22:30:00.315: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-79dccba5-c9c6-4338-8991-71d9bc75b44b container client-container: STEP: delete the pod May 15 22:30:00.381: INFO: Waiting for pod downwardapi-volume-79dccba5-c9c6-4338-8991-71d9bc75b44b to disappear May 15 22:30:00.384: INFO: Pod downwardapi-volume-79dccba5-c9c6-4338-8991-71d9bc75b44b no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:30:00.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2085" for this suite. • [SLOW TEST:6.176 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":273,"skipped":4492,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:30:00.391: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 15 22:30:00.436: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota May 15 22:30:02.485: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:30:03.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-873" for this suite. 
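The quota interaction above needs only two objects: a ResourceQuota capping the namespace at two pods and a ReplicationController asking for three. The RC creates two pods, the third is rejected by quota admission, and the controller surfaces that as a ReplicaFailure condition in status.conditions (visible via kubectl describe rc condition-test); scaling replicas down to 2 clears the condition, which is the test's final step. Sketch (the container spec is assumed, not from the log):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: condition-test
spec:
  hard:
    pods: "2"                        # at most two pods in the namespace
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: condition-test
spec:
  replicas: 3                        # one more than the quota allows
  selector:
    name: condition-test
  template:
    metadata:
      labels:
        name: condition-test
    spec:
      containers:
      - name: httpd                  # assumed container
        image: httpd:2.4.38-alpine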
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":278,"completed":274,"skipped":4505,"failed":0} ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:30:03.517: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-c3300a33-18db-4185-b9bd-9c2511eab1d0 in namespace container-probe-465 May 15 22:30:07.907: INFO: Started pod liveness-c3300a33-18db-4185-b9bd-9c2511eab1d0 in namespace container-probe-465 STEP: checking the pod's current state and verifying that restartCount is present May 15 22:30:07.910: INFO: Initial restart count of pod liveness-c3300a33-18db-4185-b9bd-9c2511eab1d0 is 0 May 15 22:30:29.979: INFO: Restart count of pod container-probe-465/liveness-c3300a33-18db-4185-b9bd-9c2511eab1d0 is now 1 (22.068981161s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:30:30.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-465" for this suite. 
• [SLOW TEST:26.512 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":275,"skipped":4505,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:30:30.030: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-08df9c3b-36b2-447d-a2c0-0ddacad2d2b8 STEP: Creating a pod to test consume secrets May 15 22:30:30.138: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f5652f2d-e7c4-4141-b6f0-090d4b7ed69a" in namespace "projected-5951" to be "success or failure" May 15 22:30:30.329: INFO: Pod "pod-projected-secrets-f5652f2d-e7c4-4141-b6f0-090d4b7ed69a": Phase="Pending", Reason="", readiness=false. Elapsed: 191.587789ms May 15 22:30:32.340: INFO: Pod "pod-projected-secrets-f5652f2d-e7c4-4141-b6f0-090d4b7ed69a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.202246373s May 15 22:30:34.343: INFO: Pod "pod-projected-secrets-f5652f2d-e7c4-4141-b6f0-090d4b7ed69a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.205483429s STEP: Saw pod success May 15 22:30:34.343: INFO: Pod "pod-projected-secrets-f5652f2d-e7c4-4141-b6f0-090d4b7ed69a" satisfied condition "success or failure" May 15 22:30:34.346: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-f5652f2d-e7c4-4141-b6f0-090d4b7ed69a container projected-secret-volume-test: STEP: delete the pod May 15 22:30:34.370: INFO: Waiting for pod pod-projected-secrets-f5652f2d-e7c4-4141-b6f0-090d4b7ed69a to disappear May 15 22:30:34.393: INFO: Pod pod-projected-secrets-f5652f2d-e7c4-4141-b6f0-090d4b7ed69a no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:30:34.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5951" for this suite. 
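A projected volume with a secret source behaves like a classic secret volume; the test writes a known payload, mounts it, and has the container print the file so the framework can verify it from the container logs it fetches above. Minimal sketch (names and payload are illustrative):

apiVersion: v1
kind: Secret
metadata:
  name: projected-secret-demo        # hypothetical name
data:
  data-1: dmFsdWUtMQ==               # base64 for "value-1"
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/projected-secret/data-1"]
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/projected-secret
      readOnly: true
  volumes:
  - name: secret-vol
    projected:
      sources:
      - secret:
          name: projected-secret-demo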
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":276,"skipped":4542,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:30:34.399: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 15 22:30:42.535: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 15 22:30:42.544: INFO: Pod pod-with-poststart-exec-hook still exists May 15 22:30:44.544: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 15 22:30:44.548: INFO: Pod pod-with-poststart-exec-hook still exists May 15 22:30:46.544: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 15 22:30:46.547: INFO: Pod pod-with-poststart-exec-hook still exists May 15 22:30:48.544: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 15 22:30:48.550: INFO: Pod pod-with-poststart-exec-hook still exists May 15 22:30:50.544: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 15 22:30:50.548: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:30:50.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-762" for this suite. 
• [SLOW TEST:16.158 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":278,"completed":277,"skipped":4554,"failed":0} [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 15 22:30:50.557: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1790 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 15 22:30:50.598: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-2864' May 15 22:30:50.715: INFO: stderr: "" May 15 22:30:50.715: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created May 15 22:30:55.766: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-2864 -o json' May 15 22:30:55.989: INFO: stderr: "" May 15 22:30:55.989: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-05-15T22:30:50Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-2864\",\n \"resourceVersion\": \"16491239\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-2864/pods/e2e-test-httpd-pod\",\n \"uid\": \"27dc46e0-aa78-467f-8389-acff4c658495\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-b7h96\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"jerma-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n 
\"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-b7h96\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-b7h96\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-15T22:30:50Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-15T22:30:53Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-15T22:30:53Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-15T22:30:50Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://f225b8176674074a35cbb5bfb3a66051d50cb90b6816870f526688d628df0179\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-05-15T22:30:53Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.8\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.10\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.2.10\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-05-15T22:30:50Z\"\n }\n}\n" STEP: replace the image in the pod May 15 22:30:55.989: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-2864' May 15 22:30:56.384: INFO: stderr: "" May 15 22:30:56.384: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1795 May 15 22:30:56.721: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-2864' May 15 22:31:09.484: INFO: stderr: "" May 15 22:31:09.484: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 15 22:31:09.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2864" for this suite. 
• [SLOW TEST:18.935 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1786 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":278,"completed":278,"skipped":4554,"failed":0} SSSSSSSSSSMay 15 22:31:09.493: INFO: Running AfterSuite actions on all nodes May 15 22:31:09.493: INFO: Running AfterSuite actions on node 1 May 15 22:31:09.493: INFO: Skipping dumping logs from cluster {"msg":"Test Suite completed","total":278,"completed":278,"skipped":4564,"failed":0} Ran 278 of 4842 Specs in 4857.374 seconds SUCCESS! -- 278 Passed | 0 Failed | 0 Pending | 4564 Skipped PASS