I0311 12:55:23.790019 6 e2e.go:243] Starting e2e run "6869a2d0-89c9-4ec5-9fe0-59252ed61a50" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1583931322 - Will randomize all specs
Will run 215 of 4412 specs
Mar 11 12:55:23.983: INFO: >>> kubeConfig: /root/.kube/config
Mar 11 12:55:23.986: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Mar 11 12:55:24.002: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Mar 11 12:55:24.027: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Mar 11 12:55:24.027: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Mar 11 12:55:24.027: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Mar 11 12:55:24.035: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Mar 11 12:55:24.035: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Mar 11 12:55:24.035: INFO: e2e test version: v1.15.10
Mar 11 12:55:24.036: INFO: kube-apiserver version: v1.15.7
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 11 12:55:24.036: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
Mar 11 12:55:24.091: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Mar 11 12:55:24.127: INFO: Waiting up to 5m0s for pod "pod-37ca0a92-523e-4cc0-9e46-d795a4180a79" in namespace "emptydir-2167" to be "success or failure"
Mar 11 12:55:24.139: INFO: Pod "pod-37ca0a92-523e-4cc0-9e46-d795a4180a79": Phase="Pending", Reason="", readiness=false. Elapsed: 11.812558ms
Mar 11 12:55:26.145: INFO: Pod "pod-37ca0a92-523e-4cc0-9e46-d795a4180a79": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.018139107s
STEP: Saw pod success
Mar 11 12:55:26.145: INFO: Pod "pod-37ca0a92-523e-4cc0-9e46-d795a4180a79" satisfied condition "success or failure"
Mar 11 12:55:26.149: INFO: Trying to get logs from node iruya-worker pod pod-37ca0a92-523e-4cc0-9e46-d795a4180a79 container test-container:
STEP: delete the pod
Mar 11 12:55:26.193: INFO: Waiting for pod pod-37ca0a92-523e-4cc0-9e46-d795a4180a79 to disappear
Mar 11 12:55:26.199: INFO: Pod pod-37ca0a92-523e-4cc0-9e46-d795a4180a79 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 11 12:55:26.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2167" for this suite.
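The pod this spec creates is roughly the following; a minimal sketch using k8s.io/api types, printed as YAML. The pod name is illustrative and busybox stands in for the e2e mounttest image the framework actually uses:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// A pod that writes a file with mode 0666 into an emptyDir volume on the
	// default medium (node disk, not Memory/tmpfs), then lists it so the
	// permissions can be asserted from the container log.
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0666-demo"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name:         "test-volume",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox", // the real test uses the mounttest image
				Command: []string{"sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
	y, err := yaml.Marshal(pod)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(y))
}

The "success or failure" condition in the log is the framework waiting for this pod to reach Phase=Succeeded.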
Mar 11 12:55:32.214: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 11 12:55:32.279: INFO: namespace emptydir-2167 deletion completed in 6.077002366s
• [SLOW TEST:8.244 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 11 12:55:32.280: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Mar 11 12:55:32.323: INFO: (0) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 3.269458ms)
Mar 11 12:55:32.326: INFO: (1) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 2.67528ms)
Mar 11 12:55:32.328: INFO: (2) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 2.570348ms)
Mar 11 12:55:32.331: INFO: (3) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 2.599214ms)
Mar 11 12:55:32.333: INFO: (4) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 2.032933ms)
Mar 11 12:55:32.335: INFO: (5) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 1.853985ms)
Mar 11 12:55:32.337: INFO: (6) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 1.849379ms)
Mar 11 12:55:32.339: INFO: (7) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 2.277914ms)
Mar 11 12:55:32.341: INFO: (8) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 2.5665ms)
Mar 11 12:55:32.344: INFO: (9) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 2.01741ms)
Mar 11 12:55:32.346: INFO: (10) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 2.717032ms)
Mar 11 12:55:32.348: INFO: (11) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 2.169978ms)
Mar 11 12:55:32.351: INFO: (12) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 2.175292ms)
Mar 11 12:55:32.353: INFO: (13) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 1.917829ms)
Mar 11 12:55:32.354: INFO: (14) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 1.806844ms)
Mar 11 12:55:32.356: INFO: (15) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 1.850663ms)
Mar 11 12:55:32.377: INFO: (16) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 20.742906ms)
Mar 11 12:55:32.380: INFO: (17) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 3.097929ms)
Mar 11 12:55:32.383: INFO: (18) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 2.608437ms)
Mar 11 12:55:32.385: INFO: (19) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 2.474962ms)
[AfterEach] version v1
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 11 12:55:32.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-4804" for this suite.
Mar 11 12:55:38.416: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 11 12:55:38.498: INFO: namespace proxy-4804 deletion completed in 6.109529459s
• [SLOW TEST:6.218 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
version v1
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
should proxy logs on node using proxy subresource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 11 12:55:38.498: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 11 12:55:40.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-9526" for this suite.
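The [sig-network] Proxy spec above issues twenty GETs against the node's logs subresource (/api/v1/nodes/iruya-worker/proxy/logs/). A rough client-go equivalent of one such request; the kubeconfig path and node name are taken from the log, and this is a sketch, not the e2e framework's exact code:

package main

import (
	"context"
	"fmt"
	"path/filepath"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(homedir.HomeDir(), ".kube", "config"))
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// GET /api/v1/nodes/<node>/proxy/logs/ -- the apiserver proxies the
	// request to the kubelet, which serves a directory listing (the
	// containers/ and pods/ entries echoed in the log above).
	body, err := client.CoreV1().RESTClient().Get().
		Resource("nodes").
		Name("iruya-worker"). // node name from the log; substitute your own
		SubResource("proxy").
		Suffix("logs/").
		Do(context.TODO()).Raw()
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body))
}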
Mar 11 12:55:46.664: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 11 12:55:46.744: INFO: namespace emptydir-wrapper-9526 deletion completed in 6.093455024s
• [SLOW TEST:8.246 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
should not conflict [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 11 12:55:46.745: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Mar 11 12:55:46.828: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ba31b65b-d93a-4d82-a053-e0cc5b6dd50e" in namespace "downward-api-220" to be "success or failure"
Mar 11 12:55:46.835: INFO: Pod "downwardapi-volume-ba31b65b-d93a-4d82-a053-e0cc5b6dd50e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.956935ms
Mar 11 12:55:48.838: INFO: Pod "downwardapi-volume-ba31b65b-d93a-4d82-a053-e0cc5b6dd50e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009481472s
STEP: Saw pod success
Mar 11 12:55:48.838: INFO: Pod "downwardapi-volume-ba31b65b-d93a-4d82-a053-e0cc5b6dd50e" satisfied condition "success or failure"
Mar 11 12:55:48.840: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-ba31b65b-d93a-4d82-a053-e0cc5b6dd50e container client-container:
STEP: delete the pod
Mar 11 12:55:48.881: INFO: Waiting for pod downwardapi-volume-ba31b65b-d93a-4d82-a053-e0cc5b6dd50e to disappear
Mar 11 12:55:48.888: INFO: Pod downwardapi-volume-ba31b65b-d93a-4d82-a053-e0cc5b6dd50e no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 11 12:55:48.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-220" for this suite.
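The downward API volume under test exposes pod metadata as files, with DefaultMode controlling the permission bits of every file in the volume. A minimal sketch of such a pod; mode, paths, and names are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	defaultMode := int32(0400) // applied to every file unless overridden per item
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-defaultmode-demo"}, // illustrative
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						DefaultMode: &defaultMode,
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox", // the e2e test uses the mounttest image
				Command:      []string{"sh", "-c", "stat -c %a /etc/podinfo/podname"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	y, _ := yaml.Marshal(pod)
	fmt.Println(string(y))
}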
Mar 11 12:55:54.903: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 11 12:55:54.981: INFO: namespace downward-api-220 deletion completed in 6.090583153s
• [SLOW TEST:8.236 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 11 12:55:54.982: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0311 12:56:25.583469 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 11 12:56:25.583: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 11 12:56:25.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8279" for this suite.
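PropagationPolicy=Orphan deletes the owner object but strips, rather than cascades to, its dependents, which is exactly what the test asserts: the Deployment goes away and its ReplicaSet survives. A minimal client-go sketch; the deployment name and namespace are hypothetical:

package main

import (
	"context"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(homedir.HomeDir(), ".kube", "config"))
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Orphan: the garbage collector removes the ownerReference from the
	// ReplicaSet instead of deleting it along with the Deployment.
	orphan := metav1.DeletePropagationOrphan
	err = client.AppsV1().Deployments("default").Delete(context.TODO(),
		"simpletest-deployment", // hypothetical name
		metav1.DeleteOptions{PropagationPolicy: &orphan})
	if err != nil {
		panic(err)
	}
}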
Mar 11 12:56:31.615: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 11 12:56:31.713: INFO: namespace gc-8279 deletion completed in 6.126085296s
• [SLOW TEST:36.731 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 11 12:56:31.713: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should do a rolling update of a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the initial replication controller
Mar 11 12:56:31.774: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-11'
Mar 11 12:56:33.410: INFO: stderr: ""
Mar 11 12:56:33.410: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Mar 11 12:56:33.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-11'
Mar 11 12:56:33.507: INFO: stderr: ""
Mar 11 12:56:33.507: INFO: stdout: "update-demo-nautilus-kdpv4 update-demo-nautilus-n4l7g "
Mar 11 12:56:33.507: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kdpv4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-11'
Mar 11 12:56:33.590: INFO: stderr: ""
Mar 11 12:56:33.590: INFO: stdout: ""
Mar 11 12:56:33.590: INFO: update-demo-nautilus-kdpv4 is created but not running
Mar 11 12:56:38.591: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-11'
Mar 11 12:56:38.683: INFO: stderr: ""
Mar 11 12:56:38.683: INFO: stdout: "update-demo-nautilus-kdpv4 update-demo-nautilus-n4l7g "
Mar 11 12:56:38.683: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kdpv4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-11'
Mar 11 12:56:38.757: INFO: stderr: ""
Mar 11 12:56:38.757: INFO: stdout: "true"
Mar 11 12:56:38.757: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kdpv4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-11'
Mar 11 12:56:38.826: INFO: stderr: ""
Mar 11 12:56:38.826: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Mar 11 12:56:38.826: INFO: validating pod update-demo-nautilus-kdpv4
Mar 11 12:56:38.829: INFO: got data: {
  "image": "nautilus.jpg"
}
Mar 11 12:56:38.829: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Mar 11 12:56:38.829: INFO: update-demo-nautilus-kdpv4 is verified up and running
Mar 11 12:56:38.829: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-n4l7g -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-11'
Mar 11 12:56:38.894: INFO: stderr: ""
Mar 11 12:56:38.894: INFO: stdout: "true"
Mar 11 12:56:38.894: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-n4l7g -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-11'
Mar 11 12:56:38.960: INFO: stderr: ""
Mar 11 12:56:38.960: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Mar 11 12:56:38.960: INFO: validating pod update-demo-nautilus-n4l7g
Mar 11 12:56:38.990: INFO: got data: {
  "image": "nautilus.jpg"
}
Mar 11 12:56:38.990: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Mar 11 12:56:38.990: INFO: update-demo-nautilus-n4l7g is verified up and running
STEP: rolling-update to new replication controller
Mar 11 12:56:39.054: INFO: scanned /root for discovery docs:
Mar 11 12:56:39.055: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-11'
Mar 11 12:57:01.599: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Mar 11 12:57:01.599: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Mar 11 12:57:01.599: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-11'
Mar 11 12:57:01.713: INFO: stderr: ""
Mar 11 12:57:01.713: INFO: stdout: "update-demo-kitten-2fjq5 update-demo-kitten-mjttm "
Mar 11 12:57:01.713: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-2fjq5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-11'
Mar 11 12:57:01.806: INFO: stderr: ""
Mar 11 12:57:01.806: INFO: stdout: "true"
Mar 11 12:57:01.806: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-2fjq5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-11'
Mar 11 12:57:01.882: INFO: stderr: ""
Mar 11 12:57:01.882: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Mar 11 12:57:01.882: INFO: validating pod update-demo-kitten-2fjq5
Mar 11 12:57:01.885: INFO: got data: {
  "image": "kitten.jpg"
}
Mar 11 12:57:01.885: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Mar 11 12:57:01.885: INFO: update-demo-kitten-2fjq5 is verified up and running
Mar 11 12:57:01.886: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-mjttm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-11'
Mar 11 12:57:01.974: INFO: stderr: ""
Mar 11 12:57:01.974: INFO: stdout: "true"
Mar 11 12:57:01.974: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-mjttm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-11'
Mar 11 12:57:02.047: INFO: stderr: ""
Mar 11 12:57:02.047: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Mar 11 12:57:02.047: INFO: validating pod update-demo-kitten-mjttm
Mar 11 12:57:02.050: INFO: got data: {
  "image": "kitten.jpg"
}
Mar 11 12:57:02.050: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Mar 11 12:57:02.050: INFO: update-demo-kitten-mjttm is verified up and running
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 11 12:57:02.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-11" for this suite.
Mar 11 12:57:24.061: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 11 12:57:24.148: INFO: namespace kubectl-11 deletion completed in 22.096052102s
• [SLOW TEST:52.435 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should do a rolling update of a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 11 12:57:24.149: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name projected-secret-test-101adfd1-e257-4c99-bfa4-24d52d0aec27
STEP: Creating a pod to test consume secrets
Mar 11 12:57:24.238: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2dbbbaa4-9b5d-48ac-965d-f7f0dd8df587" in namespace "projected-4088" to be "success or failure"
Mar 11 12:57:24.257: INFO: Pod "pod-projected-secrets-2dbbbaa4-9b5d-48ac-965d-f7f0dd8df587": Phase="Pending", Reason="", readiness=false. Elapsed: 18.992704ms
Mar 11 12:57:26.261: INFO: Pod "pod-projected-secrets-2dbbbaa4-9b5d-48ac-965d-f7f0dd8df587": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.022388045s
STEP: Saw pod success
Mar 11 12:57:26.261: INFO: Pod "pod-projected-secrets-2dbbbaa4-9b5d-48ac-965d-f7f0dd8df587" satisfied condition "success or failure"
Mar 11 12:57:26.264: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-2dbbbaa4-9b5d-48ac-965d-f7f0dd8df587 container secret-volume-test:
STEP: delete the pod
Mar 11 12:57:26.295: INFO: Waiting for pod pod-projected-secrets-2dbbbaa4-9b5d-48ac-965d-f7f0dd8df587 to disappear
Mar 11 12:57:26.296: INFO: Pod pod-projected-secrets-2dbbbaa4-9b5d-48ac-965d-f7f0dd8df587 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 11 12:57:26.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4088" for this suite.
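"Consumable in multiple volumes" means the same secret is projected into two separate volumes of one pod and read back from both mount points. A minimal sketch; the secret name, mount paths, and image are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// One secret, projected through two volumes.
	secretProjection := corev1.VolumeProjection{
		Secret: &corev1.SecretProjection{
			LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-demo"}, // illustrative
		},
	}
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-demo"}, // illustrative
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{
				{Name: "projected-secret-volume-1", VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{Sources: []corev1.VolumeProjection{secretProjection}}}},
				{Name: "projected-secret-volume-2", VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{Sources: []corev1.VolumeProjection{secretProjection}}}},
			},
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox", // the e2e test uses the mounttest image
				Command: []string{"sh", "-c", "cat /etc/projected-secret-volume-1/* /etc/projected-secret-volume-2/*"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "projected-secret-volume-1", MountPath: "/etc/projected-secret-volume-1", ReadOnly: true},
					{Name: "projected-secret-volume-2", MountPath: "/etc/projected-secret-volume-2", ReadOnly: true},
				},
			}},
		},
	}
	y, _ := yaml.Marshal(pod)
	fmt.Println(string(y))
}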
Mar 11 12:57:32.308: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 11 12:57:32.377: INFO: namespace projected-4088 deletion completed in 6.077983315s
• [SLOW TEST:8.228 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 11 12:57:32.377: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support proxy with --port 0 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting the proxy server
Mar 11 12:57:32.440: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 11 12:57:32.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2978" for this suite.
Mar 11 12:57:38.535: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 11 12:57:38.593: INFO: namespace kubectl-2978 deletion completed in 6.067270977s
• [SLOW TEST:6.215 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Proxy server
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should support proxy with --port 0 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 11 12:57:38.593: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Mar 11 12:57:43.198: INFO: Successfully updated pod "labelsupdate79fe0d37-e698-4758-9654-841f305b2b98"
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 11 12:57:45.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5946" for this suite.
Mar 11 12:58:07.237: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 11 12:58:07.317: INFO: namespace downward-api-5946 deletion completed in 22.09511907s
• [SLOW TEST:28.724 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 11 12:58:07.318: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-ee159079-b5a7-4eb0-ac87-36c8244c4a46 in namespace container-probe-2118
Mar 11 12:58:09.398: INFO: Started pod liveness-ee159079-b5a7-4eb0-ac87-36c8244c4a46 in namespace container-probe-2118
STEP: checking the pod's current state and verifying that restartCount is present
Mar 11 12:58:09.400: INFO: Initial restart count of pod liveness-ee159079-b5a7-4eb0-ac87-36c8244c4a46 is 0
Mar 11 12:58:21.427: INFO: Restart count of pod container-probe-2118/liveness-ee159079-b5a7-4eb0-ac87-36c8244c4a46 is now 1 (12.026550036s elapsed)
Mar 11 12:58:41.468: INFO: Restart count of pod container-probe-2118/liveness-ee159079-b5a7-4eb0-ac87-36c8244c4a46 is now 2 (32.067254575s elapsed)
Mar 11 12:59:01.507: INFO: Restart count of pod container-probe-2118/liveness-ee159079-b5a7-4eb0-ac87-36c8244c4a46 is now 3 (52.106558966s elapsed)
Mar 11 12:59:21.548: INFO: Restart count of pod container-probe-2118/liveness-ee159079-b5a7-4eb0-ac87-36c8244c4a46 is now 4 (1m12.14726104s elapsed)
Mar 11 13:00:23.681: INFO: Restart count of pod container-probe-2118/liveness-ee159079-b5a7-4eb0-ac87-36c8244c4a46 is now 5 (2m14.280726139s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 11 13:00:23.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2118" for this suite.
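The rising restartCount above comes from a liveness probe that keeps failing: the kubelet kills and restarts the container each time, and the widening gaps between restarts reflect the exponential back-off. A minimal sketch of such a pod, assuming a busybox command that deliberately removes the probed file; note the embedded field is named Handler in k8s.io/api releases contemporary with this log and ProbeHandler from v0.23 on:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-demo"}, // illustrative
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "liveness",
				Image: "busybox",
				// The health file exists briefly, then is removed, so the
				// probe fails and the kubelet restarts the container,
				// driving restartCount monotonically upward.
				Command: []string{"sh", "-c", "touch /tmp/health; sleep 10; rm -f /tmp/health; sleep 600"},
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{ // ProbeHandler in k8s.io/api >= v0.23
						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
					},
					InitialDelaySeconds: 5,
					PeriodSeconds:       5,
					FailureThreshold:    1,
				},
			}},
		},
	}
	y, _ := yaml.Marshal(pod)
	fmt.Println(string(y))
}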
Mar 11 13:00:29.752: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 11 13:00:29.832: INFO: namespace container-probe-2118 deletion completed in 6.093176147s
• [SLOW TEST:142.514 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should have monotonically increasing restart count [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 11 13:00:29.833: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0311 13:00:40.214464 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 11 13:00:40.214: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 11 13:00:40.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7648" for this suite.
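The "set half of pods ... to have rc simpletest-rc-to-stay as owner as well" step hinges on dependents being able to carry multiple ownerReferences: the garbage collector only deletes a dependent once no owners remain. A sketch of the doubly-owned metadata, with placeholder UIDs since real ones come from the created RCs:

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"sigs.k8s.io/yaml"
)

func main() {
	// A dependent pod with two owners. Deleting the first RC (even waiting
	// for dependents) must not remove the pod while the second owner stays.
	meta := metav1.ObjectMeta{
		Name: "simpletest-pod", // hypothetical
		OwnerReferences: []metav1.OwnerReference{
			{APIVersion: "v1", Kind: "ReplicationController",
				Name: "simpletest-rc-to-be-deleted", UID: types.UID("uid-of-rc-1")}, // placeholder UID
			{APIVersion: "v1", Kind: "ReplicationController",
				Name: "simpletest-rc-to-stay", UID: types.UID("uid-of-rc-2")}, // placeholder UID
		},
	}
	y, _ := yaml.Marshal(meta)
	fmt.Println(string(y))
}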
Mar 11 13:00:48.232: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 11 13:00:48.304: INFO: namespace gc-7648 deletion completed in 8.086892564s
• [SLOW TEST:18.472 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 11 13:00:48.305: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-51d08d66-7ed6-46fe-83ad-bea78897ed48
STEP: Creating a pod to test consume secrets
Mar 11 13:00:48.371: INFO: Waiting up to 5m0s for pod "pod-secrets-8c9e3a6b-4722-469c-a565-a9d2c225a873" in namespace "secrets-7546" to be "success or failure"
Mar 11 13:00:48.375: INFO: Pod "pod-secrets-8c9e3a6b-4722-469c-a565-a9d2c225a873": Phase="Pending", Reason="", readiness=false. Elapsed: 3.504259ms
Mar 11 13:00:50.378: INFO: Pod "pod-secrets-8c9e3a6b-4722-469c-a565-a9d2c225a873": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006163791s
STEP: Saw pod success
Mar 11 13:00:50.378: INFO: Pod "pod-secrets-8c9e3a6b-4722-469c-a565-a9d2c225a873" satisfied condition "success or failure"
Mar 11 13:00:50.380: INFO: Trying to get logs from node iruya-worker pod pod-secrets-8c9e3a6b-4722-469c-a565-a9d2c225a873 container secret-env-test:
STEP: delete the pod
Mar 11 13:00:50.442: INFO: Waiting for pod pod-secrets-8c9e3a6b-4722-469c-a565-a9d2c225a873 to disappear
Mar 11 13:00:50.446: INFO: Pod pod-secrets-8c9e3a6b-4722-469c-a565-a9d2c225a873 no longer exists
[AfterEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 11 13:00:50.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7546" for this suite.
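Consuming a secret via env vars means wiring a container environment variable to a secret key with valueFrom.secretKeyRef. A minimal sketch; the secret name and key are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-env-demo"}, // illustrative
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "secret-env-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env | grep SECRET_DATA"},
				Env: []corev1.EnvVar{{
					Name: "SECRET_DATA",
					ValueFrom: &corev1.EnvVarSource{
						SecretKeyRef: &corev1.SecretKeySelector{
							// Secret name and key are illustrative.
							LocalObjectReference: corev1.LocalObjectReference{Name: "secret-test-demo"},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}
	y, _ := yaml.Marshal(pod)
	fmt.Println(string(y))
}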
Mar 11 13:00:56.459: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 11 13:00:56.551: INFO: namespace secrets-7546 deletion completed in 6.101905669s
• [SLOW TEST:8.246 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
should be consumable from pods in env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 11 13:00:56.552: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Mar 11 13:00:56.630: INFO: Waiting up to 5m0s for pod "pod-3a20699e-f00d-4944-b7ea-f9ff358b6c3f" in namespace "emptydir-4191" to be "success or failure"
Mar 11 13:00:56.648: INFO: Pod "pod-3a20699e-f00d-4944-b7ea-f9ff358b6c3f": Phase="Pending", Reason="", readiness=false. Elapsed: 18.346653ms
Mar 11 13:00:58.651: INFO: Pod "pod-3a20699e-f00d-4944-b7ea-f9ff358b6c3f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.021776796s
STEP: Saw pod success
Mar 11 13:00:58.652: INFO: Pod "pod-3a20699e-f00d-4944-b7ea-f9ff358b6c3f" satisfied condition "success or failure"
Mar 11 13:00:58.654: INFO: Trying to get logs from node iruya-worker2 pod pod-3a20699e-f00d-4944-b7ea-f9ff358b6c3f container test-container:
STEP: delete the pod
Mar 11 13:00:58.666: INFO: Waiting for pod pod-3a20699e-f00d-4944-b7ea-f9ff358b6c3f to disappear
Mar 11 13:00:58.671: INFO: Pod pod-3a20699e-f00d-4944-b7ea-f9ff358b6c3f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 11 13:00:58.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4191" for this suite.
Mar 11 13:01:04.714: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 11 13:01:04.797: INFO: namespace emptydir-4191 deletion completed in 6.12271432s
• [SLOW TEST:8.245 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 11 13:01:04.798: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Mar 11 13:01:04.849: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Mar 11 13:01:04.855: INFO: Waiting for terminating namespaces to be deleted...
Mar 11 13:01:04.857: INFO: Logging pods the kubelet thinks is on node iruya-worker before test
Mar 11 13:01:04.861: INFO: kube-proxy-nf96r from kube-system started at 2020-03-08 14:39:47 +0000 UTC (1 container statuses recorded)
Mar 11 13:01:04.862: INFO: Container kube-proxy ready: true, restart count 0
Mar 11 13:01:04.862: INFO: kindnet-9jdkr from kube-system started at 2020-03-08 14:39:47 +0000 UTC (1 container statuses recorded)
Mar 11 13:01:04.862: INFO: Container kindnet-cni ready: true, restart count 0
Mar 11 13:01:04.862: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test
Mar 11 13:01:04.866: INFO: kube-proxy-clpmt from kube-system started at 2020-03-08 14:39:47 +0000 UTC (1 container statuses recorded)
Mar 11 13:01:04.866: INFO: Container kube-proxy ready: true, restart count 0
Mar 11 13:01:04.866: INFO: kindnet-d7zdc from kube-system started at 2020-03-08 14:39:47 +0000 UTC (1 container statuses recorded)
Mar 11 13:01:04.866: INFO: Container kindnet-cni ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: Type = [Warning], Name = [restricted-pod.15fb417509c02a94], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 11 13:01:05.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-6956" for this suite.
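The FailedScheduling event above is produced by a pod whose nodeSelector matches no node label in the cluster, so it stays Pending. A minimal sketch; the selector key/value and image are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// No node carries this label, so the scheduler emits
	// "0/3 nodes are available: 3 node(s) didn't match node selector."
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
		Spec: corev1.PodSpec{
			NodeSelector: map[string]string{"label": "nonempty"}, // illustrative key/value
			Containers: []corev1.Container{{
				Name:  "restricted-pod",
				Image: "k8s.gcr.io/pause:3.1", // placeholder image
			}},
		},
	}
	y, _ := yaml.Marshal(pod)
	fmt.Println(string(y))
}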
Mar 11 13:01:11.897: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 11 13:01:11.975: INFO: namespace sched-pred-6956 deletion completed in 6.088642258s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72
• [SLOW TEST:7.177 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
validates that NodeSelector is respected if not matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 11 13:01:11.976: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-projected-4x28
STEP: Creating a pod to test atomic-volume-subpath
Mar 11 13:01:12.039: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-4x28" in namespace "subpath-8208" to be "success or failure"
Mar 11 13:01:12.043: INFO: Pod "pod-subpath-test-projected-4x28": Phase="Pending", Reason="", readiness=false. Elapsed: 4.161733ms
Mar 11 13:01:14.058: INFO: Pod "pod-subpath-test-projected-4x28": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019253493s
Mar 11 13:01:16.061: INFO: Pod "pod-subpath-test-projected-4x28": Phase="Running", Reason="", readiness=true. Elapsed: 4.022172673s
Mar 11 13:01:18.065: INFO: Pod "pod-subpath-test-projected-4x28": Phase="Running", Reason="", readiness=true. Elapsed: 6.026388219s
Mar 11 13:01:20.070: INFO: Pod "pod-subpath-test-projected-4x28": Phase="Running", Reason="", readiness=true. Elapsed: 8.031034457s
Mar 11 13:01:22.074: INFO: Pod "pod-subpath-test-projected-4x28": Phase="Running", Reason="", readiness=true. Elapsed: 10.035307382s
Mar 11 13:01:24.077: INFO: Pod "pod-subpath-test-projected-4x28": Phase="Running", Reason="", readiness=true. Elapsed: 12.038392579s
Mar 11 13:01:26.081: INFO: Pod "pod-subpath-test-projected-4x28": Phase="Running", Reason="", readiness=true. Elapsed: 14.041885134s
Mar 11 13:01:28.085: INFO: Pod "pod-subpath-test-projected-4x28": Phase="Running", Reason="", readiness=true. Elapsed: 16.045701077s
Mar 11 13:01:30.089: INFO: Pod "pod-subpath-test-projected-4x28": Phase="Running", Reason="", readiness=true. Elapsed: 18.049792413s
Mar 11 13:01:32.093: INFO: Pod "pod-subpath-test-projected-4x28": Phase="Running", Reason="", readiness=true. Elapsed: 20.054179763s
Mar 11 13:01:34.097: INFO: Pod "pod-subpath-test-projected-4x28": Phase="Running", Reason="", readiness=true. Elapsed: 22.057824834s
Mar 11 13:01:36.101: INFO: Pod "pod-subpath-test-projected-4x28": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.061976745s
STEP: Saw pod success
Mar 11 13:01:36.101: INFO: Pod "pod-subpath-test-projected-4x28" satisfied condition "success or failure"
Mar 11 13:01:36.104: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-projected-4x28 container test-container-subpath-projected-4x28:
STEP: delete the pod
Mar 11 13:01:36.144: INFO: Waiting for pod pod-subpath-test-projected-4x28 to disappear
Mar 11 13:01:36.148: INFO: Pod pod-subpath-test-projected-4x28 no longer exists
STEP: Deleting pod pod-subpath-test-projected-4x28
Mar 11 13:01:36.148: INFO: Deleting pod "pod-subpath-test-projected-4x28" in namespace "subpath-8208"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 11 13:01:36.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-8208" for this suite.
Mar 11 13:01:42.187: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 11 13:01:42.261: INFO: namespace subpath-8208 deletion completed in 6.107102231s
• [SLOW TEST:30.286 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
should support subpaths with projected pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 11 13:01:42.261: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support --unix-socket=/path [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Starting the proxy
Mar 11 13:01:42.308: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix988314122/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 11 13:01:42.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2621" for this suite.
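Returning to the Atomic writer Subpath spec above: it mounts a single entry of a projected volume through volumeMounts.subPath while a writer keeps the volume updated atomically, and the container must keep seeing consistent content. A minimal sketch of that mount shape; the configmap name, key, and image are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test-projected-demo"}, // illustrative
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "my-configmap"}, // illustrative
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container-subpath",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /test-volume/my-key"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume/my-key",
					// subPath mounts one entry of the atomically updated
					// projected volume instead of the whole directory.
					SubPath: "my-key", // illustrative key name
				}},
			}},
		},
	}
	y, _ := yaml.Marshal(pod)
	fmt.Println(string(y))
}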
Mar 11 13:01:48.379: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 11 13:01:48.446: INFO: namespace kubectl-2621 deletion completed in 6.083023147s
• [SLOW TEST:6.185 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Proxy server
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should support --unix-socket=/path [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 11 13:01:48.446: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Mar 11 13:01:48.485: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 11 13:01:49.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-7894" for this suite.
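The test registers a throwaway CustomResourceDefinition and deletes it again. A minimal sketch of such a CRD; all names are hypothetical, and note this v1.15-era suite used apiextensions.k8s.io/v1beta1, whereas the v1 API shown here additionally requires a structural schema:

package main

import (
	"fmt"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	crd := apiextensionsv1.CustomResourceDefinition{
		TypeMeta:   metav1.TypeMeta{APIVersion: "apiextensions.k8s.io/v1", Kind: "CustomResourceDefinition"},
		ObjectMeta: metav1.ObjectMeta{Name: "noxus.mygroup.example.com"}, // hypothetical; must be <plural>.<group>
		Spec: apiextensionsv1.CustomResourceDefinitionSpec{
			Group: "mygroup.example.com",
			Scope: apiextensionsv1.NamespaceScoped,
			Names: apiextensionsv1.CustomResourceDefinitionNames{
				Plural:   "noxus",
				Singular: "noxu",
				Kind:     "Noxu",
				ListKind: "NoxuList",
			},
			Versions: []apiextensionsv1.CustomResourceDefinitionVersion{{
				Name:    "v1",
				Served:  true,
				Storage: true,
				Schema: &apiextensionsv1.CustomResourceValidation{
					// Minimal structural schema required by the v1 API.
					OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{Type: "object"},
				},
			}},
		},
	}
	y, _ := yaml.Marshal(crd)
	fmt.Println(string(y))
}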
Mar 11 13:01:55.618: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 11 13:01:55.702: INFO: namespace custom-resource-definition-7894 deletion completed in 6.140954618s
• [SLOW TEST:7.256 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
Simple CustomResourceDefinition
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
creating/deleting custom resource definition objects works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 11 13:01:55.703: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Mar 11 13:01:55.806: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f45a9067-a77d-4df8-aa1f-b4379f17477b" in namespace "downward-api-8580" to be "success or failure"
Mar 11 13:01:55.810: INFO: Pod "downwardapi-volume-f45a9067-a77d-4df8-aa1f-b4379f17477b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.710336ms
Mar 11 13:01:57.813: INFO: Pod "downwardapi-volume-f45a9067-a77d-4df8-aa1f-b4379f17477b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007120881s
STEP: Saw pod success
Mar 11 13:01:57.813: INFO: Pod "downwardapi-volume-f45a9067-a77d-4df8-aa1f-b4379f17477b" satisfied condition "success or failure"
Mar 11 13:01:57.816: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-f45a9067-a77d-4df8-aa1f-b4379f17477b container client-container:
STEP: delete the pod
Mar 11 13:01:57.884: INFO: Waiting for pod downwardapi-volume-f45a9067-a77d-4df8-aa1f-b4379f17477b to disappear
Mar 11 13:01:57.888: INFO: Pod downwardapi-volume-f45a9067-a77d-4df8-aa1f-b4379f17477b no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 11 13:01:57.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8580" for this suite.
Mar 11 13:02:03.903: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:02:03.982: INFO: namespace downward-api-8580 deletion completed in 6.091055527s • [SLOW TEST:8.280 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:02:03.983: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-9364 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 11 13:02:04.032: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 11 13:02:26.157: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.198 8081 | grep -v '^\s*$'] Namespace:pod-network-test-9364 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 11 13:02:26.157: INFO: >>> kubeConfig: /root/.kube/config I0311 13:02:26.192072 6 log.go:172] (0xc000b6f130) (0xc00197f180) Create stream I0311 13:02:26.192101 6 log.go:172] (0xc000b6f130) (0xc00197f180) Stream added, broadcasting: 1 I0311 13:02:26.197060 6 log.go:172] (0xc000b6f130) Reply frame received for 1 I0311 13:02:26.197096 6 log.go:172] (0xc000b6f130) (0xc00197f220) Create stream I0311 13:02:26.197103 6 log.go:172] (0xc000b6f130) (0xc00197f220) Stream added, broadcasting: 3 I0311 13:02:26.199037 6 log.go:172] (0xc000b6f130) Reply frame received for 3 I0311 13:02:26.199064 6 log.go:172] (0xc000b6f130) (0xc00197f2c0) Create stream I0311 13:02:26.199075 6 log.go:172] (0xc000b6f130) (0xc00197f2c0) Stream added, broadcasting: 5 I0311 13:02:26.200167 6 log.go:172] (0xc000b6f130) Reply frame received for 5 I0311 13:02:27.268334 6 log.go:172] (0xc000b6f130) Data frame received for 5 I0311 13:02:27.268364 6 log.go:172] (0xc00197f2c0) (5) Data frame handling I0311 13:02:27.268450 6 log.go:172] (0xc000b6f130) Data frame received for 3 I0311 13:02:27.268540 6 log.go:172] (0xc00197f220) (3) Data frame handling I0311 13:02:27.268584 6 log.go:172] (0xc00197f220) (3) Data frame sent I0311 13:02:27.268626 6 log.go:172] (0xc000b6f130) Data frame received for 3 I0311 13:02:27.268663 6 log.go:172] (0xc00197f220) (3) Data frame handling I0311 13:02:27.270322 6 log.go:172] (0xc000b6f130) Data frame received for 1 I0311 13:02:27.270350 6 log.go:172] (0xc00197f180) (1) Data frame handling I0311 13:02:27.270368 6 log.go:172] (0xc00197f180) (1) Data frame sent I0311 
13:02:27.270391 6 log.go:172] (0xc000b6f130) (0xc00197f180) Stream removed, broadcasting: 1 I0311 13:02:27.270504 6 log.go:172] (0xc000b6f130) (0xc00197f180) Stream removed, broadcasting: 1 I0311 13:02:27.270522 6 log.go:172] (0xc000b6f130) (0xc00197f220) Stream removed, broadcasting: 3 I0311 13:02:27.270584 6 log.go:172] (0xc000b6f130) Go away received I0311 13:02:27.270767 6 log.go:172] (0xc000b6f130) (0xc00197f2c0) Stream removed, broadcasting: 5 Mar 11 13:02:27.270: INFO: Found all expected endpoints: [netserver-0] Mar 11 13:02:27.275: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.43 8081 | grep -v '^\s*$'] Namespace:pod-network-test-9364 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 11 13:02:27.275: INFO: >>> kubeConfig: /root/.kube/config I0311 13:02:27.317062 6 log.go:172] (0xc0017aa8f0) (0xc00055e6e0) Create stream I0311 13:02:27.317086 6 log.go:172] (0xc0017aa8f0) (0xc00055e6e0) Stream added, broadcasting: 1 I0311 13:02:27.319010 6 log.go:172] (0xc0017aa8f0) Reply frame received for 1 I0311 13:02:27.319034 6 log.go:172] (0xc0017aa8f0) (0xc0016e8500) Create stream I0311 13:02:27.319047 6 log.go:172] (0xc0017aa8f0) (0xc0016e8500) Stream added, broadcasting: 3 I0311 13:02:27.319916 6 log.go:172] (0xc0017aa8f0) Reply frame received for 3 I0311 13:02:27.319956 6 log.go:172] (0xc0017aa8f0) (0xc0016e8640) Create stream I0311 13:02:27.319966 6 log.go:172] (0xc0017aa8f0) (0xc0016e8640) Stream added, broadcasting: 5 I0311 13:02:27.320754 6 log.go:172] (0xc0017aa8f0) Reply frame received for 5 I0311 13:02:28.377609 6 log.go:172] (0xc0017aa8f0) Data frame received for 3 I0311 13:02:28.377627 6 log.go:172] (0xc0016e8500) (3) Data frame handling I0311 13:02:28.377632 6 log.go:172] (0xc0016e8500) (3) Data frame sent I0311 13:02:28.377636 6 log.go:172] (0xc0017aa8f0) Data frame received for 3 I0311 13:02:28.377639 6 log.go:172] (0xc0016e8500) (3) Data frame handling I0311 13:02:28.377651 6 log.go:172] (0xc0017aa8f0) Data frame received for 5 I0311 13:02:28.377654 6 log.go:172] (0xc0016e8640) (5) Data frame handling I0311 13:02:28.378644 6 log.go:172] (0xc0017aa8f0) Data frame received for 1 I0311 13:02:28.378658 6 log.go:172] (0xc00055e6e0) (1) Data frame handling I0311 13:02:28.378666 6 log.go:172] (0xc00055e6e0) (1) Data frame sent I0311 13:02:28.378676 6 log.go:172] (0xc0017aa8f0) (0xc00055e6e0) Stream removed, broadcasting: 1 I0311 13:02:28.378688 6 log.go:172] (0xc0017aa8f0) Go away received I0311 13:02:28.378802 6 log.go:172] (0xc0017aa8f0) (0xc00055e6e0) Stream removed, broadcasting: 1 I0311 13:02:28.378816 6 log.go:172] (0xc0017aa8f0) (0xc0016e8500) Stream removed, broadcasting: 3 I0311 13:02:28.378825 6 log.go:172] (0xc0017aa8f0) (0xc0016e8640) Stream removed, broadcasting: 5 Mar 11 13:02:28.378: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:02:28.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-9364" for this suite. 
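The two ExecWithOptions entries above are the whole of the node-to-pod check: from the host-network helper pod the suite pipes "hostName" over UDP to each netserver pod IP on port 8081 and expects a non-empty reply, which is how it arrives at "Found all expected endpoints: [netserver-0]" and "[netserver-1]". A sketch of one probe, reusing the exact option fields the log prints; the return shape assumes the v1.15 framework's ExecWithOptions:

    import (
        "fmt"

        "k8s.io/kubernetes/test/e2e/framework"
    )

    // probeUDP runs the same check the suite logs above: push "hostName" over UDP
    // to a netserver pod and return whatever non-empty reply comes back.
    func probeUDP(f *framework.Framework, podIP string) (string, error) {
        cmd := fmt.Sprintf("echo hostName | nc -w 1 -u %s 8081 | grep -v '^\\s*$'", podIP)
        stdout, _, err := f.ExecWithOptions(framework.ExecOptions{
            Command:       []string{"/bin/sh", "-c", cmd},
            Namespace:     "pod-network-test-9364",
            PodName:       "host-test-container-pod",
            ContainerName: "hostexec",
            CaptureStdout: true,
            CaptureStderr: true,
        })
        // An empty reply after the grep filter means the UDP packet
        // never made it from the node to the pod.
        return stdout, err
    }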
Mar 11 13:02:50.394: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:02:50.465: INFO: namespace pod-network-test-9364 deletion completed in 22.083766023s • [SLOW TEST:46.482 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:02:50.465: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: Gathering metrics W0311 13:02:56.553490 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 11 13:02:56.553: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:02:56.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7254" for this suite. 
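"If the deleteOptions says so" here means foreground cascading deletion: the RC is deleted with PropagationPolicy set to Foreground, so the API server parks it behind a foregroundDeletion finalizer until the garbage collector has removed every pod it owns, which is what the "wait for the rc to be deleted" step observes. A minimal sketch with the pre-context v1.15 client signature:

    import (
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // deleteRCForeground issues the "deleteOptions says so" delete: with Foreground
    // propagation the RC picks up a foregroundDeletion finalizer and stays visible
    // until the garbage collector has removed every pod it owns.
    func deleteRCForeground(clientset kubernetes.Interface, ns, name string) error {
        foreground := metav1.DeletePropagationForeground
        return clientset.CoreV1().ReplicationControllers(ns).Delete(
            name, // the suite's RC name is not printed above
            &metav1.DeleteOptions{PropagationPolicy: &foreground}, // v1.15 no-context signature
        )
    }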
Mar 11 13:03:02.569: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:03:02.643: INFO: namespace gc-7254 deletion completed in 6.088274377s • [SLOW TEST:12.178 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:03:02.645: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 11 13:03:02.704: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7a01de89-e690-4718-b6eb-a1576153060c" in namespace "downward-api-8080" to be "success or failure" Mar 11 13:03:02.720: INFO: Pod "downwardapi-volume-7a01de89-e690-4718-b6eb-a1576153060c": Phase="Pending", Reason="", readiness=false. Elapsed: 16.288943ms Mar 11 13:03:04.724: INFO: Pod "downwardapi-volume-7a01de89-e690-4718-b6eb-a1576153060c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.020353437s STEP: Saw pod success Mar 11 13:03:04.724: INFO: Pod "downwardapi-volume-7a01de89-e690-4718-b6eb-a1576153060c" satisfied condition "success or failure" Mar 11 13:03:04.727: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-7a01de89-e690-4718-b6eb-a1576153060c container client-container: STEP: delete the pod Mar 11 13:03:04.763: INFO: Waiting for pod downwardapi-volume-7a01de89-e690-4718-b6eb-a1576153060c to disappear Mar 11 13:03:04.767: INFO: Pod downwardapi-volume-7a01de89-e690-4718-b6eb-a1576153060c no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:03:04.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8080" for this suite. 
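The check behind "should set mode on item file" comes down to one field: each DownwardAPIVolumeFile may carry an explicit Mode that overrides the volume's DefaultMode for that item. A sketch of the item the pod above plausibly mounts; the exact mode value is an assumption, since the log only shows the pod lifecycle:

    import corev1 "k8s.io/api/core/v1"

    var mode int32 = 0400 // assumed value for illustration

    var items = []corev1.DownwardAPIVolumeFile{{
        Path:     "podname",
        FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
        Mode:     &mode, // per-item override; surfaces as -r-------- inside the container
    }}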
Mar 11 13:03:10.783: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:03:10.863: INFO: namespace downward-api-8080 deletion completed in 6.092650001s • [SLOW TEST:8.218 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:03:10.864: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 11 13:03:10.977: INFO: Pod name rollover-pod: Found 0 pods out of 1 Mar 11 13:03:15.982: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 11 13:03:15.983: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Mar 11 13:03:17.987: INFO: Creating deployment "test-rollover-deployment" Mar 11 13:03:17.997: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Mar 11 13:03:20.002: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Mar 11 13:03:20.006: INFO: Ensure that both replica sets have 1 created replica Mar 11 13:03:20.009: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Mar 11 13:03:20.013: INFO: Updating deployment test-rollover-deployment Mar 11 13:03:20.013: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Mar 11 13:03:22.019: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Mar 11 13:03:22.025: INFO: Make sure deployment "test-rollover-deployment" is complete Mar 11 13:03:22.029: INFO: all replica sets need to contain the pod-template-hash label Mar 11 13:03:22.030: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719528598, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719528598, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719528601, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719528598, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 11 13:03:24.038: INFO: all replica sets need to contain the pod-template-hash label Mar 11 13:03:24.038: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719528598, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719528598, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719528601, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719528598, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 11 13:03:26.037: INFO: all replica sets need to contain the pod-template-hash label Mar 11 13:03:26.037: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719528598, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719528598, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719528601, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719528598, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 11 13:03:28.037: INFO: all replica sets need to contain the pod-template-hash label Mar 11 13:03:28.037: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719528598, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719528598, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719528601, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719528598, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 11 13:03:30.035: INFO: all replica sets need to contain the pod-template-hash label Mar 11 13:03:30.035: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719528598, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719528598, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719528601, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719528598, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 11 13:03:32.058: INFO: Mar 11 13:03:32.058: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Mar 11 13:03:32.064: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-7209,SelfLink:/apis/apps/v1/namespaces/deployment-7209/deployments/test-rollover-deployment,UID:f3c479fb-ed89-416d-b64b-212a1538eb50,ResourceVersion:541782,Generation:2,CreationTimestamp:2020-03-11 13:03:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-03-11 13:03:18 +0000 UTC 2020-03-11 13:03:18 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-03-11 13:03:31 +0000 UTC 2020-03-11 13:03:18 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Mar 11 13:03:32.066: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-7209,SelfLink:/apis/apps/v1/namespaces/deployment-7209/replicasets/test-rollover-deployment-854595fc44,UID:8d71232d-a842-44d6-9b1e-175f9e8febd7,ResourceVersion:541771,Generation:2,CreationTimestamp:2020-03-11 13:03:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment f3c479fb-ed89-416d-b64b-212a1538eb50 0xc002a4c327 0xc002a4c328}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Mar 11 13:03:32.066: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Mar 11 13:03:32.066: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-7209,SelfLink:/apis/apps/v1/namespaces/deployment-7209/replicasets/test-rollover-controller,UID:bb4dda92-fedb-4d81-9ecf-79cb79c6514a,ResourceVersion:541780,Generation:2,CreationTimestamp:2020-03-11 13:03:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment f3c479fb-ed89-416d-b64b-212a1538eb50 0xc002a4c257 0xc002a4c258}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 11 13:03:32.066: INFO: 
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-7209,SelfLink:/apis/apps/v1/namespaces/deployment-7209/replicasets/test-rollover-deployment-9b8b997cf,UID:a327e3fa-cd51-4a15-9b90-4bbdc4373433,ResourceVersion:541741,Generation:2,CreationTimestamp:2020-03-11 13:03:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment f3c479fb-ed89-416d-b64b-212a1538eb50 0xc002a4c3f0 0xc002a4c3f1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 11 13:03:32.069: INFO: Pod "test-rollover-deployment-854595fc44-7xwwz" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-7xwwz,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-7209,SelfLink:/api/v1/namespaces/deployment-7209/pods/test-rollover-deployment-854595fc44-7xwwz,UID:ce57ac57-2993-4339-9747-832b0ef66ebc,ResourceVersion:541749,Generation:0,CreationTimestamp:2020-03-11 13:03:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 
8d71232d-a842-44d6-9b1e-175f9e8febd7 0xc0029a98f7 0xc0029a98f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-d86cf {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-d86cf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-d86cf true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0029a9970} {node.kubernetes.io/unreachable Exists NoExecute 0xc0029a9990}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:03:20 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:03:21 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:03:21 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:03:20 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.7,PodIP:10.244.2.205,StartTime:2020-03-11 13:03:20 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-03-11 13:03:21 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://dfc7e76d24722b859af9890575cad4568c362d54f85998a2c11b70820d180881}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:03:32.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7209" for this suite. 
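The long status dumps above are easier to read once the shape of the test is clear: a bare RC ("test-rollover-controller", nginx) is adopted by a deployment with MinReadySeconds:10, MaxUnavailable:0 and MaxSurge:1 (all visible in the Deployment dump), the pod template is repointed at a new image, and the suite polls until only the new ReplicaSet ("test-rollover-deployment-854595fc44") holds replicas while both old ones sit at zero. A sketch of the strategy and the rollover trigger, assuming the v1.15 no-context Update signature:

    import (
        appsv1 "k8s.io/api/apps/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
        "k8s.io/client-go/kubernetes"
    )

    // rollOver pins the strategy seen in the dump above and repoints the template,
    // which is all it takes to start the rollover.
    func rollOver(clientset kubernetes.Interface, d *appsv1.Deployment) (*appsv1.Deployment, error) {
        zero, one := intstr.FromInt(0), intstr.FromInt(1)
        d.Spec.MinReadySeconds = 10 // matches MinReadySeconds:10 in the dump
        d.Spec.Strategy = appsv1.DeploymentStrategy{
            Type: appsv1.RollingUpdateDeploymentStrategyType,
            RollingUpdate: &appsv1.RollingUpdateDeployment{
                MaxUnavailable: &zero, // never drop below the desired replica count
                MaxSurge:       &one,  // roll over one pod at a time
            },
        }
        d.Spec.Template.Spec.Containers[0].Image = "gcr.io/kubernetes-e2e-test-images/redis:1.0"
        return clientset.AppsV1().Deployments(d.Namespace).Update(d) // v1.15 no-context signature
    }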
Mar 11 13:03:38.098: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:03:38.155: INFO: namespace deployment-7209 deletion completed in 6.083859422s • [SLOW TEST:27.291 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:03:38.155: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name cm-test-opt-del-c425078d-4623-4970-9fe0-fdce87cdc5f4 STEP: Creating configMap with name cm-test-opt-upd-6899d77c-554a-4bc7-bc6f-0927bc4f42dd STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-c425078d-4623-4970-9fe0-fdce87cdc5f4 STEP: Updating configmap cm-test-opt-upd-6899d77c-554a-4bc7-bc6f-0927bc4f42dd STEP: Creating configMap with name cm-test-opt-create-aecd4609-0429-47ee-9f5b-db26c21f7b3e STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:05:14.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9906" for this suite. 
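All three ConfigMaps in this spec are mounted with Optional set, which is what lets the pod keep running while the cm-test-opt-del ConfigMap is deleted out from under it and the cm-test-opt-create one appears afterwards; the kubelet then syncs the volume contents, which is the "waiting to observe update in volume" step. The relevant volume source, sketched with corev1 types and an illustrative name:

    import corev1 "k8s.io/api/core/v1"

    var optional = true

    var vol = corev1.Volume{
        Name: "createcm-volume",
        VolumeSource: corev1.VolumeSource{
            ConfigMap: &corev1.ConfigMapVolumeSource{
                LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-create-example"},
                Optional:             &optional, // pod mounts (an empty dir) even while the ConfigMap is absent
            },
        },
    }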
Mar 11 13:05:36.750: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:05:36.818: INFO: namespace configmap-9906 deletion completed in 22.103829366s • [SLOW TEST:118.663 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:05:36.819: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-9c4d9d0c-c3fe-4ff8-8043-0b57c0dd91f4 STEP: Creating a pod to test consume secrets Mar 11 13:05:36.888: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2471b258-d665-43ae-aef3-3df5083a8a1a" in namespace "projected-2959" to be "success or failure" Mar 11 13:05:36.896: INFO: Pod "pod-projected-secrets-2471b258-d665-43ae-aef3-3df5083a8a1a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.291965ms Mar 11 13:05:38.901: INFO: Pod "pod-projected-secrets-2471b258-d665-43ae-aef3-3df5083a8a1a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013029566s Mar 11 13:05:40.905: INFO: Pod "pod-projected-secrets-2471b258-d665-43ae-aef3-3df5083a8a1a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017525553s STEP: Saw pod success Mar 11 13:05:40.905: INFO: Pod "pod-projected-secrets-2471b258-d665-43ae-aef3-3df5083a8a1a" satisfied condition "success or failure" Mar 11 13:05:40.908: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-2471b258-d665-43ae-aef3-3df5083a8a1a container projected-secret-volume-test: STEP: delete the pod Mar 11 13:05:40.939: INFO: Waiting for pod pod-projected-secrets-2471b258-d665-43ae-aef3-3df5083a8a1a to disappear Mar 11 13:05:40.944: INFO: Pod pod-projected-secrets-2471b258-d665-43ae-aef3-3df5083a8a1a no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:05:40.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2959" for this suite. 
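"Creating projection with secret" builds a projected volume, the union type that can merge secrets, ConfigMaps, downward API items and service account tokens into one mount; here it carries a single secret source. A sketch with corev1 types, where the names and default mode are illustrative rather than taken from the log:

    import corev1 "k8s.io/api/core/v1"

    var defaultMode int32 = 0644 // assumed

    var vol = corev1.Volume{
        Name: "projected-secret-volume",
        VolumeSource: corev1.VolumeSource{
            Projected: &corev1.ProjectedVolumeSource{
                DefaultMode: &defaultMode,
                Sources: []corev1.VolumeProjection{{
                    Secret: &corev1.SecretProjection{
                        LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test-example"},
                    },
                }},
            },
        },
    }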
Mar 11 13:05:46.966: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:05:47.044: INFO: namespace projected-2959 deletion completed in 6.096567137s • [SLOW TEST:10.226 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:05:47.045: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller Mar 11 13:05:47.078: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2822' Mar 11 13:05:47.413: INFO: stderr: "" Mar 11 13:05:47.413: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 11 13:05:47.413: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2822' Mar 11 13:05:47.531: INFO: stderr: "" Mar 11 13:05:47.531: INFO: stdout: "update-demo-nautilus-2ggvw update-demo-nautilus-d2hh2 " Mar 11 13:05:47.531: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2ggvw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2822' Mar 11 13:05:47.609: INFO: stderr: "" Mar 11 13:05:47.609: INFO: stdout: "" Mar 11 13:05:47.609: INFO: update-demo-nautilus-2ggvw is created but not running Mar 11 13:05:52.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2822' Mar 11 13:05:52.734: INFO: stderr: "" Mar 11 13:05:52.734: INFO: stdout: "update-demo-nautilus-2ggvw update-demo-nautilus-d2hh2 " Mar 11 13:05:52.734: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2ggvw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2822' Mar 11 13:05:52.835: INFO: stderr: "" Mar 11 13:05:52.835: INFO: stdout: "true" Mar 11 13:05:52.835: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2ggvw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2822' Mar 11 13:05:52.904: INFO: stderr: "" Mar 11 13:05:52.904: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 11 13:05:52.904: INFO: validating pod update-demo-nautilus-2ggvw Mar 11 13:05:52.907: INFO: got data: { "image": "nautilus.jpg" } Mar 11 13:05:52.907: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 11 13:05:52.907: INFO: update-demo-nautilus-2ggvw is verified up and running Mar 11 13:05:52.907: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d2hh2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2822' Mar 11 13:05:52.973: INFO: stderr: "" Mar 11 13:05:52.973: INFO: stdout: "true" Mar 11 13:05:52.973: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d2hh2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2822' Mar 11 13:05:53.036: INFO: stderr: "" Mar 11 13:05:53.036: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 11 13:05:53.036: INFO: validating pod update-demo-nautilus-d2hh2 Mar 11 13:05:53.038: INFO: got data: { "image": "nautilus.jpg" } Mar 11 13:05:53.038: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 11 13:05:53.038: INFO: update-demo-nautilus-d2hh2 is verified up and running STEP: scaling down the replication controller Mar 11 13:05:53.040: INFO: scanned /root for discovery docs: Mar 11 13:05:53.040: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-2822' Mar 11 13:05:54.124: INFO: stderr: "" Mar 11 13:05:54.124: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 11 13:05:54.125: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2822' Mar 11 13:05:54.202: INFO: stderr: "" Mar 11 13:05:54.202: INFO: stdout: "update-demo-nautilus-2ggvw update-demo-nautilus-d2hh2 " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 11 13:05:59.202: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2822' Mar 11 13:05:59.323: INFO: stderr: "" Mar 11 13:05:59.323: INFO: stdout: "update-demo-nautilus-2ggvw " Mar 11 13:05:59.323: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2ggvw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2822' Mar 11 13:05:59.393: INFO: stderr: "" Mar 11 13:05:59.393: INFO: stdout: "true" Mar 11 13:05:59.393: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2ggvw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2822' Mar 11 13:05:59.478: INFO: stderr: "" Mar 11 13:05:59.478: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 11 13:05:59.478: INFO: validating pod update-demo-nautilus-2ggvw Mar 11 13:05:59.481: INFO: got data: { "image": "nautilus.jpg" } Mar 11 13:05:59.481: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 11 13:05:59.481: INFO: update-demo-nautilus-2ggvw is verified up and running STEP: scaling up the replication controller Mar 11 13:05:59.483: INFO: scanned /root for discovery docs: Mar 11 13:05:59.483: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-2822' Mar 11 13:06:00.577: INFO: stderr: "" Mar 11 13:06:00.577: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 11 13:06:00.577: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2822' Mar 11 13:06:00.665: INFO: stderr: "" Mar 11 13:06:00.665: INFO: stdout: "update-demo-nautilus-2ggvw update-demo-nautilus-5g5t9 " Mar 11 13:06:00.665: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2ggvw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2822' Mar 11 13:06:00.741: INFO: stderr: "" Mar 11 13:06:00.741: INFO: stdout: "true" Mar 11 13:06:00.741: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2ggvw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2822' Mar 11 13:06:00.823: INFO: stderr: "" Mar 11 13:06:00.823: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 11 13:06:00.823: INFO: validating pod update-demo-nautilus-2ggvw Mar 11 13:06:00.826: INFO: got data: { "image": "nautilus.jpg" } Mar 11 13:06:00.826: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 11 13:06:00.826: INFO: update-demo-nautilus-2ggvw is verified up and running Mar 11 13:06:00.826: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5g5t9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2822' Mar 11 13:06:00.897: INFO: stderr: "" Mar 11 13:06:00.897: INFO: stdout: "" Mar 11 13:06:00.897: INFO: update-demo-nautilus-5g5t9 is created but not running Mar 11 13:06:05.897: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2822' Mar 11 13:06:06.006: INFO: stderr: "" Mar 11 13:06:06.006: INFO: stdout: "update-demo-nautilus-2ggvw update-demo-nautilus-5g5t9 " Mar 11 13:06:06.006: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2ggvw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2822' Mar 11 13:06:06.112: INFO: stderr: "" Mar 11 13:06:06.112: INFO: stdout: "true" Mar 11 13:06:06.112: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2ggvw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2822' Mar 11 13:06:06.196: INFO: stderr: "" Mar 11 13:06:06.197: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 11 13:06:06.197: INFO: validating pod update-demo-nautilus-2ggvw Mar 11 13:06:06.199: INFO: got data: { "image": "nautilus.jpg" } Mar 11 13:06:06.199: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 11 13:06:06.199: INFO: update-demo-nautilus-2ggvw is verified up and running Mar 11 13:06:06.199: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5g5t9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2822' Mar 11 13:06:06.311: INFO: stderr: "" Mar 11 13:06:06.311: INFO: stdout: "true" Mar 11 13:06:06.311: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5g5t9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2822' Mar 11 13:06:06.381: INFO: stderr: "" Mar 11 13:06:06.381: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 11 13:06:06.381: INFO: validating pod update-demo-nautilus-5g5t9 Mar 11 13:06:06.384: INFO: got data: { "image": "nautilus.jpg" } Mar 11 13:06:06.384: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 11 13:06:06.384: INFO: update-demo-nautilus-5g5t9 is verified up and running STEP: using delete to clean up resources Mar 11 13:06:06.384: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2822' Mar 11 13:06:06.451: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Mar 11 13:06:06.451: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Mar 11 13:06:06.451: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-2822' Mar 11 13:06:06.534: INFO: stderr: "No resources found.\n" Mar 11 13:06:06.534: INFO: stdout: "" Mar 11 13:06:06.534: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-2822 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 11 13:06:06.601: INFO: stderr: "" Mar 11 13:06:06.601: INFO: stdout: "update-demo-nautilus-2ggvw\nupdate-demo-nautilus-5g5t9\n" Mar 11 13:06:07.101: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-2822' Mar 11 13:06:07.194: INFO: stderr: "No resources found.\n" Mar 11 13:06:07.194: INFO: stdout: "" Mar 11 13:06:07.194: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-2822 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 11 13:06:07.262: INFO: stderr: "" Mar 11 13:06:07.262: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:06:07.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2822" for this suite. Mar 11 13:06:13.291: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:06:13.357: INFO: namespace kubectl-2822 deletion completed in 6.092453064s • [SLOW TEST:26.313 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:06:13.357: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 11 13:06:15.423: INFO: Expected: 
&{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:06:15.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7760" for this suite. Mar 11 13:06:21.470: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:06:21.550: INFO: namespace container-runtime-7760 deletion completed in 6.096383141s • [SLOW TEST:8.192 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:06:21.550: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 11 13:06:21.616: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Mar 11 13:06:26.620: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 11 13:06:26.620: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Mar 11 13:06:26.641: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-8671,SelfLink:/apis/apps/v1/namespaces/deployment-8671/deployments/test-cleanup-deployment,UID:68e12292-8a79-42d7-8a66-e7f0e179c3aa,ResourceVersion:542351,Generation:1,CreationTimestamp:2020-03-11 13:06:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: 
cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} Mar 11 13:06:26.647: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-8671,SelfLink:/apis/apps/v1/namespaces/deployment-8671/replicasets/test-cleanup-deployment-55bbcbc84c,UID:5aed47b8-3bed-485c-bc66-943736370886,ResourceVersion:542353,Generation:1,CreationTimestamp:2020-03-11 13:06:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 68e12292-8a79-42d7-8a66-e7f0e179c3aa 0xc002b6e217 0xc002b6e218}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 
55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 11 13:06:26.647: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Mar 11 13:06:26.648: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-8671,SelfLink:/apis/apps/v1/namespaces/deployment-8671/replicasets/test-cleanup-controller,UID:719ab9fa-d1de-49bf-83f6-bff2c47b96a8,ResourceVersion:542352,Generation:1,CreationTimestamp:2020-03-11 13:06:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 68e12292-8a79-42d7-8a66-e7f0e179c3aa 0xc002b6e147 0xc002b6e148}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Mar 11 13:06:26.716: INFO: Pod "test-cleanup-controller-pjbxk" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-pjbxk,GenerateName:test-cleanup-controller-,Namespace:deployment-8671,SelfLink:/api/v1/namespaces/deployment-8671/pods/test-cleanup-controller-pjbxk,UID:611e160b-3081-4795-88d6-12358e6362c3,ResourceVersion:542342,Generation:0,CreationTimestamp:2020-03-11 13:06:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 719ab9fa-d1de-49bf-83f6-bff2c47b96a8 0xc002b6ead7 0xc002b6ead8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-b9pgm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-b9pgm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-b9pgm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b6eb50} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b6eb70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:06:21 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:06:23 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:06:23 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:06:21 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.1.55,StartTime:2020-03-11 13:06:21 +0000 UTC,ContainerStatuses:[{nginx {nil 
ContainerStateRunning{StartedAt:2020-03-11 13:06:22 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://f504f99637df97c1724fd8aea3b483befa55156928942ad3c5e585c142f868aa}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 11 13:06:26.717: INFO: Pod "test-cleanup-deployment-55bbcbc84c-xkh8n" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-xkh8n,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-8671,SelfLink:/api/v1/namespaces/deployment-8671/pods/test-cleanup-deployment-55bbcbc84c-xkh8n,UID:cc0effd1-0a4f-44ed-8c97-4bca6c928e9d,ResourceVersion:542355,Generation:0,CreationTimestamp:2020-03-11 13:06:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c 5aed47b8-3bed-485c-bc66-943736370886 0xc002b6ec47 0xc002b6ec48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-b9pgm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-b9pgm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-b9pgm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b6ecb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b6ecd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:06:26.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8671" for this suite. 
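The Deployment dump above shows RevisionHistoryLimit:*0, which is the knob this test exercises: with a zero history limit the controller deletes superseded ReplicaSets as soon as a rollout completes. A minimal sketch of an equivalent manifest (names are illustrative; the image is the one used in this run):

  kubectl apply -f - <<'EOF'
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: cleanup-demo              # illustrative name
  spec:
    replicas: 1
    revisionHistoryLimit: 0         # keep no superseded ReplicaSets around
    selector:
      matchLabels:
        app: cleanup-demo
    template:
      metadata:
        labels:
          app: cleanup-demo
      spec:
        containers:
        - name: redis
          image: gcr.io/kubernetes-e2e-test-images/redis:1.0
  EOF

After updating the pod template, kubectl get rs -l app=cleanup-demo should list only the current ReplicaSet, which is what the test waits for.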
Mar 11 13:06:32.758: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:06:32.834: INFO: namespace deployment-8671 deletion completed in 6.107344098s • [SLOW TEST:11.284 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:06:32.835: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:06:38.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6621" for this suite. 
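This test starts one writer goroutine and several watches from the same resourceVersion, then asserts every watcher observes the events in the same order. To eyeball a raw watch stream against a live cluster (namespace and resource choice are illustrative), the API's watch parameter can be driven through kubectl:

  kubectl get --raw "/api/v1/namespaces/default/configmaps?watch=true&resourceVersion=0"

Each streamed event embeds the object's resourceVersion; comparing those sequences across concurrent watchers is essentially what the assertion does.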
Mar 11 13:06:44.543: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:06:44.619: INFO: namespace watch-6621 deletion completed in 6.166446379s • [SLOW TEST:11.784 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:06:44.619: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Mar 11 13:06:49.206: INFO: Successfully updated pod "labelsupdate755c1250-be3a-46c6-b130-3acf36ad822c" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:06:51.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5220" for this suite. 
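The pod here mounts its own labels through a projected downwardAPI volume, and the "Successfully updated pod" line is the relabel that the mounted file must eventually reflect. A rough equivalent (illustrative names, stock busybox image assumed):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: labels-demo
    labels:
      stage: one
  spec:
    containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; echo; sleep 5; done"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      projected:
        sources:
        - downwardAPI:
            items:
            - path: labels
              fieldRef:
                fieldPath: metadata.labels
  EOF
  kubectl label pod labels-demo stage=two --overwrite   # the mounted file updates on the kubelet's next sync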
Mar 11 13:07:13.240: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:07:13.322: INFO: namespace projected-5220 deletion completed in 22.099372781s • [SLOW TEST:28.704 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:07:13.323: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-b723b803-0ba4-4846-8e4a-d66bdcaf9a4d STEP: Creating a pod to test consume configMaps Mar 11 13:07:13.424: INFO: Waiting up to 5m0s for pod "pod-configmaps-a72e4a85-1a65-43c4-8348-da8dedb4dab2" in namespace "configmap-4680" to be "success or failure" Mar 11 13:07:13.451: INFO: Pod "pod-configmaps-a72e4a85-1a65-43c4-8348-da8dedb4dab2": Phase="Pending", Reason="", readiness=false. Elapsed: 27.513008ms Mar 11 13:07:15.456: INFO: Pod "pod-configmaps-a72e4a85-1a65-43c4-8348-da8dedb4dab2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.032454709s STEP: Saw pod success Mar 11 13:07:15.456: INFO: Pod "pod-configmaps-a72e4a85-1a65-43c4-8348-da8dedb4dab2" satisfied condition "success or failure" Mar 11 13:07:15.459: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-a72e4a85-1a65-43c4-8348-da8dedb4dab2 container configmap-volume-test: STEP: delete the pod Mar 11 13:07:15.487: INFO: Waiting for pod pod-configmaps-a72e4a85-1a65-43c4-8348-da8dedb4dab2 to disappear Mar 11 13:07:15.492: INFO: Pod pod-configmaps-a72e4a85-1a65-43c4-8348-da8dedb4dab2 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:07:15.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4680" for this suite. 
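defaultMode is the field under test here: it sets the permission bits on every file projected from the ConfigMap. A minimal sketch (illustrative names; assumes the stock busybox image, whose stat supports -L and -c):

  kubectl create configmap demo-config --from-literal=data-1=value-1
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: cm-mode-demo
  spec:
    restartPolicy: Never
    containers:
    - name: configmap-volume-test
      image: busybox
      command: ["sh", "-c", "stat -Lc '%a' /etc/config/data-1"]   # expect 400
      volumeMounts:
      - name: cfg
        mountPath: /etc/config
    volumes:
    - name: cfg
      configMap:
        name: demo-config
        defaultMode: 0400
  EOF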
Mar 11 13:07:21.525: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:07:21.613: INFO: namespace configmap-4680 deletion completed in 6.11772221s • [SLOW TEST:8.290 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:07:21.614: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating api versions Mar 11 13:07:21.658: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Mar 11 13:07:21.800: INFO: stderr: "" Mar 11 13:07:21.800: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:07:21.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4353" for this suite. 
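The assertion here is simply that the core group/version appears in discovery. The same check from a shell:

  kubectl api-versions | grep -x v1    # exact-line match; exits non-zero if v1 is missing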
Mar 11 13:07:27.845: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:07:27.928: INFO: namespace kubectl-4353 deletion completed in 6.103876275s • [SLOW TEST:6.314 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl api-versions /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:07:27.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Mar 11 13:07:27.993: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-1457' Mar 11 13:07:29.612: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 11 13:07:29.612: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created [AfterEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426 Mar 11 13:07:29.660: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-1457' Mar 11 13:07:29.782: INFO: stderr: "" Mar 11 13:07:29.782: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:07:29.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1457" for this suite. 
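Note the deprecation warning in the stderr above: on this v1.15 cluster a bare kubectl run still defaults to the deployment/apps.v1 generator, which is why a Deployment gets created. The two replacements the warning points at (names are illustrative; in later releases kubectl run only creates Pods):

  kubectl run e2e-demo --image=docker.io/library/nginx:1.14-alpine --generator=run-pod/v1   # bare Pod
  kubectl create deployment e2e-demo --image=docker.io/library/nginx:1.14-alpine            # explicit Deployment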
Mar 11 13:07:35.796: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:07:35.888: INFO: namespace kubectl-1457 deletion completed in 6.102674756s • [SLOW TEST:7.960 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:07:35.888: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-36b41f68-1316-4f6e-8d7d-a7634100ea1d STEP: Creating a pod to test consume secrets Mar 11 13:07:35.944: INFO: Waiting up to 5m0s for pod "pod-secrets-c420aa0e-25c5-41dc-9f4b-e72079684083" in namespace "secrets-9892" to be "success or failure" Mar 11 13:07:35.947: INFO: Pod "pod-secrets-c420aa0e-25c5-41dc-9f4b-e72079684083": Phase="Pending", Reason="", readiness=false. Elapsed: 3.021891ms Mar 11 13:07:37.952: INFO: Pod "pod-secrets-c420aa0e-25c5-41dc-9f4b-e72079684083": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007410801s STEP: Saw pod success Mar 11 13:07:37.952: INFO: Pod "pod-secrets-c420aa0e-25c5-41dc-9f4b-e72079684083" satisfied condition "success or failure" Mar 11 13:07:37.955: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-c420aa0e-25c5-41dc-9f4b-e72079684083 container secret-volume-test: STEP: delete the pod Mar 11 13:07:37.974: INFO: Waiting for pod pod-secrets-c420aa0e-25c5-41dc-9f4b-e72079684083 to disappear Mar 11 13:07:37.977: INFO: Pod pod-secrets-c420aa0e-25c5-41dc-9f4b-e72079684083 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:07:37.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9892" for this suite. 
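This test mounts one Secret at two paths in the same pod; each volume gets its own projection of the same data. A sketch with illustrative names:

  kubectl create secret generic demo-secret --from-literal=data-1=value-1
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: secret-multi-demo
  spec:
    restartPolicy: Never
    containers:
    - name: secret-volume-test
      image: busybox
      command: ["sh", "-c", "cat /etc/secret-1/data-1 /etc/secret-2/data-1"]
      volumeMounts:
      - name: one
        mountPath: /etc/secret-1
      - name: two
        mountPath: /etc/secret-2
    volumes:
    - name: one
      secret:
        secretName: demo-secret
    - name: two
      secret:
        secretName: demo-secret
  EOF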
Mar 11 13:07:43.992: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:07:44.066: INFO: namespace secrets-9892 deletion completed in 6.084755945s • [SLOW TEST:8.178 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:07:44.066: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Mar 11 13:07:48.197: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 11 13:07:48.212: INFO: Pod pod-with-prestop-exec-hook still exists Mar 11 13:07:50.212: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 11 13:07:50.216: INFO: Pod pod-with-prestop-exec-hook still exists Mar 11 13:07:52.212: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 11 13:07:52.216: INFO: Pod pod-with-prestop-exec-hook still exists Mar 11 13:07:54.212: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 11 13:07:54.215: INFO: Pod pod-with-prestop-exec-hook still exists Mar 11 13:07:56.212: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 11 13:07:56.216: INFO: Pod pod-with-prestop-exec-hook still exists Mar 11 13:07:58.212: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 11 13:07:58.215: INFO: Pod pod-with-prestop-exec-hook still exists Mar 11 13:08:00.212: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 11 13:08:00.216: INFO: Pod pod-with-prestop-exec-hook still exists Mar 11 13:08:02.212: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 11 13:08:02.216: INFO: Pod pod-with-prestop-exec-hook still exists Mar 11 13:08:04.212: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 11 13:08:04.216: INFO: Pod pod-with-prestop-exec-hook still exists Mar 11 13:08:06.212: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 11 13:08:06.216: INFO: Pod pod-with-prestop-exec-hook still exists Mar 11 13:08:08.212: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 11 13:08:08.214: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:08:08.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-4720" for this suite. Mar 11 13:08:30.338: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:08:30.396: INFO: namespace container-lifecycle-hook-4720 deletion completed in 22.176503652s • [SLOW TEST:46.330 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:08:30.396: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-b33e27bf-b128-42ef-8394-2ad7db14f956 STEP: Creating a pod to test consume configMaps Mar 11 13:08:30.506: INFO: Waiting up to 5m0s for pod "pod-configmaps-707f6772-c2bd-4057-bca8-8d1e8f560cf7" in namespace "configmap-8027" to be "success or failure" Mar 11 13:08:30.511: INFO: Pod "pod-configmaps-707f6772-c2bd-4057-bca8-8d1e8f560cf7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07819ms Mar 11 13:08:32.514: INFO: Pod "pod-configmaps-707f6772-c2bd-4057-bca8-8d1e8f560cf7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007955522s STEP: Saw pod success Mar 11 13:08:32.514: INFO: Pod "pod-configmaps-707f6772-c2bd-4057-bca8-8d1e8f560cf7" satisfied condition "success or failure" Mar 11 13:08:32.517: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-707f6772-c2bd-4057-bca8-8d1e8f560cf7 container configmap-volume-test: STEP: delete the pod Mar 11 13:08:32.569: INFO: Waiting for pod pod-configmaps-707f6772-c2bd-4057-bca8-8d1e8f560cf7 to disappear Mar 11 13:08:32.595: INFO: Pod pod-configmaps-707f6772-c2bd-4057-bca8-8d1e8f560cf7 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:08:32.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8027" for this suite. 
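"Mappings" means the volume uses items to project only selected keys and to rename their paths, instead of exposing every key under its own name. Swapping this volumes stanza into the defaultMode sketch above shows the behavior (key and path are illustrative):

  volumes:
  - name: cfg
    configMap:
      name: demo-config
      items:
      - key: data-1
        path: path/to/data-1-renamed   # only this key is projected, at this relative path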
Mar 11 13:08:38.641: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:08:38.708: INFO: namespace configmap-8027 deletion completed in 6.109541404s • [SLOW TEST:8.312 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:08:38.709: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on node default medium Mar 11 13:08:38.751: INFO: Waiting up to 5m0s for pod "pod-d045365b-22ba-4fc8-a8e0-506b3385c895" in namespace "emptydir-711" to be "success or failure" Mar 11 13:08:38.802: INFO: Pod "pod-d045365b-22ba-4fc8-a8e0-506b3385c895": Phase="Pending", Reason="", readiness=false. Elapsed: 50.703189ms Mar 11 13:08:40.806: INFO: Pod "pod-d045365b-22ba-4fc8-a8e0-506b3385c895": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.054757517s STEP: Saw pod success Mar 11 13:08:40.806: INFO: Pod "pod-d045365b-22ba-4fc8-a8e0-506b3385c895" satisfied condition "success or failure" Mar 11 13:08:40.809: INFO: Trying to get logs from node iruya-worker2 pod pod-d045365b-22ba-4fc8-a8e0-506b3385c895 container test-container: STEP: delete the pod Mar 11 13:08:40.825: INFO: Waiting for pod pod-d045365b-22ba-4fc8-a8e0-506b3385c895 to disappear Mar 11 13:08:40.855: INFO: Pod pod-d045365b-22ba-4fc8-a8e0-506b3385c895 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:08:40.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-711" for this suite. 
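The (non-root,0666,default) variant writes a 0666-mode file into a default-medium (node disk) emptyDir as a non-root user and verifies mode and content. A hand-rolled approximation (illustrative names; busybox assumed; emptyDir directories default to world-writable, so uid 1000 can create files):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-mode-demo
  spec:
    restartPolicy: Never
    securityContext:
      runAsUser: 1000                 # the "non-root" part of the test name
    containers:
    - name: test-container
      image: busybox
      command: ["sh", "-c", "touch /mnt/test/f && chmod 0666 /mnt/test/f && stat -c '%a' /mnt/test/f"]
      volumeMounts:
      - name: scratch
        mountPath: /mnt/test
    volumes:
    - name: scratch
      emptyDir: {}                    # "default" medium = backing node storage
  EOF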
Mar 11 13:08:46.869: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:08:46.947: INFO: namespace emptydir-711 deletion completed in 6.088072102s • [SLOW TEST:8.238 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:08:46.947: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 11 13:08:47.034: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3daf0bca-ed4e-4085-85ab-185268aa2b67" in namespace "projected-4793" to be "success or failure" Mar 11 13:08:47.055: INFO: Pod "downwardapi-volume-3daf0bca-ed4e-4085-85ab-185268aa2b67": Phase="Pending", Reason="", readiness=false. Elapsed: 21.668301ms Mar 11 13:08:49.059: INFO: Pod "downwardapi-volume-3daf0bca-ed4e-4085-85ab-185268aa2b67": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.025683182s STEP: Saw pod success Mar 11 13:08:49.059: INFO: Pod "downwardapi-volume-3daf0bca-ed4e-4085-85ab-185268aa2b67" satisfied condition "success or failure" Mar 11 13:08:49.062: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-3daf0bca-ed4e-4085-85ab-185268aa2b67 container client-container: STEP: delete the pod Mar 11 13:08:49.090: INFO: Waiting for pod downwardapi-volume-3daf0bca-ed4e-4085-85ab-185268aa2b67 to disappear Mar 11 13:08:49.118: INFO: Pod downwardapi-volume-3daf0bca-ed4e-4085-85ab-185268aa2b67 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:08:49.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4793" for this suite. 
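When the container declares no CPU limit, the downward API's limits.cpu resolves to the node's allocatable CPU, which is what this test checks. A sketch using a projected downwardAPI source (illustrative names):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downward-cpu-demo
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]   # node allocatable CPU, in millicores
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      projected:
        sources:
        - downwardAPI:
            items:
            - path: cpu_limit
              resourceFieldRef:
                containerName: client-container
                resource: limits.cpu
                divisor: 1m
  EOF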
Mar 11 13:08:55.132: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:08:55.224: INFO: namespace projected-4793 deletion completed in 6.103857664s • [SLOW TEST:8.277 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:08:55.225: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-17a44230-604a-4ddc-98ba-5fb2ebc99291 STEP: Creating secret with name s-test-opt-upd-81770519-0e7a-4329-ad9d-ca98c9ed899e STEP: Creating the pod STEP: Deleting secret s-test-opt-del-17a44230-604a-4ddc-98ba-5fb2ebc99291 STEP: Updating secret s-test-opt-upd-81770519-0e7a-4329-ad9d-ca98c9ed899e STEP: Creating secret with name s-test-opt-create-1d59edce-12b4-4de1-afa1-4799fe1703db STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:08:59.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8199" for this suite. 
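The three STEPs above (delete one secret, update another, create a third) all rely on optional: true plus the kubelet's periodic resync of secret volumes, which is why the test ends by waiting to observe the update in the volume. The relevant stanza looks like (names illustrative):

  volumes:
  - name: creds
    secret:
      secretName: may-not-exist-yet
      optional: true     # the pod starts, and keeps running, even if the Secret is absent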
Mar 11 13:09:21.431: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:09:21.514: INFO: namespace secrets-8199 deletion completed in 22.094594321s • [SLOW TEST:26.288 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:09:21.516: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 11 13:09:21.562: INFO: Waiting up to 5m0s for pod "downwardapi-volume-24d3b84a-07f0-4d0a-babf-035ce4613eca" in namespace "projected-2023" to be "success or failure" Mar 11 13:09:21.572: INFO: Pod "downwardapi-volume-24d3b84a-07f0-4d0a-babf-035ce4613eca": Phase="Pending", Reason="", readiness=false. Elapsed: 10.185935ms Mar 11 13:09:23.576: INFO: Pod "downwardapi-volume-24d3b84a-07f0-4d0a-babf-035ce4613eca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.014007534s STEP: Saw pod success Mar 11 13:09:23.576: INFO: Pod "downwardapi-volume-24d3b84a-07f0-4d0a-babf-035ce4613eca" satisfied condition "success or failure" Mar 11 13:09:23.579: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-24d3b84a-07f0-4d0a-babf-035ce4613eca container client-container: STEP: delete the pod Mar 11 13:09:23.600: INFO: Waiting for pod downwardapi-volume-24d3b84a-07f0-4d0a-babf-035ce4613eca to disappear Mar 11 13:09:23.614: INFO: Pod downwardapi-volume-24d3b84a-07f0-4d0a-babf-035ce4613eca no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:09:23.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2023" for this suite. 
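The memory counterpart of the CPU test above: here the container does set limits.memory, and the projected file must report that value. Relative to the previous sketch, only the container limits and the volume item change (values illustrative):

  # container side:
  resources:
    limits:
      memory: 64Mi
  # downwardAPI item side:
  - path: memory_limit
    resourceFieldRef:
      containerName: client-container
      resource: limits.memory
      divisor: 1Mi          # the projected file then reads 64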
Mar 11 13:09:29.671: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:09:29.770: INFO: namespace projected-2023 deletion completed in 6.153062417s • [SLOW TEST:8.254 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:09:29.771: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on tmpfs Mar 11 13:09:29.833: INFO: Waiting up to 5m0s for pod "pod-39966859-13b6-4fd3-8b31-fbd856d78df0" in namespace "emptydir-603" to be "success or failure" Mar 11 13:09:29.854: INFO: Pod "pod-39966859-13b6-4fd3-8b31-fbd856d78df0": Phase="Pending", Reason="", readiness=false. Elapsed: 20.853348ms Mar 11 13:09:31.859: INFO: Pod "pod-39966859-13b6-4fd3-8b31-fbd856d78df0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.025163038s STEP: Saw pod success Mar 11 13:09:31.859: INFO: Pod "pod-39966859-13b6-4fd3-8b31-fbd856d78df0" satisfied condition "success or failure" Mar 11 13:09:31.862: INFO: Trying to get logs from node iruya-worker pod pod-39966859-13b6-4fd3-8b31-fbd856d78df0 container test-container: STEP: delete the pod Mar 11 13:09:31.916: INFO: Waiting for pod pod-39966859-13b6-4fd3-8b31-fbd856d78df0 to disappear Mar 11 13:09:31.919: INFO: Pod pod-39966859-13b6-4fd3-8b31-fbd856d78df0 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:09:31.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-603" for this suite. 
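The tmpfs variant changes only the medium: the volume is RAM-backed and its contents count against the container's memory. Relative to the emptyDir sketch above:

  volumes:
  - name: scratch
    emptyDir:
      medium: Memory    # tmpfs instead of node disk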
Mar 11 13:09:37.935: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:09:38.065: INFO: namespace emptydir-603 deletion completed in 6.141353s • [SLOW TEST:8.295 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:09:38.066: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-a0e85710-229b-48ff-aaae-ae77ccc72cd6 STEP: Creating a pod to test consume secrets Mar 11 13:09:38.130: INFO: Waiting up to 5m0s for pod "pod-secrets-7d6331a3-a7f8-47c4-9b9e-2bc96163c2bb" in namespace "secrets-702" to be "success or failure" Mar 11 13:09:38.172: INFO: Pod "pod-secrets-7d6331a3-a7f8-47c4-9b9e-2bc96163c2bb": Phase="Pending", Reason="", readiness=false. Elapsed: 42.089582ms Mar 11 13:09:40.176: INFO: Pod "pod-secrets-7d6331a3-a7f8-47c4-9b9e-2bc96163c2bb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.045488652s STEP: Saw pod success Mar 11 13:09:40.176: INFO: Pod "pod-secrets-7d6331a3-a7f8-47c4-9b9e-2bc96163c2bb" satisfied condition "success or failure" Mar 11 13:09:40.178: INFO: Trying to get logs from node iruya-worker pod pod-secrets-7d6331a3-a7f8-47c4-9b9e-2bc96163c2bb container secret-volume-test: STEP: delete the pod Mar 11 13:09:40.209: INFO: Waiting for pod pod-secrets-7d6331a3-a7f8-47c4-9b9e-2bc96163c2bb to disappear Mar 11 13:09:40.213: INFO: Pod pod-secrets-7d6331a3-a7f8-47c4-9b9e-2bc96163c2bb no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:09:40.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-702" for this suite. 
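Here the pod-level securityContext is the interesting part: fsGroup regroups the projected files while defaultMode restricts them, so a non-root reader in that group still succeeds. An illustrative stanza:

  spec:
    securityContext:
      runAsUser: 1000
      fsGroup: 2000            # projected secret files get group 2000
    volumes:
    - name: creds
      secret:
        secretName: demo-secret
        defaultMode: 0440      # r--r----- on each file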
Mar 11 13:09:46.234: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:09:46.312: INFO: namespace secrets-702 deletion completed in 6.095227536s • [SLOW TEST:8.246 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:09:46.312: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-174252e3-836f-4641-b55f-4764b753184f STEP: Creating a pod to test consume secrets Mar 11 13:09:46.406: INFO: Waiting up to 5m0s for pod "pod-secrets-6f6de116-2cd6-4eb2-9dd0-6b6b5d195d5b" in namespace "secrets-813" to be "success or failure" Mar 11 13:09:46.422: INFO: Pod "pod-secrets-6f6de116-2cd6-4eb2-9dd0-6b6b5d195d5b": Phase="Pending", Reason="", readiness=false. Elapsed: 15.81282ms Mar 11 13:09:48.425: INFO: Pod "pod-secrets-6f6de116-2cd6-4eb2-9dd0-6b6b5d195d5b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.01959064s STEP: Saw pod success Mar 11 13:09:48.426: INFO: Pod "pod-secrets-6f6de116-2cd6-4eb2-9dd0-6b6b5d195d5b" satisfied condition "success or failure" Mar 11 13:09:48.428: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-6f6de116-2cd6-4eb2-9dd0-6b6b5d195d5b container secret-volume-test: STEP: delete the pod Mar 11 13:09:48.443: INFO: Waiting for pod pod-secrets-6f6de116-2cd6-4eb2-9dd0-6b6b5d195d5b to disappear Mar 11 13:09:48.446: INFO: Pod pod-secrets-6f6de116-2cd6-4eb2-9dd0-6b6b5d195d5b no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:09:48.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-813" for this suite. 
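Same items mechanism as the ConfigMap mappings test earlier, applied to a Secret volume (names illustrative):

  secret:
    secretName: demo-secret
    items:
    - key: data-1
      path: new/path/data-1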
Mar 11 13:09:54.461: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:09:54.559: INFO: namespace secrets-813 deletion completed in 6.109948168s • [SLOW TEST:8.247 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:09:54.560: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-3aca7675-a18a-4607-b199-c9255686f5e2 STEP: Creating a pod to test consume configMaps Mar 11 13:09:54.643: INFO: Waiting up to 5m0s for pod "pod-configmaps-d330a7fd-3430-4383-baf0-36ffb1971965" in namespace "configmap-4657" to be "success or failure" Mar 11 13:09:54.650: INFO: Pod "pod-configmaps-d330a7fd-3430-4383-baf0-36ffb1971965": Phase="Pending", Reason="", readiness=false. Elapsed: 7.84748ms Mar 11 13:09:56.655: INFO: Pod "pod-configmaps-d330a7fd-3430-4383-baf0-36ffb1971965": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.01212085s STEP: Saw pod success Mar 11 13:09:56.655: INFO: Pod "pod-configmaps-d330a7fd-3430-4383-baf0-36ffb1971965" satisfied condition "success or failure" Mar 11 13:09:56.658: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-d330a7fd-3430-4383-baf0-36ffb1971965 container configmap-volume-test: STEP: delete the pod Mar 11 13:09:56.676: INFO: Waiting for pod pod-configmaps-d330a7fd-3430-4383-baf0-36ffb1971965 to disappear Mar 11 13:09:56.686: INFO: Pod pod-configmaps-d330a7fd-3430-4383-baf0-36ffb1971965 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:09:56.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4657" for this suite. 
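The ConfigMap volume test follows the same pattern with a ConfigMapVolumeSource; each key of the ConfigMap becomes a file under the mount point. A sketch with illustrative names:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Pod consuming a ConfigMap as a volume; note the ConfigMap is referenced
	// through the embedded LocalObjectReference.
	pod := &corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-demo"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{
							Name: "configmap-test-volume",
						},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "configmap-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "ls /etc/configmap-volume && cat /etc/configmap-volume/*"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "configmap-volume",
					MountPath: "/etc/configmap-volume",
				}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```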
Mar 11 13:10:02.702: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:10:02.783: INFO: namespace configmap-4657 deletion completed in 6.093144334s • [SLOW TEST:8.224 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:10:02.784: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token Mar 11 13:10:03.362: INFO: created pod pod-service-account-defaultsa Mar 11 13:10:03.362: INFO: pod pod-service-account-defaultsa service account token volume mount: true Mar 11 13:10:03.369: INFO: created pod pod-service-account-mountsa Mar 11 13:10:03.369: INFO: pod pod-service-account-mountsa service account token volume mount: true Mar 11 13:10:03.388: INFO: created pod pod-service-account-nomountsa Mar 11 13:10:03.388: INFO: pod pod-service-account-nomountsa service account token volume mount: false Mar 11 13:10:03.417: INFO: created pod pod-service-account-defaultsa-mountspec Mar 11 13:10:03.417: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Mar 11 13:10:03.454: INFO: created pod pod-service-account-mountsa-mountspec Mar 11 13:10:03.454: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Mar 11 13:10:03.464: INFO: created pod pod-service-account-nomountsa-mountspec Mar 11 13:10:03.464: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Mar 11 13:10:03.488: INFO: created pod pod-service-account-defaultsa-nomountspec Mar 11 13:10:03.488: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Mar 11 13:10:03.513: INFO: created pod pod-service-account-mountsa-nomountspec Mar 11 13:10:03.513: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Mar 11 13:10:03.544: INFO: created pod pod-service-account-nomountsa-nomountspec Mar 11 13:10:03.545: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:10:03.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-7810" for this suite. 
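The automount test above creates nine pods crossing ServiceAccount-level and pod-level AutomountServiceAccountToken settings. As the log shows (for example, pod-service-account-mountsa-nomountspec gets "token volume mount: false" even though its ServiceAccount allows mounting), the pod spec field wins whenever it is set; the ServiceAccount's setting applies only when the pod leaves it nil. A sketch of the pod-level opt-out, with hypothetical names:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func boolPtr(b bool) *bool { return &b }

func main() {
	// Pod-level AutomountServiceAccountToken overrides whatever the referenced
	// ServiceAccount declares; here no token volume is mounted.
	pod := &corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-nomount-demo"},
		Spec: corev1.PodSpec{
			ServiceAccountName:           "mount-sa",
			AutomountServiceAccountToken: boolPtr(false), // wins over the SA's setting
			Containers: []corev1.Container{{
				Name:  "main",
				Image: "busybox",
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```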
Mar 11 13:10:09.716: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:10:09.805: INFO: namespace svcaccounts-7810 deletion completed in 6.180489079s • [SLOW TEST:7.022 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:10:09.806: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-map-d655a26d-c1a3-4693-8554-66064d5759ba STEP: Creating a pod to test consume secrets Mar 11 13:10:09.886: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-20b88f85-b959-48c2-915d-ec642b681975" in namespace "projected-2645" to be "success or failure" Mar 11 13:10:09.890: INFO: Pod "pod-projected-secrets-20b88f85-b959-48c2-915d-ec642b681975": Phase="Pending", Reason="", readiness=false. Elapsed: 4.492454ms Mar 11 13:10:11.894: INFO: Pod "pod-projected-secrets-20b88f85-b959-48c2-915d-ec642b681975": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008367392s STEP: Saw pod success Mar 11 13:10:11.894: INFO: Pod "pod-projected-secrets-20b88f85-b959-48c2-915d-ec642b681975" satisfied condition "success or failure" Mar 11 13:10:11.898: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-20b88f85-b959-48c2-915d-ec642b681975 container projected-secret-volume-test: STEP: delete the pod Mar 11 13:10:11.922: INFO: Waiting for pod pod-projected-secrets-20b88f85-b959-48c2-915d-ec642b681975 to disappear Mar 11 13:10:11.947: INFO: Pod pod-projected-secrets-20b88f85-b959-48c2-915d-ec642b681975 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:10:11.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2645" for this suite. 
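Projected volumes bundle secret, configMap, and downwardAPI sources into one mount; the per-item Mode on a KeyToPath is the "Item Mode set" in the test name above. A sketch with illustrative names and mode bits:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	// Projected volume that remaps a secret key and sets a per-item file mode.
	pod := &corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-demo"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "projected-secret-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{
									Name: "projected-secret-test-map",
								},
								Items: []corev1.KeyToPath{{
									Key:  "data-1",
									Path: "new-path-data-1",
									Mode: int32Ptr(0400), // per-item mode overrides the volume default
								}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "projected-secret-volume-test",
				Image:   "busybox",
				Command: []string{"cat", "/etc/projected-secret-volume/new-path-data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-secret-volume",
					MountPath: "/etc/projected-secret-volume",
					ReadOnly:  true,
				}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```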
Mar 11 13:10:17.965: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:10:18.067: INFO: namespace projected-2645 deletion completed in 6.116513414s • [SLOW TEST:8.261 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:10:18.068: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-d96f3ac7-d71b-42b9-bfd1-59ec2009ec53 STEP: Creating a pod to test consume secrets Mar 11 13:10:18.136: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-cd8fd641-a3d1-4893-9732-8aa36364747f" in namespace "projected-2421" to be "success or failure" Mar 11 13:10:18.157: INFO: Pod "pod-projected-secrets-cd8fd641-a3d1-4893-9732-8aa36364747f": Phase="Pending", Reason="", readiness=false. Elapsed: 20.900754ms Mar 11 13:10:20.161: INFO: Pod "pod-projected-secrets-cd8fd641-a3d1-4893-9732-8aa36364747f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.024560069s STEP: Saw pod success Mar 11 13:10:20.161: INFO: Pod "pod-projected-secrets-cd8fd641-a3d1-4893-9732-8aa36364747f" satisfied condition "success or failure" Mar 11 13:10:20.163: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-cd8fd641-a3d1-4893-9732-8aa36364747f container projected-secret-volume-test: STEP: delete the pod Mar 11 13:10:20.200: INFO: Waiting for pod pod-projected-secrets-cd8fd641-a3d1-4893-9732-8aa36364747f to disappear Mar 11 13:10:20.209: INFO: Pod pod-projected-secrets-cd8fd641-a3d1-4893-9732-8aa36364747f no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:10:20.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2421" for this suite. 
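The non-root/fsGroup variant of the projected test sets the mode on the ProjectedVolumeSource itself (DefaultMode) and relies on the pod security context for ownership, mirroring the plain secret-volume sketch earlier. A sketch, again with illustrative names:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }
func int64Ptr(i int64) *int64 { return &i }

func main() {
	// DefaultMode lives on the ProjectedVolumeSource; RunAsUser/FSGroup on the
	// pod security context control who can read the projected files.
	pod := &corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-nonroot-demo"},
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser: int64Ptr(1000),
				FSGroup:   int64Ptr(1001),
			},
			Volumes: []corev1.Volume{{
				Name: "projected-secret-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						DefaultMode: int32Ptr(0440),
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{
									Name: "projected-secret-test",
								},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "projected-secret-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "ls -l /etc/projected-secret-volume"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-secret-volume",
					MountPath: "/etc/projected-secret-volume",
					ReadOnly:  true,
				}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```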
Mar 11 13:10:26.236: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:10:26.304: INFO: namespace projected-2421 deletion completed in 6.090960636s • [SLOW TEST:8.236 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:10:26.304: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 11 13:10:26.414: INFO: Waiting up to 5m0s for pod "downwardapi-volume-701d9eb2-f6d4-4fbd-9c73-86834e935814" in namespace "downward-api-2616" to be "success or failure" Mar 11 13:10:26.420: INFO: Pod "downwardapi-volume-701d9eb2-f6d4-4fbd-9c73-86834e935814": Phase="Pending", Reason="", readiness=false. Elapsed: 5.942967ms Mar 11 13:10:28.424: INFO: Pod "downwardapi-volume-701d9eb2-f6d4-4fbd-9c73-86834e935814": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009940586s STEP: Saw pod success Mar 11 13:10:28.424: INFO: Pod "downwardapi-volume-701d9eb2-f6d4-4fbd-9c73-86834e935814" satisfied condition "success or failure" Mar 11 13:10:28.427: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-701d9eb2-f6d4-4fbd-9c73-86834e935814 container client-container: STEP: delete the pod Mar 11 13:10:28.461: INFO: Waiting for pod downwardapi-volume-701d9eb2-f6d4-4fbd-9c73-86834e935814 to disappear Mar 11 13:10:28.466: INFO: Pod downwardapi-volume-701d9eb2-f6d4-4fbd-9c73-86834e935814 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:10:28.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2616" for this suite. 
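The Downward API volume test exposes limits.memory through a ResourceFieldRef, so the container can read its own memory limit from a file; for that to resolve, the container must actually declare the limit. A sketch with an illustrative limit and file path:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Downward API volume exposing the container's memory limit as a file.
	pod := &corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "memory_limit",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.memory",
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"cat", "/etc/podinfo/memory_limit"},
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{
						corev1.ResourceMemory: resource.MustParse("64Mi"),
					},
				},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```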
Mar 11 13:10:34.480: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:10:34.569: INFO: namespace downward-api-2616 deletion completed in 6.100338699s • [SLOW TEST:8.266 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:10:34.570: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Mar 11 13:10:34.633: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-6657,SelfLink:/api/v1/namespaces/watch-6657/configmaps/e2e-watch-test-configmap-a,UID:64bc59ec-520d-4096-b3b1-73c1ab230781,ResourceVersion:543549,Generation:0,CreationTimestamp:2020-03-11 13:10:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 11 13:10:34.633: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-6657,SelfLink:/api/v1/namespaces/watch-6657/configmaps/e2e-watch-test-configmap-a,UID:64bc59ec-520d-4096-b3b1-73c1ab230781,ResourceVersion:543549,Generation:0,CreationTimestamp:2020-03-11 13:10:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification Mar 11 13:10:44.640: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-6657,SelfLink:/api/v1/namespaces/watch-6657/configmaps/e2e-watch-test-configmap-a,UID:64bc59ec-520d-4096-b3b1-73c1ab230781,ResourceVersion:543569,Generation:0,CreationTimestamp:2020-03-11 13:10:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Mar 11 13:10:44.640: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-6657,SelfLink:/api/v1/namespaces/watch-6657/configmaps/e2e-watch-test-configmap-a,UID:64bc59ec-520d-4096-b3b1-73c1ab230781,ResourceVersion:543569,Generation:0,CreationTimestamp:2020-03-11 13:10:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Mar 11 13:10:54.647: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-6657,SelfLink:/api/v1/namespaces/watch-6657/configmaps/e2e-watch-test-configmap-a,UID:64bc59ec-520d-4096-b3b1-73c1ab230781,ResourceVersion:543589,Generation:0,CreationTimestamp:2020-03-11 13:10:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 11 13:10:54.647: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-6657,SelfLink:/api/v1/namespaces/watch-6657/configmaps/e2e-watch-test-configmap-a,UID:64bc59ec-520d-4096-b3b1-73c1ab230781,ResourceVersion:543589,Generation:0,CreationTimestamp:2020-03-11 13:10:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification Mar 11 13:11:04.652: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-6657,SelfLink:/api/v1/namespaces/watch-6657/configmaps/e2e-watch-test-configmap-a,UID:64bc59ec-520d-4096-b3b1-73c1ab230781,ResourceVersion:543611,Generation:0,CreationTimestamp:2020-03-11 13:10:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 11 13:11:04.652: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-6657,SelfLink:/api/v1/namespaces/watch-6657/configmaps/e2e-watch-test-configmap-a,UID:64bc59ec-520d-4096-b3b1-73c1ab230781,ResourceVersion:543611,Generation:0,CreationTimestamp:2020-03-11 13:10:34 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Mar 11 13:11:14.660: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-6657,SelfLink:/api/v1/namespaces/watch-6657/configmaps/e2e-watch-test-configmap-b,UID:9646f4b9-3686-48cb-b641-30a86f64cdfc,ResourceVersion:543632,Generation:0,CreationTimestamp:2020-03-11 13:11:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 11 13:11:14.660: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-6657,SelfLink:/api/v1/namespaces/watch-6657/configmaps/e2e-watch-test-configmap-b,UID:9646f4b9-3686-48cb-b641-30a86f64cdfc,ResourceVersion:543632,Generation:0,CreationTimestamp:2020-03-11 13:11:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Mar 11 13:11:24.665: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-6657,SelfLink:/api/v1/namespaces/watch-6657/configmaps/e2e-watch-test-configmap-b,UID:9646f4b9-3686-48cb-b641-30a86f64cdfc,ResourceVersion:543652,Generation:0,CreationTimestamp:2020-03-11 13:11:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 11 13:11:24.665: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-6657,SelfLink:/api/v1/namespaces/watch-6657/configmaps/e2e-watch-test-configmap-b,UID:9646f4b9-3686-48cb-b641-30a86f64cdfc,ResourceVersion:543652,Generation:0,CreationTimestamp:2020-03-11 13:11:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:11:34.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6657" for this suite. 
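The watch test above registers label-selected watches on ConfigMaps and asserts which watchers receive each ADDED/MODIFIED/DELETED notification. A sketch of one such watcher in client-go; the Watch signature matches the pre-0.18 client-go contemporary with this v1.15 log (newer releases take a context.Context first), and the label selector is the one visible in the dumps above:

```go
package main

import (
	"fmt"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// Out-of-cluster client from the local kubeconfig.
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)

	// Label-selected watch, as the test's "watcher A" does.
	w, err := clientset.CoreV1().ConfigMaps("default").Watch(metav1.ListOptions{
		LabelSelector: "watch-this-configmap=multiple-watchers-A",
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	// Each notification carries the event type and the full object, which is
	// exactly what the test dumps into the log above.
	for event := range w.ResultChan() {
		fmt.Printf("Got : %s %v\n", event.Type, event.Object)
	}
}
```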
Mar 11 13:11:40.686: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:11:40.763: INFO: namespace watch-6657 deletion completed in 6.094727232s • [SLOW TEST:66.194 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:11:40.763: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Mar 11 13:11:40.835: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-3503,SelfLink:/api/v1/namespaces/watch-3503/configmaps/e2e-watch-test-watch-closed,UID:4026350b-f573-4dd9-b430-796f11b3cc11,ResourceVersion:543692,Generation:0,CreationTimestamp:2020-03-11 13:11:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 11 13:11:40.835: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-3503,SelfLink:/api/v1/namespaces/watch-3503/configmaps/e2e-watch-test-watch-closed,UID:4026350b-f573-4dd9-b430-796f11b3cc11,ResourceVersion:543693,Generation:0,CreationTimestamp:2020-03-11 13:11:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Mar 11 13:11:40.847: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-3503,SelfLink:/api/v1/namespaces/watch-3503/configmaps/e2e-watch-test-watch-closed,UID:4026350b-f573-4dd9-b430-796f11b3cc11,ResourceVersion:543694,Generation:0,CreationTimestamp:2020-03-11 13:11:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 11 13:11:40.848: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-3503,SelfLink:/api/v1/namespaces/watch-3503/configmaps/e2e-watch-test-watch-closed,UID:4026350b-f573-4dd9-b430-796f11b3cc11,ResourceVersion:543695,Generation:0,CreationTimestamp:2020-03-11 13:11:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:11:40.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3503" for this suite. Mar 11 13:11:46.860: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:11:46.940: INFO: namespace watch-3503 deletion completed in 6.088693615s • [SLOW TEST:6.177 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:11:46.940: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 11 13:11:49.046: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] 
Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:11:49.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2829" for this suite. Mar 11 13:11:55.075: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:11:55.144: INFO: namespace container-runtime-2829 deletion completed in 6.078392487s • [SLOW TEST:8.204 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:11:55.144: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-f772cf8e-6aae-4396-9662-81b44150b736 in namespace container-probe-453 Mar 11 13:11:57.201: INFO: Started pod busybox-f772cf8e-6aae-4396-9662-81b44150b736 in namespace container-probe-453 STEP: checking the pod's current state and verifying that restartCount is present Mar 11 13:11:57.204: INFO: Initial restart count of pod busybox-f772cf8e-6aae-4396-9662-81b44150b736 is 0 Mar 11 13:12:47.299: INFO: Restart count of pod container-probe-453/busybox-f772cf8e-6aae-4396-9662-81b44150b736 is now 1 (50.095698744s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:12:47.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-453" for this suite. 
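The exec liveness probe test gives the container a command that creates /tmp/health and later removes it, so `cat /tmp/health` starts failing and the kubelet restarts the container (the restartCount 0 to 1 transition logged above). A sketch; the timings and image are illustrative, and the Probe's embedded handler struct is named Handler in the v1.15-era API used here (ProbeHandler in later k8s.io/api releases):

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Container that is healthy for a while, then deletes its own health file
	// so the exec probe fails and triggers a restart.
	pod := &corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-exec-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "busybox",
				Image: "busybox",
				Command: []string{"sh", "-c",
					"touch /tmp/health; sleep 10; rm -f /tmp/health; sleep 600"},
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{ // ProbeHandler in newer k8s.io/api
						Exec: &corev1.ExecAction{
							Command: []string{"cat", "/tmp/health"},
						},
					},
					InitialDelaySeconds: 5,
					PeriodSeconds:       5,
					FailureThreshold:    1,
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```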
Mar 11 13:12:53.345: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:12:53.420: INFO: namespace container-probe-453 deletion completed in 6.106327224s • [SLOW TEST:58.276 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:12:53.420: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 11 13:12:53.481: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:12:55.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1423" for this suite. 
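The websocket test retrieves container logs through the pod's "log" subresource over a websocket transport. The sketch below hits the same subresource with client-go's ordinary streaming request (pre-0.18 signature, matching this log's era; Stream takes a context in newer releases); pod and namespace names are hypothetical:

```go
package main

import (
	"io"
	"os"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)

	// GetLogs builds a request against /api/v1/namespaces/{ns}/pods/{pod}/log,
	// the same endpoint the e2e test drives over a websocket.
	req := clientset.CoreV1().Pods("default").GetLogs("pod-logs-demo", &corev1.PodLogOptions{})
	rc, err := req.Stream()
	if err != nil {
		panic(err)
	}
	defer rc.Close()
	io.Copy(os.Stdout, rc) // copy the container's log stream to stdout
}
```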
Mar 11 13:13:33.549: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:13:33.624: INFO: namespace pods-1423 deletion completed in 38.086832629s • [SLOW TEST:40.204 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:13:33.624: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-6aeb4cba-e12b-493c-9447-1a653cb0ac1c in namespace container-probe-1111 Mar 11 13:13:35.701: INFO: Started pod liveness-6aeb4cba-e12b-493c-9447-1a653cb0ac1c in namespace container-probe-1111 STEP: checking the pod's current state and verifying that restartCount is present Mar 11 13:13:35.704: INFO: Initial restart count of pod liveness-6aeb4cba-e12b-493c-9447-1a653cb0ac1c is 0 Mar 11 13:13:53.744: INFO: Restart count of pod container-probe-1111/liveness-6aeb4cba-e12b-493c-9447-1a653cb0ac1c is now 1 (18.039892075s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:13:53.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1111" for this suite. 
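The HTTP liveness variant probes GET /healthz on the container; once the server starts failing that endpoint, the kubelet restarts the container, which is the restart counted in the log above. A sketch with an illustrative image and port (the real test uses a server that deliberately fails /healthz after a while):

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// HTTP liveness probe against /healthz on port 8080.
	pod := &corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-http-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "liveness",
				Image: "k8s.gcr.io/liveness", // illustrative image
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{ // ProbeHandler in newer k8s.io/api
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/healthz",
							Port: intstr.FromInt(8080),
						},
					},
					InitialDelaySeconds: 15,
					PeriodSeconds:       3,
					FailureThreshold:    1,
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```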
Mar 11 13:13:59.807: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:13:59.880: INFO: namespace container-probe-1111 deletion completed in 6.115405516s • [SLOW TEST:26.256 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:13:59.880: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: executing a command with run --rm and attach with stdin Mar 11 13:13:59.931: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2230 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Mar 11 13:14:01.739: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0311 13:14:01.694823 996 log.go:172] (0xc0008460b0) (0xc0006d8140) Create stream\nI0311 13:14:01.694878 996 log.go:172] (0xc0008460b0) (0xc0006d8140) Stream added, broadcasting: 1\nI0311 13:14:01.699554 996 log.go:172] (0xc0008460b0) Reply frame received for 1\nI0311 13:14:01.699609 996 log.go:172] (0xc0008460b0) (0xc0006d8000) Create stream\nI0311 13:14:01.699624 996 log.go:172] (0xc0008460b0) (0xc0006d8000) Stream added, broadcasting: 3\nI0311 13:14:01.700490 996 log.go:172] (0xc0008460b0) Reply frame received for 3\nI0311 13:14:01.700528 996 log.go:172] (0xc0008460b0) (0xc000344000) Create stream\nI0311 13:14:01.700540 996 log.go:172] (0xc0008460b0) (0xc000344000) Stream added, broadcasting: 5\nI0311 13:14:01.701509 996 log.go:172] (0xc0008460b0) Reply frame received for 5\nI0311 13:14:01.701557 996 log.go:172] (0xc0008460b0) (0xc00034a000) Create stream\nI0311 13:14:01.701568 996 log.go:172] (0xc0008460b0) (0xc00034a000) Stream added, broadcasting: 7\nI0311 13:14:01.702569 996 log.go:172] (0xc0008460b0) Reply frame received for 7\nI0311 13:14:01.702682 996 log.go:172] (0xc0006d8000) (3) Writing data frame\nI0311 13:14:01.702789 996 log.go:172] (0xc0006d8000) (3) Writing data frame\nI0311 13:14:01.703694 996 log.go:172] (0xc0008460b0) Data frame received for 5\nI0311 13:14:01.703716 996 log.go:172] (0xc000344000) (5) Data frame handling\nI0311 13:14:01.703729 996 log.go:172] (0xc000344000) (5) Data frame sent\nI0311 13:14:01.704375 996 log.go:172] (0xc0008460b0) Data frame received for 5\nI0311 13:14:01.704396 996 log.go:172] (0xc000344000) (5) Data frame handling\nI0311 13:14:01.704415 996 log.go:172] (0xc000344000) (5) Data frame sent\nI0311 13:14:01.720543 996 log.go:172] (0xc0008460b0) Data frame received for 7\nI0311 13:14:01.720623 996 log.go:172] (0xc00034a000) (7) Data frame handling\nI0311 13:14:01.720688 996 log.go:172] (0xc0008460b0) Data frame received for 5\nI0311 13:14:01.720733 996 log.go:172] (0xc000344000) (5) Data frame handling\nI0311 13:14:01.721719 996 log.go:172] (0xc0008460b0) Data frame received for 1\nI0311 13:14:01.721745 996 log.go:172] (0xc0006d8140) (1) Data frame handling\nI0311 13:14:01.721760 996 log.go:172] (0xc0006d8140) (1) Data frame sent\nI0311 13:14:01.721779 996 log.go:172] (0xc0008460b0) (0xc0006d8000) Stream removed, broadcasting: 3\nI0311 13:14:01.721844 996 log.go:172] (0xc0008460b0) (0xc0006d8140) Stream removed, broadcasting: 1\nI0311 13:14:01.721877 996 log.go:172] (0xc0008460b0) Go away received\nI0311 13:14:01.721966 996 log.go:172] (0xc0008460b0) (0xc0006d8140) Stream removed, broadcasting: 1\nI0311 13:14:01.721990 996 log.go:172] (0xc0008460b0) (0xc0006d8000) Stream removed, broadcasting: 3\nI0311 13:14:01.722003 996 log.go:172] (0xc0008460b0) (0xc000344000) Stream removed, broadcasting: 5\nI0311 13:14:01.722019 996 log.go:172] (0xc0008460b0) (0xc00034a000) Stream removed, broadcasting: 7\n" Mar 11 13:14:01.739: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:14:03.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2230" for this suite. 
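The stderr captured above marks this test as historical: the job/v1 generator for `kubectl run` was deprecated and has since been removed. Below is roughly the batch/v1 Job that invocation produced, as a hedged sketch; Stdin and StdinOnce are what make the attach-then-close-stdin flow ("abcd1234stdin closed") work:

```go
package main

import (
	"encoding/json"
	"fmt"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Approximation of the Job created by
	// `kubectl run --generator=job/v1 --restart=OnFailure --stdin --attach`.
	job := &batchv1.Job{
		TypeMeta:   metav1.TypeMeta{APIVersion: "batch/v1", Kind: "Job"},
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-rm-busybox-job"},
		Spec: batchv1.JobSpec{
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					RestartPolicy: corev1.RestartPolicyOnFailure,
					Containers: []corev1.Container{{
						Name:      "e2e-test-rm-busybox-job",
						Image:     "docker.io/library/busybox:1.29",
						Command:   []string{"sh", "-c", "cat && echo 'stdin closed'"},
						Stdin:     true,
						StdinOnce: true, // close stdin once the first attach detaches
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(job, "", "  ")
	fmt.Println(string(out))
}
```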
Mar 11 13:14:15.774: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:14:15.861: INFO: namespace kubectl-2230 deletion completed in 12.112150215s • [SLOW TEST:15.981 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:14:15.862: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 11 13:14:15.939: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Mar 11 13:14:15.949: INFO: Pod name sample-pod: Found 0 pods out of 1 Mar 11 13:14:20.954: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 11 13:14:20.954: INFO: Creating deployment "test-rolling-update-deployment" Mar 11 13:14:20.958: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Mar 11 13:14:20.965: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Mar 11 13:14:22.972: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Mar 11 13:14:22.974: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Mar 11 13:14:22.983: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-6397,SelfLink:/apis/apps/v1/namespaces/deployment-6397/deployments/test-rolling-update-deployment,UID:a544ff44-ecfa-4add-b2f0-f772d07e2f55,ResourceVersion:544205,Generation:1,CreationTimestamp:2020-03-11 13:14:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: 
sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-03-11 13:14:21 +0000 UTC 2020-03-11 13:14:21 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-03-11 13:14:22 +0000 UTC 2020-03-11 13:14:20 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Mar 11 13:14:22.986: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-6397,SelfLink:/apis/apps/v1/namespaces/deployment-6397/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:d7ddd60e-65c7-4bd2-ab1e-a5687b885aec,ResourceVersion:544194,Generation:1,CreationTimestamp:2020-03-11 13:14:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment a544ff44-ecfa-4add-b2f0-f772d07e2f55 0xc0027fa2d7 0xc0027fa2d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash:
79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Mar 11 13:14:22.986: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Mar 11 13:14:22.986: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-6397,SelfLink:/apis/apps/v1/namespaces/deployment-6397/replicasets/test-rolling-update-controller,UID:2f3ed4b6-b5ff-473d-9982-1f0457c0bdbd,ResourceVersion:544203,Generation:2,CreationTimestamp:2020-03-11 13:14:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment a544ff44-ecfa-4add-b2f0-f772d07e2f55 0xc0027fa207 0xc0027fa208}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 11 13:14:22.989: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-vhx8c" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-vhx8c,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-6397,SelfLink:/api/v1/namespaces/deployment-6397/pods/test-rolling-update-deployment-79f6b9d75c-vhx8c,UID:409ab3dc-fc5e-4eb5-b763-cf432d8f39c1,ResourceVersion:544193,Generation:0,CreationTimestamp:2020-03-11 13:14:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c d7ddd60e-65c7-4bd2-ab1e-a5687b885aec 0xc00203fcc7 0xc00203fcc8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rkgmc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rkgmc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-rkgmc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00203fd40} {node.kubernetes.io/unreachable Exists NoExecute 0xc00203fd60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:14:21 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:14:22 +0000 UTC } 
{ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:14:22 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:14:20 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.7,PodIP:10.244.2.225,StartTime:2020-03-11 13:14:21 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-03-11 13:14:22 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://67d0d178d828995361d1c1cb4f4d6008fd6bb9391b1029c59abedaf68ed6781a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:14:22.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6397" for this suite. Mar 11 13:14:29.000: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:14:29.063: INFO: namespace deployment-6397 deletion completed in 6.071641846s • [SLOW TEST:13.201 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:14:29.063: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Mar 11 13:14:29.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-4029' Mar 11 13:14:29.211: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 11 13:14:29.211: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562 Mar 11 13:14:31.299: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-4029' Mar 11 13:14:31.381: INFO: stderr: "" Mar 11 13:14:31.381: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:14:31.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4029" for this suite. Mar 11 13:14:37.407: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:14:37.474: INFO: namespace kubectl-4029 deletion completed in 6.089793513s • [SLOW TEST:8.411 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:14:37.474: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 11 13:14:37.556: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
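For reference, the kubectl run --generator=deployment/apps.v1 invocation in the kubectl test above is the deprecated form; the warning it prints names the replacement. A minimal sketch of both, reusing the image and namespace from the log:

    # Deprecated generator form used by the test (still accepted on v1.15, with a warning):
    kubectl run e2e-test-nginx-deployment \
      --image=docker.io/library/nginx:1.14-alpine \
      --generator=deployment/apps.v1 --namespace=kubectl-4029

    # Replacement suggested by the warning:
    kubectl create deployment e2e-test-nginx-deployment \
      --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-4029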
Mar 11 13:14:37.564: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 13:14:37.594: INFO: Number of nodes with available pods: 0 Mar 11 13:14:37.595: INFO: Node iruya-worker is running more than one daemon pod Mar 11 13:14:38.615: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 13:14:38.617: INFO: Number of nodes with available pods: 0 Mar 11 13:14:38.617: INFO: Node iruya-worker is running more than one daemon pod Mar 11 13:14:39.599: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 13:14:39.603: INFO: Number of nodes with available pods: 1 Mar 11 13:14:39.603: INFO: Node iruya-worker2 is running more than one daemon pod Mar 11 13:14:40.599: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 13:14:40.602: INFO: Number of nodes with available pods: 2 Mar 11 13:14:40.602: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Mar 11 13:14:40.627: INFO: Wrong image for pod: daemon-set-b4fgl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 11 13:14:40.627: INFO: Wrong image for pod: daemon-set-zl45k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 11 13:14:40.649: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 13:14:41.653: INFO: Wrong image for pod: daemon-set-b4fgl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 11 13:14:41.653: INFO: Wrong image for pod: daemon-set-zl45k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 11 13:14:41.655: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 13:14:42.653: INFO: Wrong image for pod: daemon-set-b4fgl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 11 13:14:42.653: INFO: Pod daemon-set-b4fgl is not available Mar 11 13:14:42.653: INFO: Wrong image for pod: daemon-set-zl45k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 11 13:14:42.656: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 13:14:43.653: INFO: Wrong image for pod: daemon-set-b4fgl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 11 13:14:43.653: INFO: Pod daemon-set-b4fgl is not available Mar 11 13:14:43.653: INFO: Wrong image for pod: daemon-set-zl45k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Mar 11 13:14:43.657: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 13:14:44.653: INFO: Wrong image for pod: daemon-set-b4fgl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 11 13:14:44.653: INFO: Pod daemon-set-b4fgl is not available Mar 11 13:14:44.653: INFO: Wrong image for pod: daemon-set-zl45k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 11 13:14:44.656: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 13:14:45.653: INFO: Wrong image for pod: daemon-set-b4fgl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 11 13:14:45.653: INFO: Pod daemon-set-b4fgl is not available Mar 11 13:14:45.653: INFO: Wrong image for pod: daemon-set-zl45k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 11 13:14:45.657: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 13:14:46.653: INFO: Wrong image for pod: daemon-set-b4fgl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 11 13:14:46.653: INFO: Pod daemon-set-b4fgl is not available Mar 11 13:14:46.653: INFO: Wrong image for pod: daemon-set-zl45k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 11 13:14:46.657: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 13:14:47.654: INFO: Wrong image for pod: daemon-set-b4fgl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 11 13:14:47.654: INFO: Pod daemon-set-b4fgl is not available Mar 11 13:14:47.654: INFO: Wrong image for pod: daemon-set-zl45k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 11 13:14:47.658: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 13:14:48.654: INFO: Wrong image for pod: daemon-set-b4fgl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 11 13:14:48.654: INFO: Pod daemon-set-b4fgl is not available Mar 11 13:14:48.654: INFO: Wrong image for pod: daemon-set-zl45k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 11 13:14:48.657: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 13:14:49.654: INFO: Wrong image for pod: daemon-set-b4fgl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 11 13:14:49.654: INFO: Pod daemon-set-b4fgl is not available Mar 11 13:14:49.654: INFO: Wrong image for pod: daemon-set-zl45k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Mar 11 13:14:49.657: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 13:14:50.653: INFO: Wrong image for pod: daemon-set-b4fgl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 11 13:14:50.653: INFO: Pod daemon-set-b4fgl is not available Mar 11 13:14:50.653: INFO: Wrong image for pod: daemon-set-zl45k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 11 13:14:50.657: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 13:14:51.654: INFO: Wrong image for pod: daemon-set-b4fgl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 11 13:14:51.654: INFO: Pod daemon-set-b4fgl is not available Mar 11 13:14:51.654: INFO: Wrong image for pod: daemon-set-zl45k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 11 13:14:51.658: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 13:14:52.654: INFO: Wrong image for pod: daemon-set-b4fgl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 11 13:14:52.654: INFO: Pod daemon-set-b4fgl is not available Mar 11 13:14:52.654: INFO: Wrong image for pod: daemon-set-zl45k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 11 13:14:52.658: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 13:14:53.654: INFO: Wrong image for pod: daemon-set-b4fgl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 11 13:14:53.654: INFO: Pod daemon-set-b4fgl is not available Mar 11 13:14:53.654: INFO: Wrong image for pod: daemon-set-zl45k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 11 13:14:53.657: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 13:14:54.653: INFO: Pod daemon-set-2vth8 is not available Mar 11 13:14:54.653: INFO: Wrong image for pod: daemon-set-zl45k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 11 13:14:54.656: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 13:14:55.654: INFO: Wrong image for pod: daemon-set-zl45k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 11 13:14:55.657: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 13:14:56.654: INFO: Wrong image for pod: daemon-set-zl45k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Mar 11 13:14:56.658: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 13:14:57.654: INFO: Wrong image for pod: daemon-set-zl45k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 11 13:14:57.654: INFO: Pod daemon-set-zl45k is not available Mar 11 13:14:57.657: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 13:14:58.653: INFO: Wrong image for pod: daemon-set-zl45k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 11 13:14:58.653: INFO: Pod daemon-set-zl45k is not available Mar 11 13:14:58.656: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 13:14:59.652: INFO: Wrong image for pod: daemon-set-zl45k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 11 13:14:59.652: INFO: Pod daemon-set-zl45k is not available Mar 11 13:14:59.654: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 13:15:00.653: INFO: Wrong image for pod: daemon-set-zl45k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 11 13:15:00.653: INFO: Pod daemon-set-zl45k is not available Mar 11 13:15:00.657: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 13:15:01.654: INFO: Wrong image for pod: daemon-set-zl45k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 11 13:15:01.654: INFO: Pod daemon-set-zl45k is not available Mar 11 13:15:01.658: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 13:15:02.711: INFO: Wrong image for pod: daemon-set-zl45k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 11 13:15:02.711: INFO: Pod daemon-set-zl45k is not available Mar 11 13:15:02.715: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 13:15:03.654: INFO: Wrong image for pod: daemon-set-zl45k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Mar 11 13:15:03.654: INFO: Pod daemon-set-zl45k is not available Mar 11 13:15:03.658: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 13:15:04.653: INFO: Pod daemon-set-xkgs2 is not available Mar 11 13:15:04.656: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
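The repeated "DaemonSet pods can't tolerate node iruya-control-plane" lines above are the test skipping the control-plane node, which carries the node-role.kubernetes.io/master:NoSchedule taint shown in the message. A minimal sketch of how to inspect that taint and what a matching toleration looks like (the pod-spec fragment is illustrative, not part of the test's manifest):

    # Inspect the taint reported in the log:
    kubectl describe node iruya-control-plane | grep -A1 Taints

    # Pod-spec fragment that would tolerate it (illustrative):
    #   tolerations:
    #   - key: node-role.kubernetes.io/master
    #     operator: Exists
    #     effect: NoSchedule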
Mar 11 13:15:04.660: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 13:15:04.663: INFO: Number of nodes with available pods: 1 Mar 11 13:15:04.663: INFO: Node iruya-worker2 is running more than one daemon pod Mar 11 13:15:05.668: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 13:15:05.671: INFO: Number of nodes with available pods: 1 Mar 11 13:15:05.671: INFO: Node iruya-worker2 is running more than one daemon pod Mar 11 13:15:06.666: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 13:15:06.692: INFO: Number of nodes with available pods: 2 Mar 11 13:15:06.692: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-627, will wait for the garbage collector to delete the pods Mar 11 13:15:06.761: INFO: Deleting DaemonSet.extensions daemon-set took: 5.582402ms Mar 11 13:15:07.062: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.249221ms Mar 11 13:15:14.565: INFO: Number of nodes with available pods: 0 Mar 11 13:15:14.565: INFO: Number of running nodes: 0, number of available pods: 0 Mar 11 13:15:14.568: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-627/daemonsets","resourceVersion":"544452"},"items":null} Mar 11 13:15:14.571: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-627/pods","resourceVersion":"544452"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:15:14.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-627" for this suite. 
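The update phase above (one pod reported "not available" at a time, then the next node) is the DaemonSet RollingUpdate strategy replacing pods node by node, with maxUnavailable defaulting to 1. A minimal sketch of reproducing the same image change by hand; the DaemonSet name, namespace, and target image match the log, while the container name "app" is illustrative since the log does not show it:

    kubectl -n daemonsets-627 set image daemonset/daemon-set \
      app=gcr.io/kubernetes-e2e-test-images/redis:1.0
    # Watch the node-by-node replacement seen in the log:
    kubectl -n daemonsets-627 rollout status daemonset/daemon-set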
Mar 11 13:15:20.605: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:15:20.675: INFO: namespace daemonsets-627 deletion completed in 6.090969734s • [SLOW TEST:43.201 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:15:20.675: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-65cbca8c-e8e1-48cb-a55b-0cf84f776cd1 STEP: Creating a pod to test consume configMaps Mar 11 13:15:20.765: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-31517deb-dbba-42e5-be06-d45ed2896a12" in namespace "projected-7988" to be "success or failure" Mar 11 13:15:20.772: INFO: Pod "pod-projected-configmaps-31517deb-dbba-42e5-be06-d45ed2896a12": Phase="Pending", Reason="", readiness=false. Elapsed: 6.532451ms Mar 11 13:15:22.775: INFO: Pod "pod-projected-configmaps-31517deb-dbba-42e5-be06-d45ed2896a12": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010005703s Mar 11 13:15:24.779: INFO: Pod "pod-projected-configmaps-31517deb-dbba-42e5-be06-d45ed2896a12": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01391215s STEP: Saw pod success Mar 11 13:15:24.779: INFO: Pod "pod-projected-configmaps-31517deb-dbba-42e5-be06-d45ed2896a12" satisfied condition "success or failure" Mar 11 13:15:24.782: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-31517deb-dbba-42e5-be06-d45ed2896a12 container projected-configmap-volume-test: STEP: delete the pod Mar 11 13:15:24.826: INFO: Waiting for pod pod-projected-configmaps-31517deb-dbba-42e5-be06-d45ed2896a12 to disappear Mar 11 13:15:24.838: INFO: Pod pod-projected-configmaps-31517deb-dbba-42e5-be06-d45ed2896a12 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:15:24.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7988" for this suite. 
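The "consumable from pods in volume" check above mounts a ConfigMap through a projected volume. A minimal sketch of an equivalent pod; the ConfigMap and container names are taken from the log, everything else (pod name, image, command, mount path) is illustrative:

    cat <<'EOF' | kubectl -n projected-7988 apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-demo                      # illustrative name
    spec:
      restartPolicy: Never
      containers:
      - name: projected-configmap-volume-test
        image: docker.io/library/busybox:1.29   # illustrative image
        command: ["sh", "-c", "ls /etc/projected && cat /etc/projected/*"]
        volumeMounts:
        - name: cfg
          mountPath: /etc/projected
      volumes:
      - name: cfg
        projected:
          sources:
          - configMap:
              name: projected-configmap-test-volume-65cbca8c-e8e1-48cb-a55b-0cf84f776cd1
    EOF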
Mar 11 13:15:30.853: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:15:30.929: INFO: namespace projected-7988 deletion completed in 6.087818055s • [SLOW TEST:10.254 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:15:30.930: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Mar 11 13:15:30.981: INFO: Waiting up to 5m0s for pod "downward-api-c83b2742-3335-4d6d-ac99-b5848d94d317" in namespace "downward-api-6545" to be "success or failure" Mar 11 13:15:30.998: INFO: Pod "downward-api-c83b2742-3335-4d6d-ac99-b5848d94d317": Phase="Pending", Reason="", readiness=false. Elapsed: 16.991844ms Mar 11 13:15:33.001: INFO: Pod "downward-api-c83b2742-3335-4d6d-ac99-b5848d94d317": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.020584849s STEP: Saw pod success Mar 11 13:15:33.001: INFO: Pod "downward-api-c83b2742-3335-4d6d-ac99-b5848d94d317" satisfied condition "success or failure" Mar 11 13:15:33.004: INFO: Trying to get logs from node iruya-worker pod downward-api-c83b2742-3335-4d6d-ac99-b5848d94d317 container dapi-container: STEP: delete the pod Mar 11 13:15:33.029: INFO: Waiting for pod downward-api-c83b2742-3335-4d6d-ac99-b5848d94d317 to disappear Mar 11 13:15:33.032: INFO: Pod downward-api-c83b2742-3335-4d6d-ac99-b5848d94d317 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:15:33.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6545" for this suite. 
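The Downward API test above injects the node's IP through a fieldRef on status.hostIP. A minimal equivalent pod spec; the container name dapi-container appears in the log, the rest is illustrative:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: downward-hostip-demo                # illustrative name
    spec:
      restartPolicy: Never
      containers:
      - name: dapi-container
        image: docker.io/library/busybox:1.29   # illustrative image
        command: ["sh", "-c", "echo HOST_IP=$HOST_IP"]
        env:
        - name: HOST_IP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
    EOF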
Mar 11 13:15:39.061: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:15:39.133: INFO: namespace downward-api-6545 deletion completed in 6.096885863s • [SLOW TEST:8.203 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:15:39.133: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller Mar 11 13:15:39.184: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5148' Mar 11 13:15:39.503: INFO: stderr: "" Mar 11 13:15:39.503: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 11 13:15:39.503: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5148' Mar 11 13:15:39.596: INFO: stderr: "" Mar 11 13:15:39.596: INFO: stdout: "update-demo-nautilus-jqhxz update-demo-nautilus-rnvvq " Mar 11 13:15:39.596: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jqhxz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5148' Mar 11 13:15:39.666: INFO: stderr: "" Mar 11 13:15:39.667: INFO: stdout: "" Mar 11 13:15:39.667: INFO: update-demo-nautilus-jqhxz is created but not running Mar 11 13:15:44.667: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5148' Mar 11 13:15:44.784: INFO: stderr: "" Mar 11 13:15:44.784: INFO: stdout: "update-demo-nautilus-jqhxz update-demo-nautilus-rnvvq " Mar 11 13:15:44.784: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jqhxz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5148' Mar 11 13:15:44.871: INFO: stderr: "" Mar 11 13:15:44.871: INFO: stdout: "true" Mar 11 13:15:44.871: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jqhxz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5148' Mar 11 13:15:44.955: INFO: stderr: "" Mar 11 13:15:44.955: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 11 13:15:44.955: INFO: validating pod update-demo-nautilus-jqhxz Mar 11 13:15:44.958: INFO: got data: { "image": "nautilus.jpg" } Mar 11 13:15:44.958: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 11 13:15:44.958: INFO: update-demo-nautilus-jqhxz is verified up and running Mar 11 13:15:44.958: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rnvvq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5148' Mar 11 13:15:45.036: INFO: stderr: "" Mar 11 13:15:45.036: INFO: stdout: "true" Mar 11 13:15:45.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rnvvq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5148' Mar 11 13:15:45.105: INFO: stderr: "" Mar 11 13:15:45.105: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 11 13:15:45.105: INFO: validating pod update-demo-nautilus-rnvvq Mar 11 13:15:45.108: INFO: got data: { "image": "nautilus.jpg" } Mar 11 13:15:45.108: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 11 13:15:45.108: INFO: update-demo-nautilus-rnvvq is verified up and running STEP: using delete to clean up resources Mar 11 13:15:45.108: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5148' Mar 11 13:15:45.174: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Mar 11 13:15:45.174: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Mar 11 13:15:45.174: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5148' Mar 11 13:15:45.245: INFO: stderr: "No resources found.\n" Mar 11 13:15:45.245: INFO: stdout: "" Mar 11 13:15:45.245: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5148 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 11 13:15:45.318: INFO: stderr: "" Mar 11 13:15:45.318: INFO: stdout: "update-demo-nautilus-jqhxz\nupdate-demo-nautilus-rnvvq\n" Mar 11 13:15:45.819: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5148' Mar 11 13:15:45.918: INFO: stderr: "No resources found.\n" Mar 11 13:15:45.918: INFO: stdout: "" Mar 11 13:15:45.918: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5148 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 11 13:15:45.993: INFO: stderr: "" Mar 11 13:15:45.993: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:15:45.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5148" for this suite. Mar 11 13:16:08.009: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:16:08.080: INFO: namespace kubectl-5148 deletion completed in 22.084403056s • [SLOW TEST:28.947 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:16:08.080: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name cm-test-opt-del-a96fc5cd-a944-4cfe-be5b-87e148b8e02d STEP: Creating configMap with name cm-test-opt-upd-81209044-c021-4510-923f-fae5a504a6e7 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-a96fc5cd-a944-4cfe-be5b-87e148b8e02d STEP: Updating configmap cm-test-opt-upd-81209044-c021-4510-923f-fae5a504a6e7 STEP: Creating configMap with 
name cm-test-opt-create-bbf70478-42e7-48ae-b445-9db8da8e8a5d STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:17:20.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-361" for this suite. Mar 11 13:17:42.537: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:17:42.594: INFO: namespace projected-361 deletion completed in 22.068817679s • [SLOW TEST:94.513 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:17:42.594: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 11 13:17:42.648: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Mar 11 13:17:43.706: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:17:44.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-63" for this suite. 
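The quota scenario above follows a fixed shape: a ResourceQuota capping the namespace at two pods, an RC that asks for more, a ReplicaFailure condition surfacing on the RC, and the condition clearing once the RC is scaled back within quota. A sketch of checking it by hand; the quota and RC names match the log, and the replica count after scale-down is inferred from "to satisfy pod quota":

    kubectl create quota condition-test --hard=pods=2
    # Once rc/condition-test asks for more replicas than the quota allows:
    kubectl get rc condition-test \
      -o jsonpath='{.status.conditions[?(@.type=="ReplicaFailure")].message}'
    # Scaling back inside the quota clears the condition:
    kubectl scale rc condition-test --replicas=2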
Mar 11 13:17:50.750: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:17:50.822: INFO: namespace replication-controller-63 deletion completed in 6.093985735s • [SLOW TEST:8.228 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:17:50.822: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-6165 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 11 13:17:50.854: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 11 13:18:13.021: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.80:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6165 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 11 13:18:13.021: INFO: >>> kubeConfig: /root/.kube/config I0311 13:18:13.057344 6 log.go:172] (0xc000c506e0) (0xc000ff43c0) Create stream I0311 13:18:13.057376 6 log.go:172] (0xc000c506e0) (0xc000ff43c0) Stream added, broadcasting: 1 I0311 13:18:13.059608 6 log.go:172] (0xc000c506e0) Reply frame received for 1 I0311 13:18:13.059658 6 log.go:172] (0xc000c506e0) (0xc00209c6e0) Create stream I0311 13:18:13.059673 6 log.go:172] (0xc000c506e0) (0xc00209c6e0) Stream added, broadcasting: 3 I0311 13:18:13.060823 6 log.go:172] (0xc000c506e0) Reply frame received for 3 I0311 13:18:13.060858 6 log.go:172] (0xc000c506e0) (0xc00209c780) Create stream I0311 13:18:13.060870 6 log.go:172] (0xc000c506e0) (0xc00209c780) Stream added, broadcasting: 5 I0311 13:18:13.062006 6 log.go:172] (0xc000c506e0) Reply frame received for 5 I0311 13:18:13.136482 6 log.go:172] (0xc000c506e0) Data frame received for 5 I0311 13:18:13.136516 6 log.go:172] (0xc00209c780) (5) Data frame handling I0311 13:18:13.136538 6 log.go:172] (0xc000c506e0) Data frame received for 3 I0311 13:18:13.136549 6 log.go:172] (0xc00209c6e0) (3) Data frame handling I0311 13:18:13.136563 6 log.go:172] (0xc00209c6e0) (3) Data frame sent I0311 13:18:13.136574 6 log.go:172] (0xc000c506e0) Data frame received for 3 I0311 13:18:13.136584 6 log.go:172] (0xc00209c6e0) (3) Data frame handling I0311 13:18:13.138457 6 log.go:172] (0xc000c506e0) Data frame received for 1 I0311 13:18:13.138479 6 log.go:172] (0xc000ff43c0) (1) Data frame handling I0311 13:18:13.138498 6 log.go:172] 
(0xc000ff43c0) (1) Data frame sent I0311 13:18:13.138516 6 log.go:172] (0xc000c506e0) (0xc000ff43c0) Stream removed, broadcasting: 1 I0311 13:18:13.138547 6 log.go:172] (0xc000c506e0) Go away received I0311 13:18:13.138637 6 log.go:172] (0xc000c506e0) (0xc000ff43c0) Stream removed, broadcasting: 1 I0311 13:18:13.138655 6 log.go:172] (0xc000c506e0) (0xc00209c6e0) Stream removed, broadcasting: 3 I0311 13:18:13.138667 6 log.go:172] (0xc000c506e0) (0xc00209c780) Stream removed, broadcasting: 5 Mar 11 13:18:13.138: INFO: Found all expected endpoints: [netserver-0] Mar 11 13:18:13.142: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.232:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6165 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 11 13:18:13.142: INFO: >>> kubeConfig: /root/.kube/config I0311 13:18:13.174816 6 log.go:172] (0xc000c51080) (0xc000ff46e0) Create stream I0311 13:18:13.174839 6 log.go:172] (0xc000c51080) (0xc000ff46e0) Stream added, broadcasting: 1 I0311 13:18:13.177162 6 log.go:172] (0xc000c51080) Reply frame received for 1 I0311 13:18:13.177190 6 log.go:172] (0xc000c51080) (0xc000ff48c0) Create stream I0311 13:18:13.177201 6 log.go:172] (0xc000c51080) (0xc000ff48c0) Stream added, broadcasting: 3 I0311 13:18:13.178132 6 log.go:172] (0xc000c51080) Reply frame received for 3 I0311 13:18:13.178170 6 log.go:172] (0xc000c51080) (0xc000ff4960) Create stream I0311 13:18:13.178180 6 log.go:172] (0xc000c51080) (0xc000ff4960) Stream added, broadcasting: 5 I0311 13:18:13.179102 6 log.go:172] (0xc000c51080) Reply frame received for 5 I0311 13:18:13.237448 6 log.go:172] (0xc000c51080) Data frame received for 3 I0311 13:18:13.237474 6 log.go:172] (0xc000ff48c0) (3) Data frame handling I0311 13:18:13.237484 6 log.go:172] (0xc000ff48c0) (3) Data frame sent I0311 13:18:13.237490 6 log.go:172] (0xc000c51080) Data frame received for 3 I0311 13:18:13.237507 6 log.go:172] (0xc000ff48c0) (3) Data frame handling I0311 13:18:13.237531 6 log.go:172] (0xc000c51080) Data frame received for 5 I0311 13:18:13.237540 6 log.go:172] (0xc000ff4960) (5) Data frame handling I0311 13:18:13.239183 6 log.go:172] (0xc000c51080) Data frame received for 1 I0311 13:18:13.239207 6 log.go:172] (0xc000ff46e0) (1) Data frame handling I0311 13:18:13.239222 6 log.go:172] (0xc000ff46e0) (1) Data frame sent I0311 13:18:13.239233 6 log.go:172] (0xc000c51080) (0xc000ff46e0) Stream removed, broadcasting: 1 I0311 13:18:13.239263 6 log.go:172] (0xc000c51080) Go away received I0311 13:18:13.239313 6 log.go:172] (0xc000c51080) (0xc000ff46e0) Stream removed, broadcasting: 1 I0311 13:18:13.239323 6 log.go:172] (0xc000c51080) (0xc000ff48c0) Stream removed, broadcasting: 3 I0311 13:18:13.239328 6 log.go:172] (0xc000c51080) (0xc000ff4960) Stream removed, broadcasting: 5 Mar 11 13:18:13.239: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:18:13.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-6165" for this suite. 
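The node-pod HTTP check above reduces to the curl that ExecWithOptions runs inside the hostexec container; an equivalent manual probe, with the pod name and target IP copied from the log:

    kubectl -n pod-network-test-6165 exec host-test-container-pod -- \
      /bin/sh -c "curl -g -q -s --max-time 15 --connect-timeout 1 \
        http://10.244.1.80:8080/hostName"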
Mar 11 13:18:35.254: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:18:35.333: INFO: namespace pod-network-test-6165 deletion completed in 22.089984633s • [SLOW TEST:44.511 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:18:35.334: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Mar 11 13:18:36.030: INFO: Pod name wrapped-volume-race-8033a7a7-6897-46cc-b947-bc082bef71f0: Found 0 pods out of 5 Mar 11 13:18:41.051: INFO: Pod name wrapped-volume-race-8033a7a7-6897-46cc-b947-bc082bef71f0: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-8033a7a7-6897-46cc-b947-bc082bef71f0 in namespace emptydir-wrapper-7538, will wait for the garbage collector to delete the pods Mar 11 13:18:53.133: INFO: Deleting ReplicationController wrapped-volume-race-8033a7a7-6897-46cc-b947-bc082bef71f0 took: 7.670528ms Mar 11 13:18:53.434: INFO: Terminating ReplicationController wrapped-volume-race-8033a7a7-6897-46cc-b947-bc082bef71f0 pods took: 300.201153ms STEP: Creating RC which spawns configmap-volume pods Mar 11 13:19:34.363: INFO: Pod name wrapped-volume-race-6e9b5043-5c8d-4b21-963d-5b2ae08c0c89: Found 0 pods out of 5 Mar 11 13:19:39.370: INFO: Pod name wrapped-volume-race-6e9b5043-5c8d-4b21-963d-5b2ae08c0c89: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-6e9b5043-5c8d-4b21-963d-5b2ae08c0c89 in namespace emptydir-wrapper-7538, will wait for the garbage collector to delete the pods Mar 11 13:19:49.460: INFO: Deleting ReplicationController wrapped-volume-race-6e9b5043-5c8d-4b21-963d-5b2ae08c0c89 took: 16.819828ms Mar 11 13:19:49.760: INFO: Terminating ReplicationController wrapped-volume-race-6e9b5043-5c8d-4b21-963d-5b2ae08c0c89 pods took: 300.269599ms STEP: Creating RC which spawns configmap-volume pods Mar 11 13:20:25.194: INFO: Pod name wrapped-volume-race-09d60389-4605-46ca-9474-333d47e7f0c7: Found 0 pods out of 5 Mar 11 13:20:30.201: INFO: Pod name wrapped-volume-race-09d60389-4605-46ca-9474-333d47e7f0c7: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController 
wrapped-volume-race-09d60389-4605-46ca-9474-333d47e7f0c7 in namespace emptydir-wrapper-7538, will wait for the garbage collector to delete the pods Mar 11 13:20:40.344: INFO: Deleting ReplicationController wrapped-volume-race-09d60389-4605-46ca-9474-333d47e7f0c7 took: 5.303903ms Mar 11 13:20:40.644: INFO: Terminating ReplicationController wrapped-volume-race-09d60389-4605-46ca-9474-333d47e7f0c7 pods took: 300.194557ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:21:24.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-7538" for this suite. Mar 11 13:21:32.997: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:21:33.123: INFO: namespace emptydir-wrapper-7538 deletion completed in 8.149355141s • [SLOW TEST:177.789 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:21:33.123: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap that has name configmap-test-emptyKey-fe3b07b6-848f-48f1-8a63-5601e817c1ac [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:21:33.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4302" for this suite. 
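The empty-key failure asserted above is plain API-server validation: a ConfigMap whose data map contains an empty string key is rejected at create time. A sketch of the rejected object, reusing the name from the log:

    cat <<'EOF' | kubectl -n configmap-4302 create -f -
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: configmap-test-emptyKey-fe3b07b6-848f-48f1-8a63-5601e817c1ac
    data:
      "": "value"    # empty key; the API server rejects this with a validation error
    EOF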
Mar 11 13:21:39.189: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:21:39.261: INFO: namespace configmap-4302 deletion completed in 6.098043367s • [SLOW TEST:6.138 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:21:39.261: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override command Mar 11 13:21:39.313: INFO: Waiting up to 5m0s for pod "client-containers-ceec3d16-5759-4348-807a-d07691b87440" in namespace "containers-3266" to be "success or failure" Mar 11 13:21:39.318: INFO: Pod "client-containers-ceec3d16-5759-4348-807a-d07691b87440": Phase="Pending", Reason="", readiness=false. Elapsed: 4.613888ms Mar 11 13:21:41.330: INFO: Pod "client-containers-ceec3d16-5759-4348-807a-d07691b87440": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.016561551s STEP: Saw pod success Mar 11 13:21:41.330: INFO: Pod "client-containers-ceec3d16-5759-4348-807a-d07691b87440" satisfied condition "success or failure" Mar 11 13:21:41.332: INFO: Trying to get logs from node iruya-worker2 pod client-containers-ceec3d16-5759-4348-807a-d07691b87440 container test-container: STEP: delete the pod Mar 11 13:21:41.390: INFO: Waiting for pod client-containers-ceec3d16-5759-4348-807a-d07691b87440 to disappear Mar 11 13:21:41.395: INFO: Pod client-containers-ceec3d16-5759-4348-807a-d07691b87440 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:21:41.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3266" for this suite. 
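The entrypoint override verified above maps to the pod spec's command field, which replaces the image's ENTRYPOINT (args would replace CMD). A minimal sketch; the container name test-container appears in the log, the rest is illustrative:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: entrypoint-override-demo            # illustrative name
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: docker.io/library/busybox:1.29   # illustrative image
        command: ["/bin/echo", "overridden entrypoint"]   # replaces ENTRYPOINT
    EOF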
Mar 11 13:21:47.411: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:21:47.488: INFO: namespace containers-3266 deletion completed in 6.089191813s • [SLOW TEST:8.226 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:21:47.488: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating cluster-info Mar 11 13:21:47.518: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Mar 11 13:21:49.155: INFO: stderr: "" Mar 11 13:21:49.155: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32773\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32773/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:21:49.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1354" for this suite. 
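The \x1b[0;32m sequences in the cluster-info stdout above are ANSI colour codes emitted by kubectl; apart from those, the captured output is the usual master and KubeDNS lines plus the dump hint. Reproducing the check by hand:

    kubectl cluster-info
    # For the deeper diagnostics the output itself suggests:
    kubectl cluster-info dump | head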
Mar 11 13:21:55.171: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:21:55.269: INFO: namespace kubectl-1354 deletion completed in 6.109750957s • [SLOW TEST:7.781 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:21:55.269: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-6103 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet Mar 11 13:21:55.373: INFO: Found 0 stateful pods, waiting for 3 Mar 11 13:22:05.378: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 11 13:22:05.379: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 11 13:22:05.379: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Mar 11 13:22:05.388: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6103 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 11 13:22:05.644: INFO: stderr: "I0311 13:22:05.522799 1352 log.go:172] (0xc00058c630) (0xc0005c4be0) Create stream\nI0311 13:22:05.522846 1352 log.go:172] (0xc00058c630) (0xc0005c4be0) Stream added, broadcasting: 1\nI0311 13:22:05.526228 1352 log.go:172] (0xc00058c630) Reply frame received for 1\nI0311 13:22:05.526275 1352 log.go:172] (0xc00058c630) (0xc0005c4280) Create stream\nI0311 13:22:05.526288 1352 log.go:172] (0xc00058c630) (0xc0005c4280) Stream added, broadcasting: 3\nI0311 13:22:05.527125 1352 log.go:172] (0xc00058c630) Reply frame received for 3\nI0311 13:22:05.527154 1352 log.go:172] (0xc00058c630) (0xc0005c4320) Create stream\nI0311 13:22:05.527162 1352 log.go:172] (0xc00058c630) (0xc0005c4320) Stream added, broadcasting: 5\nI0311 13:22:05.527864 1352 log.go:172] (0xc00058c630) Reply frame received for 5\nI0311 13:22:05.604115 1352 log.go:172] (0xc00058c630) Data frame received for 5\nI0311 13:22:05.604140 1352 log.go:172] (0xc0005c4320) (5) Data frame handling\nI0311 
13:22:05.604159 1352 log.go:172] (0xc0005c4320) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0311 13:22:05.639669 1352 log.go:172] (0xc00058c630) Data frame received for 5\nI0311 13:22:05.639710 1352 log.go:172] (0xc0005c4320) (5) Data frame handling\nI0311 13:22:05.639736 1352 log.go:172] (0xc00058c630) Data frame received for 3\nI0311 13:22:05.639743 1352 log.go:172] (0xc0005c4280) (3) Data frame handling\nI0311 13:22:05.639752 1352 log.go:172] (0xc0005c4280) (3) Data frame sent\nI0311 13:22:05.639759 1352 log.go:172] (0xc00058c630) Data frame received for 3\nI0311 13:22:05.639765 1352 log.go:172] (0xc0005c4280) (3) Data frame handling\nI0311 13:22:05.641346 1352 log.go:172] (0xc00058c630) Data frame received for 1\nI0311 13:22:05.641366 1352 log.go:172] (0xc0005c4be0) (1) Data frame handling\nI0311 13:22:05.641374 1352 log.go:172] (0xc0005c4be0) (1) Data frame sent\nI0311 13:22:05.641384 1352 log.go:172] (0xc00058c630) (0xc0005c4be0) Stream removed, broadcasting: 1\nI0311 13:22:05.641399 1352 log.go:172] (0xc00058c630) Go away received\nI0311 13:22:05.641721 1352 log.go:172] (0xc00058c630) (0xc0005c4be0) Stream removed, broadcasting: 1\nI0311 13:22:05.641740 1352 log.go:172] (0xc00058c630) (0xc0005c4280) Stream removed, broadcasting: 3\nI0311 13:22:05.641747 1352 log.go:172] (0xc00058c630) (0xc0005c4320) Stream removed, broadcasting: 5\n" Mar 11 13:22:05.644: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 11 13:22:05.644: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Mar 11 13:22:15.674: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Mar 11 13:22:25.694: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6103 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 11 13:22:25.868: INFO: stderr: "I0311 13:22:25.799760 1372 log.go:172] (0xc00013cdc0) (0xc000240820) Create stream\nI0311 13:22:25.799812 1372 log.go:172] (0xc00013cdc0) (0xc000240820) Stream added, broadcasting: 1\nI0311 13:22:25.801838 1372 log.go:172] (0xc00013cdc0) Reply frame received for 1\nI0311 13:22:25.801892 1372 log.go:172] (0xc00013cdc0) (0xc00090e000) Create stream\nI0311 13:22:25.801922 1372 log.go:172] (0xc00013cdc0) (0xc00090e000) Stream added, broadcasting: 3\nI0311 13:22:25.803292 1372 log.go:172] (0xc00013cdc0) Reply frame received for 3\nI0311 13:22:25.803334 1372 log.go:172] (0xc00013cdc0) (0xc0007aa000) Create stream\nI0311 13:22:25.803362 1372 log.go:172] (0xc00013cdc0) (0xc0007aa000) Stream added, broadcasting: 5\nI0311 13:22:25.804856 1372 log.go:172] (0xc00013cdc0) Reply frame received for 5\nI0311 13:22:25.864210 1372 log.go:172] (0xc00013cdc0) Data frame received for 3\nI0311 13:22:25.864231 1372 log.go:172] (0xc00090e000) (3) Data frame handling\nI0311 13:22:25.864240 1372 log.go:172] (0xc00090e000) (3) Data frame sent\nI0311 13:22:25.864288 1372 log.go:172] (0xc00013cdc0) Data frame received for 5\nI0311 13:22:25.864310 1372 log.go:172] (0xc0007aa000) (5) Data frame handling\nI0311 13:22:25.864319 1372 log.go:172] (0xc0007aa000) (5) Data frame sent\nI0311 13:22:25.864325 1372 log.go:172] (0xc00013cdc0) Data frame received for 5\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0311 13:22:25.864340 
1372 log.go:172] (0xc00013cdc0) Data frame received for 3\nI0311 13:22:25.864370 1372 log.go:172] (0xc00090e000) (3) Data frame handling\nI0311 13:22:25.864394 1372 log.go:172] (0xc0007aa000) (5) Data frame handling\nI0311 13:22:25.865185 1372 log.go:172] (0xc00013cdc0) Data frame received for 1\nI0311 13:22:25.865205 1372 log.go:172] (0xc000240820) (1) Data frame handling\nI0311 13:22:25.865217 1372 log.go:172] (0xc000240820) (1) Data frame sent\nI0311 13:22:25.865242 1372 log.go:172] (0xc00013cdc0) (0xc000240820) Stream removed, broadcasting: 1\nI0311 13:22:25.865264 1372 log.go:172] (0xc00013cdc0) Go away received\nI0311 13:22:25.865530 1372 log.go:172] (0xc00013cdc0) (0xc000240820) Stream removed, broadcasting: 1\nI0311 13:22:25.865546 1372 log.go:172] (0xc00013cdc0) (0xc00090e000) Stream removed, broadcasting: 3\nI0311 13:22:25.865553 1372 log.go:172] (0xc00013cdc0) (0xc0007aa000) Stream removed, broadcasting: 5\n" Mar 11 13:22:25.868: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 11 13:22:25.868: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 11 13:22:35.891: INFO: Waiting for StatefulSet statefulset-6103/ss2 to complete update Mar 11 13:22:35.891: INFO: Waiting for Pod statefulset-6103/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Mar 11 13:22:35.891: INFO: Waiting for Pod statefulset-6103/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Mar 11 13:22:45.898: INFO: Waiting for StatefulSet statefulset-6103/ss2 to complete update Mar 11 13:22:45.898: INFO: Waiting for Pod statefulset-6103/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Mar 11 13:22:55.899: INFO: Waiting for StatefulSet statefulset-6103/ss2 to complete update STEP: Rolling back to a previous revision Mar 11 13:23:05.899: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6103 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 11 13:23:06.119: INFO: stderr: "I0311 13:23:06.033590 1395 log.go:172] (0xc000116d10) (0xc0006ac780) Create stream\nI0311 13:23:06.033629 1395 log.go:172] (0xc000116d10) (0xc0006ac780) Stream added, broadcasting: 1\nI0311 13:23:06.035419 1395 log.go:172] (0xc000116d10) Reply frame received for 1\nI0311 13:23:06.035448 1395 log.go:172] (0xc000116d10) (0xc0006ac820) Create stream\nI0311 13:23:06.035460 1395 log.go:172] (0xc000116d10) (0xc0006ac820) Stream added, broadcasting: 3\nI0311 13:23:06.036155 1395 log.go:172] (0xc000116d10) Reply frame received for 3\nI0311 13:23:06.036179 1395 log.go:172] (0xc000116d10) (0xc000894000) Create stream\nI0311 13:23:06.036189 1395 log.go:172] (0xc000116d10) (0xc000894000) Stream added, broadcasting: 5\nI0311 13:23:06.037043 1395 log.go:172] (0xc000116d10) Reply frame received for 5\nI0311 13:23:06.100845 1395 log.go:172] (0xc000116d10) Data frame received for 5\nI0311 13:23:06.100866 1395 log.go:172] (0xc000894000) (5) Data frame handling\nI0311 13:23:06.100877 1395 log.go:172] (0xc000894000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0311 13:23:06.114061 1395 log.go:172] (0xc000116d10) Data frame received for 3\nI0311 13:23:06.114078 1395 log.go:172] (0xc0006ac820) (3) Data frame handling\nI0311 13:23:06.114088 1395 log.go:172] (0xc0006ac820) (3) Data frame sent\nI0311 13:23:06.114669 1395 log.go:172] (0xc000116d10) Data frame received for 5\nI0311 13:23:06.114684 1395 
log.go:172] (0xc000894000) (5) Data frame handling\nI0311 13:23:06.114697 1395 log.go:172] (0xc000116d10) Data frame received for 3\nI0311 13:23:06.114707 1395 log.go:172] (0xc0006ac820) (3) Data frame handling\nI0311 13:23:06.116329 1395 log.go:172] (0xc000116d10) Data frame received for 1\nI0311 13:23:06.116361 1395 log.go:172] (0xc0006ac780) (1) Data frame handling\nI0311 13:23:06.116388 1395 log.go:172] (0xc0006ac780) (1) Data frame sent\nI0311 13:23:06.116411 1395 log.go:172] (0xc000116d10) (0xc0006ac780) Stream removed, broadcasting: 1\nI0311 13:23:06.116437 1395 log.go:172] (0xc000116d10) Go away received\nI0311 13:23:06.116730 1395 log.go:172] (0xc000116d10) (0xc0006ac780) Stream removed, broadcasting: 1\nI0311 13:23:06.116749 1395 log.go:172] (0xc000116d10) (0xc0006ac820) Stream removed, broadcasting: 3\nI0311 13:23:06.116757 1395 log.go:172] (0xc000116d10) (0xc000894000) Stream removed, broadcasting: 5\n" Mar 11 13:23:06.119: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 11 13:23:06.119: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 11 13:23:16.149: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Mar 11 13:23:26.169: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6103 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 11 13:23:26.391: INFO: stderr: "I0311 13:23:26.323248 1416 log.go:172] (0xc000116dc0) (0xc0003866e0) Create stream\nI0311 13:23:26.323292 1416 log.go:172] (0xc000116dc0) (0xc0003866e0) Stream added, broadcasting: 1\nI0311 13:23:26.325611 1416 log.go:172] (0xc000116dc0) Reply frame received for 1\nI0311 13:23:26.325643 1416 log.go:172] (0xc000116dc0) (0xc0006fe3c0) Create stream\nI0311 13:23:26.325654 1416 log.go:172] (0xc000116dc0) (0xc0006fe3c0) Stream added, broadcasting: 3\nI0311 13:23:26.326361 1416 log.go:172] (0xc000116dc0) Reply frame received for 3\nI0311 13:23:26.326388 1416 log.go:172] (0xc000116dc0) (0xc000386000) Create stream\nI0311 13:23:26.326397 1416 log.go:172] (0xc000116dc0) (0xc000386000) Stream added, broadcasting: 5\nI0311 13:23:26.327030 1416 log.go:172] (0xc000116dc0) Reply frame received for 5\nI0311 13:23:26.387423 1416 log.go:172] (0xc000116dc0) Data frame received for 5\nI0311 13:23:26.387471 1416 log.go:172] (0xc000386000) (5) Data frame handling\nI0311 13:23:26.387481 1416 log.go:172] (0xc000386000) (5) Data frame sent\nI0311 13:23:26.387488 1416 log.go:172] (0xc000116dc0) Data frame received for 5\nI0311 13:23:26.387494 1416 log.go:172] (0xc000386000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0311 13:23:26.387521 1416 log.go:172] (0xc000116dc0) Data frame received for 3\nI0311 13:23:26.387541 1416 log.go:172] (0xc0006fe3c0) (3) Data frame handling\nI0311 13:23:26.387565 1416 log.go:172] (0xc0006fe3c0) (3) Data frame sent\nI0311 13:23:26.387573 1416 log.go:172] (0xc000116dc0) Data frame received for 3\nI0311 13:23:26.387578 1416 log.go:172] (0xc0006fe3c0) (3) Data frame handling\nI0311 13:23:26.388652 1416 log.go:172] (0xc000116dc0) Data frame received for 1\nI0311 13:23:26.388679 1416 log.go:172] (0xc0003866e0) (1) Data frame handling\nI0311 13:23:26.388696 1416 log.go:172] (0xc0003866e0) (1) Data frame sent\nI0311 13:23:26.388711 1416 log.go:172] (0xc000116dc0) (0xc0003866e0) Stream removed, broadcasting: 1\nI0311 13:23:26.388726 1416 log.go:172] (0xc000116dc0) Go 
away received\nI0311 13:23:26.389012 1416 log.go:172] (0xc000116dc0) (0xc0003866e0) Stream removed, broadcasting: 1\nI0311 13:23:26.389026 1416 log.go:172] (0xc000116dc0) (0xc0006fe3c0) Stream removed, broadcasting: 3\nI0311 13:23:26.389031 1416 log.go:172] (0xc000116dc0) (0xc000386000) Stream removed, broadcasting: 5\n" Mar 11 13:23:26.391: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 11 13:23:26.391: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Mar 11 13:23:56.409: INFO: Deleting all statefulset in ns statefulset-6103 Mar 11 13:23:56.411: INFO: Scaling statefulset ss2 to 0 Mar 11 13:24:06.426: INFO: Waiting for statefulset status.replicas updated to 0 Mar 11 13:24:06.428: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:24:06.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6103" for this suite. Mar 11 13:24:12.455: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:24:12.518: INFO: namespace statefulset-6103 deletion completed in 6.073690302s • [SLOW TEST:137.249 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:24:12.519: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-7526 [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating stateful set ss in namespace statefulset-7526 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-7526 Mar 11 13:24:12.662: INFO: Found 0 stateful pods, waiting for 1 Mar 11 13:24:22.666: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - 
Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Mar 11 13:24:22.668: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7526 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 11 13:24:22.856: INFO: stderr: "I0311 13:24:22.772762 1436 log.go:172] (0xc0009a8580) (0xc00033eb40) Create stream\nI0311 13:24:22.772802 1436 log.go:172] (0xc0009a8580) (0xc00033eb40) Stream added, broadcasting: 1\nI0311 13:24:22.774663 1436 log.go:172] (0xc0009a8580) Reply frame received for 1\nI0311 13:24:22.774697 1436 log.go:172] (0xc0009a8580) (0xc000792000) Create stream\nI0311 13:24:22.774714 1436 log.go:172] (0xc0009a8580) (0xc000792000) Stream added, broadcasting: 3\nI0311 13:24:22.775540 1436 log.go:172] (0xc0009a8580) Reply frame received for 3\nI0311 13:24:22.775563 1436 log.go:172] (0xc0009a8580) (0xc00033ebe0) Create stream\nI0311 13:24:22.775570 1436 log.go:172] (0xc0009a8580) (0xc00033ebe0) Stream added, broadcasting: 5\nI0311 13:24:22.776456 1436 log.go:172] (0xc0009a8580) Reply frame received for 5\nI0311 13:24:22.832406 1436 log.go:172] (0xc0009a8580) Data frame received for 5\nI0311 13:24:22.832425 1436 log.go:172] (0xc00033ebe0) (5) Data frame handling\nI0311 13:24:22.832439 1436 log.go:172] (0xc00033ebe0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0311 13:24:22.850431 1436 log.go:172] (0xc0009a8580) Data frame received for 5\nI0311 13:24:22.850460 1436 log.go:172] (0xc00033ebe0) (5) Data frame handling\nI0311 13:24:22.850482 1436 log.go:172] (0xc0009a8580) Data frame received for 3\nI0311 13:24:22.850503 1436 log.go:172] (0xc000792000) (3) Data frame handling\nI0311 13:24:22.850526 1436 log.go:172] (0xc000792000) (3) Data frame sent\nI0311 13:24:22.850536 1436 log.go:172] (0xc0009a8580) Data frame received for 3\nI0311 13:24:22.850541 1436 log.go:172] (0xc000792000) (3) Data frame handling\nI0311 13:24:22.852000 1436 log.go:172] (0xc0009a8580) Data frame received for 1\nI0311 13:24:22.852019 1436 log.go:172] (0xc00033eb40) (1) Data frame handling\nI0311 13:24:22.852033 1436 log.go:172] (0xc00033eb40) (1) Data frame sent\nI0311 13:24:22.852047 1436 log.go:172] (0xc0009a8580) (0xc00033eb40) Stream removed, broadcasting: 1\nI0311 13:24:22.852062 1436 log.go:172] (0xc0009a8580) Go away received\nI0311 13:24:22.853262 1436 log.go:172] (0xc0009a8580) (0xc00033eb40) Stream removed, broadcasting: 1\nI0311 13:24:22.853287 1436 log.go:172] (0xc0009a8580) (0xc000792000) Stream removed, broadcasting: 3\nI0311 13:24:22.853313 1436 log.go:172] (0xc0009a8580) (0xc00033ebe0) Stream removed, broadcasting: 5\n" Mar 11 13:24:22.856: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 11 13:24:22.856: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 11 13:24:22.861: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Mar 11 13:24:32.866: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 11 13:24:32.866: INFO: Waiting for statefulset status.replicas updated to 0 Mar 11 13:24:32.878: INFO: POD NODE PHASE GRACE CONDITIONS Mar 11 13:24:32.878: INFO: ss-0 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:23 +0000 UTC ContainersNotReady containers with unready 
status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:12 +0000 UTC }] Mar 11 13:24:32.878: INFO: Mar 11 13:24:32.878: INFO: StatefulSet ss has not reached scale 3, at 1 Mar 11 13:24:33.882: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.996955035s Mar 11 13:24:34.886: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.993267179s Mar 11 13:24:35.891: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.989103535s Mar 11 13:24:36.895: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.984444096s Mar 11 13:24:37.905: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.979633511s Mar 11 13:24:38.910: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.970118225s Mar 11 13:24:39.914: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.96502402s Mar 11 13:24:40.919: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.96086468s Mar 11 13:24:41.924: INFO: Verifying statefulset ss doesn't scale past 3 for another 955.728177ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-7526 Mar 11 13:24:42.928: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7526 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 11 13:24:43.128: INFO: stderr: "I0311 13:24:43.064818 1456 log.go:172] (0xc00011edc0) (0xc00036e6e0) Create stream\nI0311 13:24:43.064868 1456 log.go:172] (0xc00011edc0) (0xc00036e6e0) Stream added, broadcasting: 1\nI0311 13:24:43.066708 1456 log.go:172] (0xc00011edc0) Reply frame received for 1\nI0311 13:24:43.066753 1456 log.go:172] (0xc00011edc0) (0xc0008b4000) Create stream\nI0311 13:24:43.066772 1456 log.go:172] (0xc00011edc0) (0xc0008b4000) Stream added, broadcasting: 3\nI0311 13:24:43.067596 1456 log.go:172] (0xc00011edc0) Reply frame received for 3\nI0311 13:24:43.067640 1456 log.go:172] (0xc00011edc0) (0xc00036e780) Create stream\nI0311 13:24:43.067655 1456 log.go:172] (0xc00011edc0) (0xc00036e780) Stream added, broadcasting: 5\nI0311 13:24:43.068516 1456 log.go:172] (0xc00011edc0) Reply frame received for 5\nI0311 13:24:43.124558 1456 log.go:172] (0xc00011edc0) Data frame received for 5\nI0311 13:24:43.124599 1456 log.go:172] (0xc00036e780) (5) Data frame handling\nI0311 13:24:43.124611 1456 log.go:172] (0xc00036e780) (5) Data frame sent\nI0311 13:24:43.124620 1456 log.go:172] (0xc00011edc0) Data frame received for 5\nI0311 13:24:43.124627 1456 log.go:172] (0xc00036e780) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0311 13:24:43.124652 1456 log.go:172] (0xc00011edc0) Data frame received for 3\nI0311 13:24:43.124691 1456 log.go:172] (0xc0008b4000) (3) Data frame handling\nI0311 13:24:43.124707 1456 log.go:172] (0xc0008b4000) (3) Data frame sent\nI0311 13:24:43.124716 1456 log.go:172] (0xc00011edc0) Data frame received for 3\nI0311 13:24:43.124722 1456 log.go:172] (0xc0008b4000) (3) Data frame handling\nI0311 13:24:43.125700 1456 log.go:172] (0xc00011edc0) Data frame received for 1\nI0311 13:24:43.125727 1456 log.go:172] (0xc00036e6e0) (1) Data frame handling\nI0311 13:24:43.125745 1456 log.go:172] (0xc00036e6e0) (1) Data frame sent\nI0311 13:24:43.125771 1456 log.go:172] (0xc00011edc0) (0xc00036e6e0) Stream removed, broadcasting: 
1\nI0311 13:24:43.125798 1456 log.go:172] (0xc00011edc0) Go away received\nI0311 13:24:43.126041 1456 log.go:172] (0xc00011edc0) (0xc00036e6e0) Stream removed, broadcasting: 1\nI0311 13:24:43.126057 1456 log.go:172] (0xc00011edc0) (0xc0008b4000) Stream removed, broadcasting: 3\nI0311 13:24:43.126063 1456 log.go:172] (0xc00011edc0) (0xc00036e780) Stream removed, broadcasting: 5\n" Mar 11 13:24:43.129: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 11 13:24:43.129: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 11 13:24:43.129: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7526 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 11 13:24:43.310: INFO: stderr: "I0311 13:24:43.232815 1475 log.go:172] (0xc000ab60b0) (0xc000b1a6e0) Create stream\nI0311 13:24:43.232860 1475 log.go:172] (0xc000ab60b0) (0xc000b1a6e0) Stream added, broadcasting: 1\nI0311 13:24:43.234860 1475 log.go:172] (0xc000ab60b0) Reply frame received for 1\nI0311 13:24:43.234895 1475 log.go:172] (0xc000ab60b0) (0xc0005a61e0) Create stream\nI0311 13:24:43.234916 1475 log.go:172] (0xc000ab60b0) (0xc0005a61e0) Stream added, broadcasting: 3\nI0311 13:24:43.235664 1475 log.go:172] (0xc000ab60b0) Reply frame received for 3\nI0311 13:24:43.235692 1475 log.go:172] (0xc000ab60b0) (0xc000658000) Create stream\nI0311 13:24:43.235704 1475 log.go:172] (0xc000ab60b0) (0xc000658000) Stream added, broadcasting: 5\nI0311 13:24:43.236415 1475 log.go:172] (0xc000ab60b0) Reply frame received for 5\nI0311 13:24:43.307003 1475 log.go:172] (0xc000ab60b0) Data frame received for 5\nI0311 13:24:43.307024 1475 log.go:172] (0xc000658000) (5) Data frame handling\nI0311 13:24:43.307035 1475 log.go:172] (0xc000658000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0311 13:24:43.307049 1475 log.go:172] (0xc000ab60b0) Data frame received for 3\nI0311 13:24:43.307055 1475 log.go:172] (0xc0005a61e0) (3) Data frame handling\nI0311 13:24:43.307061 1475 log.go:172] (0xc0005a61e0) (3) Data frame sent\nI0311 13:24:43.307070 1475 log.go:172] (0xc000ab60b0) Data frame received for 5\nI0311 13:24:43.307081 1475 log.go:172] (0xc000658000) (5) Data frame handling\nI0311 13:24:43.307091 1475 log.go:172] (0xc000ab60b0) Data frame received for 3\nI0311 13:24:43.307096 1475 log.go:172] (0xc0005a61e0) (3) Data frame handling\nI0311 13:24:43.307889 1475 log.go:172] (0xc000ab60b0) Data frame received for 1\nI0311 13:24:43.307908 1475 log.go:172] (0xc000b1a6e0) (1) Data frame handling\nI0311 13:24:43.307916 1475 log.go:172] (0xc000b1a6e0) (1) Data frame sent\nI0311 13:24:43.307929 1475 log.go:172] (0xc000ab60b0) (0xc000b1a6e0) Stream removed, broadcasting: 1\nI0311 13:24:43.308188 1475 log.go:172] (0xc000ab60b0) (0xc000b1a6e0) Stream removed, broadcasting: 1\nI0311 13:24:43.308200 1475 log.go:172] (0xc000ab60b0) (0xc0005a61e0) Stream removed, broadcasting: 3\nI0311 13:24:43.308279 1475 log.go:172] (0xc000ab60b0) (0xc000658000) Stream removed, broadcasting: 5\n" Mar 11 13:24:43.310: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 11 13:24:43.310: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 11 13:24:43.310: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-7526 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 11 13:24:43.455: INFO: stderr: "I0311 13:24:43.399071 1495 log.go:172] (0xc000a4e370) (0xc00091a640) Create stream\nI0311 13:24:43.399100 1495 log.go:172] (0xc000a4e370) (0xc00091a640) Stream added, broadcasting: 1\nI0311 13:24:43.400468 1495 log.go:172] (0xc000a4e370) Reply frame received for 1\nI0311 13:24:43.400483 1495 log.go:172] (0xc000a4e370) (0xc00091a6e0) Create stream\nI0311 13:24:43.400487 1495 log.go:172] (0xc000a4e370) (0xc00091a6e0) Stream added, broadcasting: 3\nI0311 13:24:43.400948 1495 log.go:172] (0xc000a4e370) Reply frame received for 3\nI0311 13:24:43.400964 1495 log.go:172] (0xc000a4e370) (0xc000886000) Create stream\nI0311 13:24:43.400969 1495 log.go:172] (0xc000a4e370) (0xc000886000) Stream added, broadcasting: 5\nI0311 13:24:43.401568 1495 log.go:172] (0xc000a4e370) Reply frame received for 5\nI0311 13:24:43.451554 1495 log.go:172] (0xc000a4e370) Data frame received for 5\nI0311 13:24:43.451584 1495 log.go:172] (0xc000886000) (5) Data frame handling\nI0311 13:24:43.451593 1495 log.go:172] (0xc000886000) (5) Data frame sent\nI0311 13:24:43.451599 1495 log.go:172] (0xc000a4e370) Data frame received for 5\nI0311 13:24:43.451605 1495 log.go:172] (0xc000886000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0311 13:24:43.451622 1495 log.go:172] (0xc000a4e370) Data frame received for 3\nI0311 13:24:43.451628 1495 log.go:172] (0xc00091a6e0) (3) Data frame handling\nI0311 13:24:43.451637 1495 log.go:172] (0xc00091a6e0) (3) Data frame sent\nI0311 13:24:43.451643 1495 log.go:172] (0xc000a4e370) Data frame received for 3\nI0311 13:24:43.451650 1495 log.go:172] (0xc00091a6e0) (3) Data frame handling\nI0311 13:24:43.452495 1495 log.go:172] (0xc000a4e370) Data frame received for 1\nI0311 13:24:43.452509 1495 log.go:172] (0xc00091a640) (1) Data frame handling\nI0311 13:24:43.452516 1495 log.go:172] (0xc00091a640) (1) Data frame sent\nI0311 13:24:43.452536 1495 log.go:172] (0xc000a4e370) (0xc00091a640) Stream removed, broadcasting: 1\nI0311 13:24:43.452556 1495 log.go:172] (0xc000a4e370) Go away received\nI0311 13:24:43.452797 1495 log.go:172] (0xc000a4e370) (0xc00091a640) Stream removed, broadcasting: 1\nI0311 13:24:43.452808 1495 log.go:172] (0xc000a4e370) (0xc00091a6e0) Stream removed, broadcasting: 3\nI0311 13:24:43.452815 1495 log.go:172] (0xc000a4e370) (0xc000886000) Stream removed, broadcasting: 5\n" Mar 11 13:24:43.455: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 11 13:24:43.455: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 11 13:24:43.459: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Mar 11 13:24:53.463: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Mar 11 13:24:53.463: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Mar 11 13:24:53.463: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Mar 11 13:24:53.467: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7526 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' 
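
The mv of index.html in these steps is how the suite toggles readiness without killing the container: nginx keeps running, but a readiness probe pointed at a path served from that file starts failing, so the pod flips to Ready=false (and back once the file is restored). A sketch of such a probe with client-go types; the path, port, and thresholds are assumptions for illustration, and the embedded Handler field matches the v1.15-era API (later releases renamed it ProbeHandler):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    // fileBackedReadiness fails as soon as the file behind the probed path
    // is moved away, which is the effect of `mv .../index.html /tmp/` above.
    func fileBackedReadiness() *corev1.Probe {
        return &corev1.Probe{
            Handler: corev1.Handler{
                HTTPGet: &corev1.HTTPGetAction{
                    Path: "/index.html", // assumed to be served from /usr/share/nginx/html
                    Port: intstr.FromInt(80),
                },
            },
            PeriodSeconds:    1,
            FailureThreshold: 1, // drop to Ready=false on the first miss
        }
    }

    func main() {
        p := fileBackedReadiness()
        fmt.Printf("readiness: GET :%s%s every %ds\n",
            p.HTTPGet.Port.String(), p.HTTPGet.Path, p.PeriodSeconds)
    }

Because the mv is wrapped in || true, the exec itself never reports failure even when the file is already gone (note the "mv: can't rename" lines above), which is why the scale steps can proceed with unhealthy pods.
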
Mar 11 13:24:53.653: INFO: stderr: "I0311 13:24:53.584893 1515 log.go:172] (0xc00094c420) (0xc0008b4640) Create stream\nI0311 13:24:53.584942 1515 log.go:172] (0xc00094c420) (0xc0008b4640) Stream added, broadcasting: 1\nI0311 13:24:53.586940 1515 log.go:172] (0xc00094c420) Reply frame received for 1\nI0311 13:24:53.586971 1515 log.go:172] (0xc00094c420) (0xc0008b46e0) Create stream\nI0311 13:24:53.586978 1515 log.go:172] (0xc00094c420) (0xc0008b46e0) Stream added, broadcasting: 3\nI0311 13:24:53.587731 1515 log.go:172] (0xc00094c420) Reply frame received for 3\nI0311 13:24:53.587752 1515 log.go:172] (0xc00094c420) (0xc0008b4780) Create stream\nI0311 13:24:53.587757 1515 log.go:172] (0xc00094c420) (0xc0008b4780) Stream added, broadcasting: 5\nI0311 13:24:53.588500 1515 log.go:172] (0xc00094c420) Reply frame received for 5\nI0311 13:24:53.648505 1515 log.go:172] (0xc00094c420) Data frame received for 5\nI0311 13:24:53.648537 1515 log.go:172] (0xc0008b4780) (5) Data frame handling\nI0311 13:24:53.648550 1515 log.go:172] (0xc0008b4780) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0311 13:24:53.648576 1515 log.go:172] (0xc00094c420) Data frame received for 3\nI0311 13:24:53.648592 1515 log.go:172] (0xc0008b46e0) (3) Data frame handling\nI0311 13:24:53.648612 1515 log.go:172] (0xc0008b46e0) (3) Data frame sent\nI0311 13:24:53.648622 1515 log.go:172] (0xc00094c420) Data frame received for 3\nI0311 13:24:53.648630 1515 log.go:172] (0xc0008b46e0) (3) Data frame handling\nI0311 13:24:53.648766 1515 log.go:172] (0xc00094c420) Data frame received for 5\nI0311 13:24:53.648796 1515 log.go:172] (0xc0008b4780) (5) Data frame handling\nI0311 13:24:53.649944 1515 log.go:172] (0xc00094c420) Data frame received for 1\nI0311 13:24:53.649956 1515 log.go:172] (0xc0008b4640) (1) Data frame handling\nI0311 13:24:53.649961 1515 log.go:172] (0xc0008b4640) (1) Data frame sent\nI0311 13:24:53.649973 1515 log.go:172] (0xc00094c420) (0xc0008b4640) Stream removed, broadcasting: 1\nI0311 13:24:53.649986 1515 log.go:172] (0xc00094c420) Go away received\nI0311 13:24:53.650254 1515 log.go:172] (0xc00094c420) (0xc0008b4640) Stream removed, broadcasting: 1\nI0311 13:24:53.650272 1515 log.go:172] (0xc00094c420) (0xc0008b46e0) Stream removed, broadcasting: 3\nI0311 13:24:53.650278 1515 log.go:172] (0xc00094c420) (0xc0008b4780) Stream removed, broadcasting: 5\n" Mar 11 13:24:53.653: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 11 13:24:53.653: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 11 13:24:53.653: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7526 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 11 13:24:53.873: INFO: stderr: "I0311 13:24:53.780594 1535 log.go:172] (0xc00099a160) (0xc00082c5a0) Create stream\nI0311 13:24:53.780639 1535 log.go:172] (0xc00099a160) (0xc00082c5a0) Stream added, broadcasting: 1\nI0311 13:24:53.782722 1535 log.go:172] (0xc00099a160) Reply frame received for 1\nI0311 13:24:53.782761 1535 log.go:172] (0xc00099a160) (0xc0008ae000) Create stream\nI0311 13:24:53.782772 1535 log.go:172] (0xc00099a160) (0xc0008ae000) Stream added, broadcasting: 3\nI0311 13:24:53.783647 1535 log.go:172] (0xc00099a160) Reply frame received for 3\nI0311 13:24:53.783670 1535 log.go:172] (0xc00099a160) (0xc0008ae0a0) Create stream\nI0311 13:24:53.783675 1535 log.go:172] (0xc00099a160) 
(0xc0008ae0a0) Stream added, broadcasting: 5\nI0311 13:24:53.784322 1535 log.go:172] (0xc00099a160) Reply frame received for 5\nI0311 13:24:53.847676 1535 log.go:172] (0xc00099a160) Data frame received for 5\nI0311 13:24:53.847701 1535 log.go:172] (0xc0008ae0a0) (5) Data frame handling\nI0311 13:24:53.847715 1535 log.go:172] (0xc0008ae0a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0311 13:24:53.867704 1535 log.go:172] (0xc00099a160) Data frame received for 3\nI0311 13:24:53.867721 1535 log.go:172] (0xc0008ae000) (3) Data frame handling\nI0311 13:24:53.867751 1535 log.go:172] (0xc0008ae000) (3) Data frame sent\nI0311 13:24:53.868414 1535 log.go:172] (0xc00099a160) Data frame received for 3\nI0311 13:24:53.868447 1535 log.go:172] (0xc0008ae000) (3) Data frame handling\nI0311 13:24:53.868483 1535 log.go:172] (0xc00099a160) Data frame received for 5\nI0311 13:24:53.868495 1535 log.go:172] (0xc0008ae0a0) (5) Data frame handling\nI0311 13:24:53.869574 1535 log.go:172] (0xc00099a160) Data frame received for 1\nI0311 13:24:53.869591 1535 log.go:172] (0xc00082c5a0) (1) Data frame handling\nI0311 13:24:53.869602 1535 log.go:172] (0xc00082c5a0) (1) Data frame sent\nI0311 13:24:53.869615 1535 log.go:172] (0xc00099a160) (0xc00082c5a0) Stream removed, broadcasting: 1\nI0311 13:24:53.869952 1535 log.go:172] (0xc00099a160) (0xc00082c5a0) Stream removed, broadcasting: 1\nI0311 13:24:53.869972 1535 log.go:172] (0xc00099a160) (0xc0008ae000) Stream removed, broadcasting: 3\nI0311 13:24:53.869983 1535 log.go:172] (0xc00099a160) (0xc0008ae0a0) Stream removed, broadcasting: 5\n" Mar 11 13:24:53.873: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 11 13:24:53.873: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 11 13:24:53.873: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7526 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 11 13:24:54.065: INFO: stderr: "I0311 13:24:53.975743 1556 log.go:172] (0xc0009f4370) (0xc000548960) Create stream\nI0311 13:24:53.975783 1556 log.go:172] (0xc0009f4370) (0xc000548960) Stream added, broadcasting: 1\nI0311 13:24:53.977816 1556 log.go:172] (0xc0009f4370) Reply frame received for 1\nI0311 13:24:53.977842 1556 log.go:172] (0xc0009f4370) (0xc0009a6000) Create stream\nI0311 13:24:53.977850 1556 log.go:172] (0xc0009f4370) (0xc0009a6000) Stream added, broadcasting: 3\nI0311 13:24:53.978694 1556 log.go:172] (0xc0009f4370) Reply frame received for 3\nI0311 13:24:53.978719 1556 log.go:172] (0xc0009f4370) (0xc000548a00) Create stream\nI0311 13:24:53.978742 1556 log.go:172] (0xc0009f4370) (0xc000548a00) Stream added, broadcasting: 5\nI0311 13:24:53.979547 1556 log.go:172] (0xc0009f4370) Reply frame received for 5\nI0311 13:24:54.045190 1556 log.go:172] (0xc0009f4370) Data frame received for 5\nI0311 13:24:54.045220 1556 log.go:172] (0xc000548a00) (5) Data frame handling\nI0311 13:24:54.045236 1556 log.go:172] (0xc000548a00) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0311 13:24:54.059689 1556 log.go:172] (0xc0009f4370) Data frame received for 3\nI0311 13:24:54.059718 1556 log.go:172] (0xc0009a6000) (3) Data frame handling\nI0311 13:24:54.059742 1556 log.go:172] (0xc0009a6000) (3) Data frame sent\nI0311 13:24:54.059808 1556 log.go:172] (0xc0009f4370) Data frame received for 5\nI0311 13:24:54.059825 1556 log.go:172] (0xc000548a00) (5) 
Data frame handling\nI0311 13:24:54.060343 1556 log.go:172] (0xc0009f4370) Data frame received for 3\nI0311 13:24:54.060392 1556 log.go:172] (0xc0009a6000) (3) Data frame handling\nI0311 13:24:54.061889 1556 log.go:172] (0xc0009f4370) Data frame received for 1\nI0311 13:24:54.061927 1556 log.go:172] (0xc000548960) (1) Data frame handling\nI0311 13:24:54.061942 1556 log.go:172] (0xc000548960) (1) Data frame sent\nI0311 13:24:54.062073 1556 log.go:172] (0xc0009f4370) (0xc000548960) Stream removed, broadcasting: 1\nI0311 13:24:54.062163 1556 log.go:172] (0xc0009f4370) Go away received\nI0311 13:24:54.062365 1556 log.go:172] (0xc0009f4370) (0xc000548960) Stream removed, broadcasting: 1\nI0311 13:24:54.062375 1556 log.go:172] (0xc0009f4370) (0xc0009a6000) Stream removed, broadcasting: 3\nI0311 13:24:54.062381 1556 log.go:172] (0xc0009f4370) (0xc000548a00) Stream removed, broadcasting: 5\n" Mar 11 13:24:54.065: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 11 13:24:54.065: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 11 13:24:54.065: INFO: Waiting for statefulset status.replicas updated to 0 Mar 11 13:24:54.068: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Mar 11 13:25:04.073: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 11 13:25:04.073: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Mar 11 13:25:04.073: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Mar 11 13:25:04.081: INFO: POD NODE PHASE GRACE CONDITIONS Mar 11 13:25:04.081: INFO: ss-0 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:12 +0000 UTC }] Mar 11 13:25:04.082: INFO: ss-1 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:32 +0000 UTC }] Mar 11 13:25:04.082: INFO: ss-2 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:32 +0000 UTC }] Mar 11 13:25:04.082: INFO: Mar 11 13:25:04.082: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 11 13:25:05.085: INFO: POD NODE PHASE GRACE CONDITIONS Mar 11 13:25:05.085: INFO: ss-0 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 
13:24:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:12 +0000 UTC }] Mar 11 13:25:05.085: INFO: ss-1 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:32 +0000 UTC }] Mar 11 13:25:05.085: INFO: ss-2 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:32 +0000 UTC }] Mar 11 13:25:05.085: INFO: Mar 11 13:25:05.085: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 11 13:25:06.089: INFO: POD NODE PHASE GRACE CONDITIONS Mar 11 13:25:06.089: INFO: ss-0 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:12 +0000 UTC }] Mar 11 13:25:06.089: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:32 +0000 UTC }] Mar 11 13:25:06.089: INFO: ss-2 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:32 +0000 UTC }] Mar 11 13:25:06.089: INFO: Mar 11 13:25:06.089: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 11 13:25:07.091: INFO: POD NODE PHASE GRACE CONDITIONS Mar 11 13:25:07.091: INFO: ss-0 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 
+0000 UTC 2020-03-11 13:24:12 +0000 UTC }] Mar 11 13:25:07.091: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:32 +0000 UTC }] Mar 11 13:25:07.091: INFO: ss-2 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:32 +0000 UTC }] Mar 11 13:25:07.091: INFO: Mar 11 13:25:07.091: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 11 13:25:08.094: INFO: POD NODE PHASE GRACE CONDITIONS Mar 11 13:25:08.094: INFO: ss-0 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:12 +0000 UTC }] Mar 11 13:25:08.094: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:32 +0000 UTC }] Mar 11 13:25:08.094: INFO: ss-2 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:32 +0000 UTC }] Mar 11 13:25:08.094: INFO: Mar 11 13:25:08.094: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 11 13:25:09.097: INFO: POD NODE PHASE GRACE CONDITIONS Mar 11 13:25:09.097: INFO: ss-0 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:12 +0000 UTC }] Mar 11 13:25:09.098: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:54 +0000 UTC 
ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:32 +0000 UTC }] Mar 11 13:25:09.098: INFO: ss-2 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:32 +0000 UTC }] Mar 11 13:25:09.098: INFO: Mar 11 13:25:09.098: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 11 13:25:10.101: INFO: POD NODE PHASE GRACE CONDITIONS Mar 11 13:25:10.101: INFO: ss-0 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:12 +0000 UTC }] Mar 11 13:25:10.101: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:32 +0000 UTC }] Mar 11 13:25:10.101: INFO: ss-2 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:32 +0000 UTC }] Mar 11 13:25:10.101: INFO: Mar 11 13:25:10.101: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 11 13:25:11.105: INFO: POD NODE PHASE GRACE CONDITIONS Mar 11 13:25:11.105: INFO: ss-0 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:12 +0000 UTC }] Mar 11 13:25:11.105: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 
2020-03-11 13:24:32 +0000 UTC }] Mar 11 13:25:11.105: INFO: ss-2 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:32 +0000 UTC }] Mar 11 13:25:11.105: INFO: Mar 11 13:25:11.105: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 11 13:25:12.110: INFO: POD NODE PHASE GRACE CONDITIONS Mar 11 13:25:12.110: INFO: ss-0 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:12 +0000 UTC }] Mar 11 13:25:12.110: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:32 +0000 UTC }] Mar 11 13:25:12.110: INFO: ss-2 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:32 +0000 UTC }] Mar 11 13:25:12.110: INFO: Mar 11 13:25:12.110: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 11 13:25:13.114: INFO: POD NODE PHASE GRACE CONDITIONS Mar 11 13:25:13.114: INFO: ss-0 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:12 +0000 UTC }] Mar 11 13:25:13.114: INFO: ss-1 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:32 +0000 UTC }] Mar 11 13:25:13.114: INFO: ss-2 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:54 +0000 UTC 
ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:24:32 +0000 UTC }] Mar 11 13:25:13.114: INFO: Mar 11 13:25:13.114: INFO: StatefulSet ss has not reached scale 0, at 3 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-7526 Mar 11 13:25:14.118: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7526 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 11 13:25:14.252: INFO: rc: 1 Mar 11 13:25:14.252: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7526 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc002cb9f20 exit status 1 true [0xc00306c0b0 0xc00306c0c8 0xc00306c0e0] [0xc00306c0b0 0xc00306c0c8 0xc00306c0e0] [0xc00306c0c0 0xc00306c0d8] [0xba70e0 0xba70e0] 0xc002429a40 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 Mar 11 13:25:24.252: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7526 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 11 13:25:24.313: INFO: rc: 1 Mar 11 13:25:24.313: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7526 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0019c1b90 exit status 1 true [0xc0008ce648 0xc0008ce720 0xc0008ce768] [0xc0008ce648 0xc0008ce720 0xc0008ce768] [0xc0008ce708 0xc0008ce748] [0xba70e0 0xba70e0] 0xc001d467e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 11 13:25:34.313: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7526 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 11 13:25:34.387: INFO: rc: 1 Mar 11 13:25:34.387: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7526 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0019c1c80 exit status 1 true [0xc0008ce7b8 0xc0008ce860 0xc0008ce8f8] [0xc0008ce7b8 0xc0008ce860 0xc0008ce8f8] [0xc0008ce850 0xc0008ce8c8] [0xba70e0 0xba70e0] 0xc001d46c60 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 11 13:25:44.388: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7526 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 11 13:25:44.477: INFO: rc: 1 Mar 11 13:25:44.478: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7526 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found []
Mar 11 13:30:17.193: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7526 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Mar 11 13:30:17.303: INFO: rc: 1
Mar 11 13:30:17.303: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0:
Mar 11 13:30:17.303: INFO: Scaling statefulset ss to 0
Mar 11 13:30:17.309: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Mar 11 13:30:17.311: INFO: Deleting all statefulset in ns statefulset-7526
Mar 11 13:30:17.313: INFO: Scaling statefulset ss to 0
Mar 11 13:30:17.319: INFO: Waiting for statefulset status.replicas updated to 0
Mar 11 13:30:17.321: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 11 13:30:17.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-7526" for this suite.
Mar 11 13:30:23.364: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 11 13:30:23.460: INFO: namespace statefulset-7526 deletion completed in 6.110775494s
• [SLOW TEST:370.941 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
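What the tail of this test shows: the framework keeps re-running RunHostCmd to put index.html back (so the pod's readiness probe can pass again) while the burst scale-down deletes ss-0 out from under it, so every retry ends in NotFound until the wait gives up and the set is scaled to 0 directly. A hand-run sketch of the same scale-down, reusing the kubeconfig, namespace, and StatefulSet name from the log, with plain kubectl standing in for the framework's RunHostCmd helper:

  # Restore the page so nginx's readiness probe can pass again (fails with
  # NotFound once ss-0 has been deleted, exactly as in the retries above):
  kubectl --kubeconfig=/root/.kube/config -n statefulset-7526 exec ss-0 -- /bin/sh -c 'mv -v /tmp/index.html /usr/share/nginx/html/ || true'
  # Burst scale-down to zero; healthy and unhealthy pods are deleted alike:
  kubectl --kubeconfig=/root/.kube/config -n statefulset-7526 scale statefulset ss --replicas=0
  # Poll the same condition the framework waits for (status.replicas == 0):
  kubectl --kubeconfig=/root/.kube/config -n statefulset-7526 get statefulset ss -o jsonpath='{.status.replicas}'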
"pod-projected-configmaps-5ea484a3-186f-4497-a4f8-9b25451013da": Phase="Pending", Reason="", readiness=false. Elapsed: 10.039644ms Mar 11 13:30:25.527: INFO: Pod "pod-projected-configmaps-5ea484a3-186f-4497-a4f8-9b25451013da": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013930174s Mar 11 13:30:27.532: INFO: Pod "pod-projected-configmaps-5ea484a3-186f-4497-a4f8-9b25451013da": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018698174s STEP: Saw pod success Mar 11 13:30:27.532: INFO: Pod "pod-projected-configmaps-5ea484a3-186f-4497-a4f8-9b25451013da" satisfied condition "success or failure" Mar 11 13:30:27.536: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-5ea484a3-186f-4497-a4f8-9b25451013da container projected-configmap-volume-test: STEP: delete the pod Mar 11 13:30:27.575: INFO: Waiting for pod pod-projected-configmaps-5ea484a3-186f-4497-a4f8-9b25451013da to disappear Mar 11 13:30:27.579: INFO: Pod pod-projected-configmaps-5ea484a3-186f-4497-a4f8-9b25451013da no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:30:27.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6236" for this suite. Mar 11 13:30:33.621: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:30:33.694: INFO: namespace projected-6236 deletion completed in 6.111482755s • [SLOW TEST:10.234 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:30:33.694: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-3457 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-3457 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-3457 Mar 11 13:30:33.819: INFO: Found 0 stateful pods, waiting for 1 Mar 11 13:30:43.824: INFO: Waiting for pod ss-0 to enter Running - Ready=true, 
currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Mar 11 13:30:43.827: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3457 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 11 13:30:44.060: INFO: stderr: "I0311 13:30:43.963818 2173 log.go:172] (0xc000956580) (0xc000612a00) Create stream\nI0311 13:30:43.963864 2173 log.go:172] (0xc000956580) (0xc000612a00) Stream added, broadcasting: 1\nI0311 13:30:43.966044 2173 log.go:172] (0xc000956580) Reply frame received for 1\nI0311 13:30:43.966087 2173 log.go:172] (0xc000956580) (0xc00093a000) Create stream\nI0311 13:30:43.966105 2173 log.go:172] (0xc000956580) (0xc00093a000) Stream added, broadcasting: 3\nI0311 13:30:43.967046 2173 log.go:172] (0xc000956580) Reply frame received for 3\nI0311 13:30:43.967092 2173 log.go:172] (0xc000956580) (0xc0009bc000) Create stream\nI0311 13:30:43.967114 2173 log.go:172] (0xc000956580) (0xc0009bc000) Stream added, broadcasting: 5\nI0311 13:30:43.968087 2173 log.go:172] (0xc000956580) Reply frame received for 5\nI0311 13:30:44.032645 2173 log.go:172] (0xc000956580) Data frame received for 5\nI0311 13:30:44.032689 2173 log.go:172] (0xc0009bc000) (5) Data frame handling\nI0311 13:30:44.032705 2173 log.go:172] (0xc0009bc000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0311 13:30:44.054740 2173 log.go:172] (0xc000956580) Data frame received for 5\nI0311 13:30:44.054755 2173 log.go:172] (0xc0009bc000) (5) Data frame handling\nI0311 13:30:44.054829 2173 log.go:172] (0xc000956580) Data frame received for 3\nI0311 13:30:44.054877 2173 log.go:172] (0xc00093a000) (3) Data frame handling\nI0311 13:30:44.054893 2173 log.go:172] (0xc00093a000) (3) Data frame sent\nI0311 13:30:44.054904 2173 log.go:172] (0xc000956580) Data frame received for 3\nI0311 13:30:44.054909 2173 log.go:172] (0xc00093a000) (3) Data frame handling\nI0311 13:30:44.056309 2173 log.go:172] (0xc000956580) Data frame received for 1\nI0311 13:30:44.056329 2173 log.go:172] (0xc000612a00) (1) Data frame handling\nI0311 13:30:44.056341 2173 log.go:172] (0xc000612a00) (1) Data frame sent\nI0311 13:30:44.056381 2173 log.go:172] (0xc000956580) (0xc000612a00) Stream removed, broadcasting: 1\nI0311 13:30:44.056433 2173 log.go:172] (0xc000956580) Go away received\nI0311 13:30:44.057119 2173 log.go:172] (0xc000956580) (0xc000612a00) Stream removed, broadcasting: 1\nI0311 13:30:44.057140 2173 log.go:172] (0xc000956580) (0xc00093a000) Stream removed, broadcasting: 3\nI0311 13:30:44.057164 2173 log.go:172] (0xc000956580) (0xc0009bc000) Stream removed, broadcasting: 5\n" Mar 11 13:30:44.060: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 11 13:30:44.060: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 11 13:30:44.063: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Mar 11 13:30:54.068: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 11 13:30:54.068: INFO: Waiting for statefulset status.replicas updated to 0 Mar 11 13:30:54.091: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999632s Mar 11 13:30:55.096: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.985444707s Mar 11 13:30:56.099: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.98079496s Mar 
11 13:30:57.104: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.976985436s Mar 11 13:30:58.108: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.97281592s Mar 11 13:30:59.113: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.968202763s Mar 11 13:31:00.117: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.963658866s Mar 11 13:31:01.122: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.959486195s Mar 11 13:31:02.126: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.954814612s Mar 11 13:31:03.131: INFO: Verifying statefulset ss doesn't scale past 1 for another 950.895269ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-3457 Mar 11 13:31:04.135: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3457 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 11 13:31:04.332: INFO: stderr: "I0311 13:31:04.275379 2193 log.go:172] (0xc000116630) (0xc0007ec640) Create stream\nI0311 13:31:04.275425 2193 log.go:172] (0xc000116630) (0xc0007ec640) Stream added, broadcasting: 1\nI0311 13:31:04.277336 2193 log.go:172] (0xc000116630) Reply frame received for 1\nI0311 13:31:04.277374 2193 log.go:172] (0xc000116630) (0xc00083e000) Create stream\nI0311 13:31:04.277386 2193 log.go:172] (0xc000116630) (0xc00083e000) Stream added, broadcasting: 3\nI0311 13:31:04.278098 2193 log.go:172] (0xc000116630) Reply frame received for 3\nI0311 13:31:04.278150 2193 log.go:172] (0xc000116630) (0xc00083e0a0) Create stream\nI0311 13:31:04.278160 2193 log.go:172] (0xc000116630) (0xc00083e0a0) Stream added, broadcasting: 5\nI0311 13:31:04.278929 2193 log.go:172] (0xc000116630) Reply frame received for 5\nI0311 13:31:04.327718 2193 log.go:172] (0xc000116630) Data frame received for 3\nI0311 13:31:04.327736 2193 log.go:172] (0xc00083e000) (3) Data frame handling\nI0311 13:31:04.327745 2193 log.go:172] (0xc00083e000) (3) Data frame sent\nI0311 13:31:04.327749 2193 log.go:172] (0xc000116630) Data frame received for 3\nI0311 13:31:04.327753 2193 log.go:172] (0xc00083e000) (3) Data frame handling\nI0311 13:31:04.327827 2193 log.go:172] (0xc000116630) Data frame received for 5\nI0311 13:31:04.327836 2193 log.go:172] (0xc00083e0a0) (5) Data frame handling\nI0311 13:31:04.327845 2193 log.go:172] (0xc00083e0a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0311 13:31:04.328129 2193 log.go:172] (0xc000116630) Data frame received for 5\nI0311 13:31:04.328149 2193 log.go:172] (0xc00083e0a0) (5) Data frame handling\nI0311 13:31:04.328943 2193 log.go:172] (0xc000116630) Data frame received for 1\nI0311 13:31:04.328963 2193 log.go:172] (0xc0007ec640) (1) Data frame handling\nI0311 13:31:04.328973 2193 log.go:172] (0xc0007ec640) (1) Data frame sent\nI0311 13:31:04.329019 2193 log.go:172] (0xc000116630) (0xc0007ec640) Stream removed, broadcasting: 1\nI0311 13:31:04.329039 2193 log.go:172] (0xc000116630) Go away received\nI0311 13:31:04.329322 2193 log.go:172] (0xc000116630) (0xc0007ec640) Stream removed, broadcasting: 1\nI0311 13:31:04.329343 2193 log.go:172] (0xc000116630) (0xc00083e000) Stream removed, broadcasting: 3\nI0311 13:31:04.329352 2193 log.go:172] (0xc000116630) (0xc00083e0a0) Stream removed, broadcasting: 5\n" Mar 11 13:31:04.332: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 11 13:31:04.332: INFO: stdout of mv -v /tmp/index.html 
/usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 11 13:31:04.347: INFO: Found 1 stateful pods, waiting for 3 Mar 11 13:31:14.352: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Mar 11 13:31:14.352: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Mar 11 13:31:14.352: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Mar 11 13:31:14.358: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3457 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 11 13:31:14.552: INFO: stderr: "I0311 13:31:14.482079 2211 log.go:172] (0xc000a360b0) (0xc0008be0a0) Create stream\nI0311 13:31:14.482163 2211 log.go:172] (0xc000a360b0) (0xc0008be0a0) Stream added, broadcasting: 1\nI0311 13:31:14.483840 2211 log.go:172] (0xc000a360b0) Reply frame received for 1\nI0311 13:31:14.483867 2211 log.go:172] (0xc000a360b0) (0xc0008be140) Create stream\nI0311 13:31:14.483874 2211 log.go:172] (0xc000a360b0) (0xc0008be140) Stream added, broadcasting: 3\nI0311 13:31:14.484649 2211 log.go:172] (0xc000a360b0) Reply frame received for 3\nI0311 13:31:14.484674 2211 log.go:172] (0xc000a360b0) (0xc000a22000) Create stream\nI0311 13:31:14.484684 2211 log.go:172] (0xc000a360b0) (0xc000a22000) Stream added, broadcasting: 5\nI0311 13:31:14.485588 2211 log.go:172] (0xc000a360b0) Reply frame received for 5\nI0311 13:31:14.548001 2211 log.go:172] (0xc000a360b0) Data frame received for 3\nI0311 13:31:14.548044 2211 log.go:172] (0xc0008be140) (3) Data frame handling\nI0311 13:31:14.548056 2211 log.go:172] (0xc0008be140) (3) Data frame sent\nI0311 13:31:14.548064 2211 log.go:172] (0xc000a360b0) Data frame received for 3\nI0311 13:31:14.548085 2211 log.go:172] (0xc000a360b0) Data frame received for 5\nI0311 13:31:14.548109 2211 log.go:172] (0xc000a22000) (5) Data frame handling\nI0311 13:31:14.548117 2211 log.go:172] (0xc000a22000) (5) Data frame sent\nI0311 13:31:14.548123 2211 log.go:172] (0xc000a360b0) Data frame received for 5\nI0311 13:31:14.548136 2211 log.go:172] (0xc000a22000) (5) Data frame handling\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0311 13:31:14.548154 2211 log.go:172] (0xc0008be140) (3) Data frame handling\nI0311 13:31:14.549283 2211 log.go:172] (0xc000a360b0) Data frame received for 1\nI0311 13:31:14.549298 2211 log.go:172] (0xc0008be0a0) (1) Data frame handling\nI0311 13:31:14.549305 2211 log.go:172] (0xc0008be0a0) (1) Data frame sent\nI0311 13:31:14.549314 2211 log.go:172] (0xc000a360b0) (0xc0008be0a0) Stream removed, broadcasting: 1\nI0311 13:31:14.549356 2211 log.go:172] (0xc000a360b0) Go away received\nI0311 13:31:14.549581 2211 log.go:172] (0xc000a360b0) (0xc0008be0a0) Stream removed, broadcasting: 1\nI0311 13:31:14.549599 2211 log.go:172] (0xc000a360b0) (0xc0008be140) Stream removed, broadcasting: 3\nI0311 13:31:14.549606 2211 log.go:172] (0xc000a360b0) (0xc000a22000) Stream removed, broadcasting: 5\n" Mar 11 13:31:14.552: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 11 13:31:14.552: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 11 13:31:14.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec 
--namespace=statefulset-3457 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 11 13:31:14.730: INFO: stderr: "I0311 13:31:14.652443 2231 log.go:172] (0xc000116dc0) (0xc000336820) Create stream\nI0311 13:31:14.652481 2231 log.go:172] (0xc000116dc0) (0xc000336820) Stream added, broadcasting: 1\nI0311 13:31:14.654733 2231 log.go:172] (0xc000116dc0) Reply frame received for 1\nI0311 13:31:14.654758 2231 log.go:172] (0xc000116dc0) (0xc000336000) Create stream\nI0311 13:31:14.654764 2231 log.go:172] (0xc000116dc0) (0xc000336000) Stream added, broadcasting: 3\nI0311 13:31:14.655441 2231 log.go:172] (0xc000116dc0) Reply frame received for 3\nI0311 13:31:14.655481 2231 log.go:172] (0xc000116dc0) (0xc000616280) Create stream\nI0311 13:31:14.655492 2231 log.go:172] (0xc000116dc0) (0xc000616280) Stream added, broadcasting: 5\nI0311 13:31:14.656348 2231 log.go:172] (0xc000116dc0) Reply frame received for 5\nI0311 13:31:14.707515 2231 log.go:172] (0xc000116dc0) Data frame received for 5\nI0311 13:31:14.707537 2231 log.go:172] (0xc000616280) (5) Data frame handling\nI0311 13:31:14.707552 2231 log.go:172] (0xc000616280) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0311 13:31:14.724191 2231 log.go:172] (0xc000116dc0) Data frame received for 3\nI0311 13:31:14.724236 2231 log.go:172] (0xc000336000) (3) Data frame handling\nI0311 13:31:14.724247 2231 log.go:172] (0xc000336000) (3) Data frame sent\nI0311 13:31:14.724253 2231 log.go:172] (0xc000116dc0) Data frame received for 3\nI0311 13:31:14.724257 2231 log.go:172] (0xc000336000) (3) Data frame handling\nI0311 13:31:14.724280 2231 log.go:172] (0xc000116dc0) Data frame received for 5\nI0311 13:31:14.724286 2231 log.go:172] (0xc000616280) (5) Data frame handling\nI0311 13:31:14.725700 2231 log.go:172] (0xc000116dc0) Data frame received for 1\nI0311 13:31:14.725716 2231 log.go:172] (0xc000336820) (1) Data frame handling\nI0311 13:31:14.725730 2231 log.go:172] (0xc000336820) (1) Data frame sent\nI0311 13:31:14.725739 2231 log.go:172] (0xc000116dc0) (0xc000336820) Stream removed, broadcasting: 1\nI0311 13:31:14.725757 2231 log.go:172] (0xc000116dc0) Go away received\nI0311 13:31:14.726043 2231 log.go:172] (0xc000116dc0) (0xc000336820) Stream removed, broadcasting: 1\nI0311 13:31:14.726058 2231 log.go:172] (0xc000116dc0) (0xc000336000) Stream removed, broadcasting: 3\nI0311 13:31:14.726065 2231 log.go:172] (0xc000116dc0) (0xc000616280) Stream removed, broadcasting: 5\n" Mar 11 13:31:14.730: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 11 13:31:14.730: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 11 13:31:14.730: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3457 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 11 13:31:14.927: INFO: stderr: "I0311 13:31:14.832334 2251 log.go:172] (0xc000116dc0) (0xc0001f46e0) Create stream\nI0311 13:31:14.832386 2251 log.go:172] (0xc000116dc0) (0xc0001f46e0) Stream added, broadcasting: 1\nI0311 13:31:14.834043 2251 log.go:172] (0xc000116dc0) Reply frame received for 1\nI0311 13:31:14.834072 2251 log.go:172] (0xc000116dc0) (0xc0008b4000) Create stream\nI0311 13:31:14.834084 2251 log.go:172] (0xc000116dc0) (0xc0008b4000) Stream added, broadcasting: 3\nI0311 13:31:14.834678 2251 log.go:172] (0xc000116dc0) Reply frame received for 3\nI0311 13:31:14.834703 2251 log.go:172] 
(0xc000116dc0) (0xc0008c8000) Create stream\nI0311 13:31:14.834717 2251 log.go:172] (0xc000116dc0) (0xc0008c8000) Stream added, broadcasting: 5\nI0311 13:31:14.835389 2251 log.go:172] (0xc000116dc0) Reply frame received for 5\nI0311 13:31:14.904574 2251 log.go:172] (0xc000116dc0) Data frame received for 5\nI0311 13:31:14.904595 2251 log.go:172] (0xc0008c8000) (5) Data frame handling\nI0311 13:31:14.904616 2251 log.go:172] (0xc0008c8000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0311 13:31:14.922177 2251 log.go:172] (0xc000116dc0) Data frame received for 3\nI0311 13:31:14.922196 2251 log.go:172] (0xc0008b4000) (3) Data frame handling\nI0311 13:31:14.922208 2251 log.go:172] (0xc0008b4000) (3) Data frame sent\nI0311 13:31:14.922375 2251 log.go:172] (0xc000116dc0) Data frame received for 3\nI0311 13:31:14.922386 2251 log.go:172] (0xc0008b4000) (3) Data frame handling\nI0311 13:31:14.922653 2251 log.go:172] (0xc000116dc0) Data frame received for 5\nI0311 13:31:14.922679 2251 log.go:172] (0xc0008c8000) (5) Data frame handling\nI0311 13:31:14.924153 2251 log.go:172] (0xc000116dc0) Data frame received for 1\nI0311 13:31:14.924182 2251 log.go:172] (0xc0001f46e0) (1) Data frame handling\nI0311 13:31:14.924200 2251 log.go:172] (0xc0001f46e0) (1) Data frame sent\nI0311 13:31:14.924217 2251 log.go:172] (0xc000116dc0) (0xc0001f46e0) Stream removed, broadcasting: 1\nI0311 13:31:14.924231 2251 log.go:172] (0xc000116dc0) Go away received\nI0311 13:31:14.924662 2251 log.go:172] (0xc000116dc0) (0xc0001f46e0) Stream removed, broadcasting: 1\nI0311 13:31:14.924682 2251 log.go:172] (0xc000116dc0) (0xc0008b4000) Stream removed, broadcasting: 3\nI0311 13:31:14.924691 2251 log.go:172] (0xc000116dc0) (0xc0008c8000) Stream removed, broadcasting: 5\n" Mar 11 13:31:14.927: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 11 13:31:14.928: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 11 13:31:14.928: INFO: Waiting for statefulset status.replicas updated to 0 Mar 11 13:31:14.931: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Mar 11 13:31:24.939: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 11 13:31:24.939: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Mar 11 13:31:24.939: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Mar 11 13:31:24.953: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999391s Mar 11 13:31:25.958: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.992321838s Mar 11 13:31:26.962: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.987546194s Mar 11 13:31:27.967: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.982735973s Mar 11 13:31:28.972: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.978212906s Mar 11 13:31:29.977: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.973492171s Mar 11 13:31:30.982: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.968309714s Mar 11 13:31:31.987: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.963085073s Mar 11 13:31:32.992: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.957866364s Mar 11 13:31:33.997: INFO: Verifying statefulset ss doesn't scale past 3 for another 952.858809ms STEP: Scaling down stateful 
set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-3457
Mar 11 13:31:35.002: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3457 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Mar 11 13:31:35.218: INFO: stderr: "I0311 13:31:35.137621    2271 log.go:172] (0xc000116fd0) (0xc000516be0) Create stream\nI0311 13:31:35.137674    2271 log.go:172] (0xc000116fd0) (0xc000516be0) Stream added, broadcasting: 1\nI0311 13:31:35.140165    2271 log.go:172] (0xc000116fd0) Reply frame received for 1\nI0311 13:31:35.140210    2271 log.go:172] (0xc000116fd0) (0xc000a64000) Create stream\nI0311 13:31:35.140221    2271 log.go:172] (0xc000116fd0) (0xc000a64000) Stream added, broadcasting: 3\nI0311 13:31:35.140948    2271 log.go:172] (0xc000116fd0) Reply frame received for 3\nI0311 13:31:35.140973    2271 log.go:172] (0xc000116fd0) (0xc00076a000) Create stream\nI0311 13:31:35.140982    2271 log.go:172] (0xc000116fd0) (0xc00076a000) Stream added, broadcasting: 5\nI0311 13:31:35.141738    2271 log.go:172] (0xc000116fd0) Reply frame received for 5\nI0311 13:31:35.213472    2271 log.go:172] (0xc000116fd0) Data frame received for 3\nI0311 13:31:35.213499    2271 log.go:172] (0xc000a64000) (3) Data frame handling\nI0311 13:31:35.213517    2271 log.go:172] (0xc000a64000) (3) Data frame sent\nI0311 13:31:35.213528    2271 log.go:172] (0xc000116fd0) Data frame received for 3\nI0311 13:31:35.213539    2271 log.go:172] (0xc000a64000) (3) Data frame handling\nI0311 13:31:35.213826    2271 log.go:172] (0xc000116fd0) Data frame received for 5\nI0311 13:31:35.213849    2271 log.go:172] (0xc00076a000) (5) Data frame handling\nI0311 13:31:35.213864    2271 log.go:172] (0xc00076a000) (5) Data frame sent\nI0311 13:31:35.213871    2271 log.go:172] (0xc000116fd0) Data frame received for 5\nI0311 13:31:35.213882    2271 log.go:172] (0xc00076a000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0311 13:31:35.215345    2271 log.go:172] (0xc000116fd0) Data frame received for 1\nI0311 13:31:35.215362    2271 log.go:172] (0xc000516be0) (1) Data frame handling\nI0311 13:31:35.215378    2271 log.go:172] (0xc000516be0) (1) Data frame sent\nI0311 13:31:35.215395    2271 log.go:172] (0xc000116fd0) (0xc000516be0) Stream removed, broadcasting: 1\nI0311 13:31:35.215423    2271 log.go:172] (0xc000116fd0) Go away received\nI0311 13:31:35.215725    2271 log.go:172] (0xc000116fd0) (0xc000516be0) Stream removed, broadcasting: 1\nI0311 13:31:35.215745    2271 log.go:172] (0xc000116fd0) (0xc000a64000) Stream removed, broadcasting: 3\nI0311 13:31:35.215754    2271 log.go:172] (0xc000116fd0) (0xc00076a000) Stream removed, broadcasting: 5\n"
Mar 11 13:31:35.218: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Mar 11 13:31:35.218: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Mar 11 13:31:35.218: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3457 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Mar 11 13:31:35.407: INFO: stderr: "I0311 13:31:35.334896    2291 log.go:172] (0xc000978420) (0xc00035e820) Create stream\nI0311 13:31:35.334937    2291 log.go:172] (0xc000978420) (0xc00035e820) Stream added, broadcasting: 1\nI0311 13:31:35.344362    2291 log.go:172] (0xc000978420) Reply frame received for 1\nI0311 13:31:35.344426    2291 log.go:172] (0xc000978420) (0xc000a3e000) Create stream\nI0311 13:31:35.344444    2291
log.go:172] (0xc000978420) (0xc000a3e000) Stream added, broadcasting: 3\nI0311 13:31:35.347224 2291 log.go:172] (0xc000978420) Reply frame received for 3\nI0311 13:31:35.347250 2291 log.go:172] (0xc000978420) (0xc00061c280) Create stream\nI0311 13:31:35.347257 2291 log.go:172] (0xc000978420) (0xc00061c280) Stream added, broadcasting: 5\nI0311 13:31:35.348207 2291 log.go:172] (0xc000978420) Reply frame received for 5\nI0311 13:31:35.403512 2291 log.go:172] (0xc000978420) Data frame received for 5\nI0311 13:31:35.403536 2291 log.go:172] (0xc00061c280) (5) Data frame handling\nI0311 13:31:35.403543 2291 log.go:172] (0xc00061c280) (5) Data frame sent\nI0311 13:31:35.403548 2291 log.go:172] (0xc000978420) Data frame received for 5\nI0311 13:31:35.403551 2291 log.go:172] (0xc00061c280) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0311 13:31:35.403565 2291 log.go:172] (0xc000978420) Data frame received for 3\nI0311 13:31:35.403569 2291 log.go:172] (0xc000a3e000) (3) Data frame handling\nI0311 13:31:35.403573 2291 log.go:172] (0xc000a3e000) (3) Data frame sent\nI0311 13:31:35.403577 2291 log.go:172] (0xc000978420) Data frame received for 3\nI0311 13:31:35.403580 2291 log.go:172] (0xc000a3e000) (3) Data frame handling\nI0311 13:31:35.404705 2291 log.go:172] (0xc000978420) Data frame received for 1\nI0311 13:31:35.404721 2291 log.go:172] (0xc00035e820) (1) Data frame handling\nI0311 13:31:35.404728 2291 log.go:172] (0xc00035e820) (1) Data frame sent\nI0311 13:31:35.404738 2291 log.go:172] (0xc000978420) (0xc00035e820) Stream removed, broadcasting: 1\nI0311 13:31:35.404751 2291 log.go:172] (0xc000978420) Go away received\nI0311 13:31:35.404997 2291 log.go:172] (0xc000978420) (0xc00035e820) Stream removed, broadcasting: 1\nI0311 13:31:35.405008 2291 log.go:172] (0xc000978420) (0xc000a3e000) Stream removed, broadcasting: 3\nI0311 13:31:35.405013 2291 log.go:172] (0xc000978420) (0xc00061c280) Stream removed, broadcasting: 5\n" Mar 11 13:31:35.407: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 11 13:31:35.407: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 11 13:31:35.407: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3457 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 11 13:31:35.565: INFO: stderr: "I0311 13:31:35.507291 2311 log.go:172] (0xc000a2a420) (0xc0002786e0) Create stream\nI0311 13:31:35.507333 2311 log.go:172] (0xc000a2a420) (0xc0002786e0) Stream added, broadcasting: 1\nI0311 13:31:35.508907 2311 log.go:172] (0xc000a2a420) Reply frame received for 1\nI0311 13:31:35.508940 2311 log.go:172] (0xc000a2a420) (0xc0008cc000) Create stream\nI0311 13:31:35.508950 2311 log.go:172] (0xc000a2a420) (0xc0008cc000) Stream added, broadcasting: 3\nI0311 13:31:35.509660 2311 log.go:172] (0xc000a2a420) Reply frame received for 3\nI0311 13:31:35.509687 2311 log.go:172] (0xc000a2a420) (0xc0009b0000) Create stream\nI0311 13:31:35.509698 2311 log.go:172] (0xc000a2a420) (0xc0009b0000) Stream added, broadcasting: 5\nI0311 13:31:35.510240 2311 log.go:172] (0xc000a2a420) Reply frame received for 5\nI0311 13:31:35.560260 2311 log.go:172] (0xc000a2a420) Data frame received for 3\nI0311 13:31:35.560288 2311 log.go:172] (0xc0008cc000) (3) Data frame handling\nI0311 13:31:35.560298 2311 log.go:172] (0xc0008cc000) (3) Data frame sent\nI0311 13:31:35.560305 2311 log.go:172] (0xc000a2a420) Data 
frame received for 3\nI0311 13:31:35.560312 2311 log.go:172] (0xc0008cc000) (3) Data frame handling\nI0311 13:31:35.560360 2311 log.go:172] (0xc000a2a420) Data frame received for 5\nI0311 13:31:35.560388 2311 log.go:172] (0xc0009b0000) (5) Data frame handling\nI0311 13:31:35.560406 2311 log.go:172] (0xc0009b0000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0311 13:31:35.560428 2311 log.go:172] (0xc000a2a420) Data frame received for 5\nI0311 13:31:35.560439 2311 log.go:172] (0xc0009b0000) (5) Data frame handling\nI0311 13:31:35.561430 2311 log.go:172] (0xc000a2a420) Data frame received for 1\nI0311 13:31:35.561449 2311 log.go:172] (0xc0002786e0) (1) Data frame handling\nI0311 13:31:35.561459 2311 log.go:172] (0xc0002786e0) (1) Data frame sent\nI0311 13:31:35.561640 2311 log.go:172] (0xc000a2a420) (0xc0002786e0) Stream removed, broadcasting: 1\nI0311 13:31:35.561896 2311 log.go:172] (0xc000a2a420) Go away received\nI0311 13:31:35.561986 2311 log.go:172] (0xc000a2a420) (0xc0002786e0) Stream removed, broadcasting: 1\nI0311 13:31:35.562005 2311 log.go:172] (0xc000a2a420) (0xc0008cc000) Stream removed, broadcasting: 3\nI0311 13:31:35.562015 2311 log.go:172] (0xc000a2a420) (0xc0009b0000) Stream removed, broadcasting: 5\n" Mar 11 13:31:35.565: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 11 13:31:35.565: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 11 13:31:35.565: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Mar 11 13:32:05.583: INFO: Deleting all statefulset in ns statefulset-3457 Mar 11 13:32:05.586: INFO: Scaling statefulset ss to 0 Mar 11 13:32:05.595: INFO: Waiting for statefulset status.replicas updated to 0 Mar 11 13:32:05.598: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:32:05.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3457" for this suite. 
Mar 11 13:32:11.627: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 11 13:32:11.703: INFO: namespace statefulset-3457 deletion completed in 6.086248302s
• [SLOW TEST:98.009 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
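Both halts in this test come from one trick that is easy to miss in the stderr dumps: moving index.html out of the web root makes the nginx readiness probe fail, and with the default OrderedReady pod management the controller then holds scaling in either direction until readiness is restored, which is exactly the "doesn't scale past 1" and "doesn't scale past 3" polling above. A sketch of the same experiment by hand, reusing the namespace and the baz=blah,foo=bar selector from the log:

  # Make ss-0 unready, then ask for 3 replicas; ss-1 should not be created yet:
  kubectl --kubeconfig=/root/.kube/config -n statefulset-3457 exec ss-0 -- /bin/sh -c 'mv -v /usr/share/nginx/html/index.html /tmp/ || true'
  kubectl --kubeconfig=/root/.kube/config -n statefulset-3457 scale statefulset ss --replicas=3
  # Watch the ordinals: they appear one at a time, each only after the previous pod is Ready:
  kubectl --kubeconfig=/root/.kube/config -n statefulset-3457 get pods -l baz=blah,foo=bar -w
  # Undo the readiness break so the halted scale operation can proceed:
  kubectl --kubeconfig=/root/.kube/config -n statefulset-3457 exec ss-0 -- /bin/sh -c 'mv -v /tmp/index.html /usr/share/nginx/html/ || true'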
Mar 11 13:32:19.839: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:32:19.901: INFO: namespace emptydir-3599 deletion completed in 6.072395416s • [SLOW TEST:8.197 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:32:19.901: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Mar 11 13:32:25.997: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 11 13:32:26.004: INFO: Pod pod-with-poststart-exec-hook still exists Mar 11 13:32:28.004: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 11 13:32:28.008: INFO: Pod pod-with-poststart-exec-hook still exists Mar 11 13:32:30.004: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 11 13:32:30.009: INFO: Pod pod-with-poststart-exec-hook still exists Mar 11 13:32:32.004: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 11 13:32:32.008: INFO: Pod pod-with-poststart-exec-hook still exists Mar 11 13:32:34.004: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 11 13:32:34.008: INFO: Pod pod-with-poststart-exec-hook still exists Mar 11 13:32:36.005: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 11 13:32:36.008: INFO: Pod pod-with-poststart-exec-hook still exists Mar 11 13:32:38.004: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 11 13:32:38.007: INFO: Pod pod-with-poststart-exec-hook still exists Mar 11 13:32:40.004: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 11 13:32:40.008: INFO: Pod pod-with-poststart-exec-hook still exists Mar 11 13:32:42.004: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 11 13:32:42.007: INFO: Pod pod-with-poststart-exec-hook still exists Mar 11 13:32:44.004: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 11 13:32:44.008: INFO: Pod pod-with-poststart-exec-hook still exists Mar 11 13:32:46.004: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 11 13:32:46.007: INFO: Pod pod-with-poststart-exec-hook still exists Mar 11 13:32:48.004: INFO: Waiting for pod 
pod-with-poststart-exec-hook to disappear Mar 11 13:32:48.008: INFO: Pod pod-with-poststart-exec-hook still exists Mar 11 13:32:50.004: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 11 13:32:50.009: INFO: Pod pod-with-poststart-exec-hook still exists Mar 11 13:32:52.004: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 11 13:32:52.037: INFO: Pod pod-with-poststart-exec-hook still exists Mar 11 13:32:54.004: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 11 13:32:54.008: INFO: Pod pod-with-poststart-exec-hook still exists Mar 11 13:32:56.005: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 11 13:32:56.031: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:32:56.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-3325" for this suite. Mar 11 13:33:18.077: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:33:18.157: INFO: namespace container-lifecycle-hook-3325 deletion completed in 22.122633325s • [SLOW TEST:58.256 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:33:18.158: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 11 13:33:18.233: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7add912e-f0fd-433a-b1a4-ad62d06273d1" in namespace "downward-api-4133" to be "success or failure" Mar 11 13:33:18.238: INFO: Pod "downwardapi-volume-7add912e-f0fd-433a-b1a4-ad62d06273d1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.701839ms Mar 11 13:33:20.241: INFO: Pod "downwardapi-volume-7add912e-f0fd-433a-b1a4-ad62d06273d1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.007900039s STEP: Saw pod success Mar 11 13:33:20.241: INFO: Pod "downwardapi-volume-7add912e-f0fd-433a-b1a4-ad62d06273d1" satisfied condition "success or failure" Mar 11 13:33:20.244: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-7add912e-f0fd-433a-b1a4-ad62d06273d1 container client-container: STEP: delete the pod Mar 11 13:33:20.278: INFO: Waiting for pod downwardapi-volume-7add912e-f0fd-433a-b1a4-ad62d06273d1 to disappear Mar 11 13:33:20.286: INFO: Pod downwardapi-volume-7add912e-f0fd-433a-b1a4-ad62d06273d1 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:33:20.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4133" for this suite. Mar 11 13:33:26.302: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:33:26.372: INFO: namespace downward-api-4133 deletion completed in 6.083074449s • [SLOW TEST:8.214 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:33:26.372: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating secret secrets-9711/secret-test-8b9ddefb-5086-453e-8ee2-865b3b1a3464 STEP: Creating a pod to test consume secrets Mar 11 13:33:26.505: INFO: Waiting up to 5m0s for pod "pod-configmaps-19e91e1f-b8a9-45e4-a763-f718feefe7e4" in namespace "secrets-9711" to be "success or failure" Mar 11 13:33:26.514: INFO: Pod "pod-configmaps-19e91e1f-b8a9-45e4-a763-f718feefe7e4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.652316ms Mar 11 13:33:28.518: INFO: Pod "pod-configmaps-19e91e1f-b8a9-45e4-a763-f718feefe7e4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.01241004s STEP: Saw pod success Mar 11 13:33:28.518: INFO: Pod "pod-configmaps-19e91e1f-b8a9-45e4-a763-f718feefe7e4" satisfied condition "success or failure" Mar 11 13:33:28.521: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-19e91e1f-b8a9-45e4-a763-f718feefe7e4 container env-test: STEP: delete the pod Mar 11 13:33:28.555: INFO: Waiting for pod pod-configmaps-19e91e1f-b8a9-45e4-a763-f718feefe7e4 to disappear Mar 11 13:33:28.562: INFO: Pod pod-configmaps-19e91e1f-b8a9-45e4-a763-f718feefe7e4 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:33:28.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9711" for this suite. Mar 11 13:33:34.576: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:33:34.633: INFO: namespace secrets-9711 deletion completed in 6.068407539s • [SLOW TEST:8.261 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:33:34.633: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 11 13:33:34.685: INFO: Waiting up to 5m0s for pod "downwardapi-volume-29170eec-9a73-419e-a3ce-e91ec7b3bccb" in namespace "projected-3982" to be "success or failure" Mar 11 13:33:34.691: INFO: Pod "downwardapi-volume-29170eec-9a73-419e-a3ce-e91ec7b3bccb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.276114ms Mar 11 13:33:36.695: INFO: Pod "downwardapi-volume-29170eec-9a73-419e-a3ce-e91ec7b3bccb": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.009939481s STEP: Saw pod success Mar 11 13:33:36.695: INFO: Pod "downwardapi-volume-29170eec-9a73-419e-a3ce-e91ec7b3bccb" satisfied condition "success or failure" Mar 11 13:33:36.697: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-29170eec-9a73-419e-a3ce-e91ec7b3bccb container client-container: STEP: delete the pod Mar 11 13:33:36.717: INFO: Waiting for pod downwardapi-volume-29170eec-9a73-419e-a3ce-e91ec7b3bccb to disappear Mar 11 13:33:36.721: INFO: Pod downwardapi-volume-29170eec-9a73-419e-a3ce-e91ec7b3bccb no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:33:36.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3982" for this suite. Mar 11 13:33:42.774: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:33:42.828: INFO: namespace projected-3982 deletion completed in 6.089804726s • [SLOW TEST:8.195 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:33:42.829: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 11 13:33:42.931: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"7ceb57b2-d802-4516-8270-8f02fd16739f", Controller:(*bool)(0xc001ea87e2), BlockOwnerDeletion:(*bool)(0xc001ea87e3)}} Mar 11 13:33:42.949: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"225716f3-83fc-4e6b-883f-128ffba95f1d", Controller:(*bool)(0xc001ea8b2a), BlockOwnerDeletion:(*bool)(0xc001ea8b2b)}} Mar 11 13:33:42.956: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"b899e437-996b-414d-a44d-d49fe68f7b67", Controller:(*bool)(0xc002b379d2), BlockOwnerDeletion:(*bool)(0xc002b379d3)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:33:48.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1296" for this suite. 
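The OwnerReferences dumped above form a deliberate cycle (pod1 owned by pod3, pod2 by pod1, pod3 by pod2); the point of the spec is that the garbage collector must still make progress rather than deadlock on the circle. A sketch of how such references are wired with metav1.OwnerReference; the UIDs here are placeholders, since in the real test they come from the already-created pods:

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
)

func boolPtr(b bool) *bool { return &b }

func ownerRef(name string, uid types.UID) metav1.OwnerReference {
	return metav1.OwnerReference{
		APIVersion:         "v1",
		Kind:               "Pod",
		Name:               name,
		UID:                uid, // placeholder; real code copies the created pod's UID
		Controller:         boolPtr(false),
		BlockOwnerDeletion: boolPtr(false),
	}
}

func main() {
	// pod1 -> pod3 -> pod2 -> pod1: the circle the GC must tolerate.
	refs := map[string]metav1.OwnerReference{
		"pod1": ownerRef("pod3", "uid-3"),
		"pod2": ownerRef("pod1", "uid-1"),
		"pod3": ownerRef("pod2", "uid-2"),
	}
	for pod, ref := range refs {
		fmt.Printf("%s.OwnerReferences = [%s %s/%s]\n", pod, ref.Kind, ref.Name, ref.UID)
	}
}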
Mar 11 13:33:54.019: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:33:54.066: INFO: namespace gc-1296 deletion completed in 6.058934293s • [SLOW TEST:11.238 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:33:54.066: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-d1645f3d-ed71-44f1-9d5c-0f8ed480213b STEP: Creating a pod to test consume configMaps Mar 11 13:33:54.144: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-49cd3050-57a2-4bea-8d73-22d624f592de" in namespace "projected-6211" to be "success or failure" Mar 11 13:33:54.148: INFO: Pod "pod-projected-configmaps-49cd3050-57a2-4bea-8d73-22d624f592de": Phase="Pending", Reason="", readiness=false. Elapsed: 4.75389ms Mar 11 13:33:56.152: INFO: Pod "pod-projected-configmaps-49cd3050-57a2-4bea-8d73-22d624f592de": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008901826s STEP: Saw pod success Mar 11 13:33:56.152: INFO: Pod "pod-projected-configmaps-49cd3050-57a2-4bea-8d73-22d624f592de" satisfied condition "success or failure" Mar 11 13:33:56.156: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-49cd3050-57a2-4bea-8d73-22d624f592de container projected-configmap-volume-test: STEP: delete the pod Mar 11 13:33:56.209: INFO: Waiting for pod pod-projected-configmaps-49cd3050-57a2-4bea-8d73-22d624f592de to disappear Mar 11 13:33:56.221: INFO: Pod pod-projected-configmaps-49cd3050-57a2-4bea-8d73-22d624f592de no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:33:56.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6211" for this suite. 
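"With mappings" in the spec name means the volume does not expose every configMap key under its own name; each key is remapped to an explicit path via items. A minimal sketch of such a projected configMap volume; the configMap name, key, and path echo the test's naming style but are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume-map"},
								// The mapping: key "data-1" appears in the mount at
								// "path/to/data-2" instead of under its own name.
								Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "projected-configmap-volume-test",
				Image:        "busybox",
				Command:      []string{"cat", "/etc/projected-configmap-volume/path/to/data-2"},
				VolumeMounts: []corev1.VolumeMount{{Name: "projected-configmap-volume", MountPath: "/etc/projected-configmap-volume"}},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}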
Mar 11 13:34:02.245: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:34:02.341: INFO: namespace projected-6211 deletion completed in 6.117416324s • [SLOW TEST:8.275 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:34:02.342: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-9832 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 11 13:34:02.379: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 11 13:34:24.522: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.251:8080/dial?request=hostName&protocol=udp&host=10.244.1.107&port=8081&tries=1'] Namespace:pod-network-test-9832 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 11 13:34:24.522: INFO: >>> kubeConfig: /root/.kube/config I0311 13:34:24.551537 6 log.go:172] (0xc001ec44d0) (0xc0022b8140) Create stream I0311 13:34:24.551563 6 log.go:172] (0xc001ec44d0) (0xc0022b8140) Stream added, broadcasting: 1 I0311 13:34:24.555078 6 log.go:172] (0xc001ec44d0) Reply frame received for 1 I0311 13:34:24.555160 6 log.go:172] (0xc001ec44d0) (0xc000a96000) Create stream I0311 13:34:24.555238 6 log.go:172] (0xc001ec44d0) (0xc000a96000) Stream added, broadcasting: 3 I0311 13:34:24.559141 6 log.go:172] (0xc001ec44d0) Reply frame received for 3 I0311 13:34:24.559179 6 log.go:172] (0xc001ec44d0) (0xc0022b81e0) Create stream I0311 13:34:24.559189 6 log.go:172] (0xc001ec44d0) (0xc0022b81e0) Stream added, broadcasting: 5 I0311 13:34:24.560258 6 log.go:172] (0xc001ec44d0) Reply frame received for 5 I0311 13:34:24.617318 6 log.go:172] (0xc001ec44d0) Data frame received for 3 I0311 13:34:24.617347 6 log.go:172] (0xc000a96000) (3) Data frame handling I0311 13:34:24.617359 6 log.go:172] (0xc000a96000) (3) Data frame sent I0311 13:34:24.618049 6 log.go:172] (0xc001ec44d0) Data frame received for 5 I0311 13:34:24.618080 6 log.go:172] (0xc0022b81e0) (5) Data frame handling I0311 13:34:24.618099 6 log.go:172] (0xc001ec44d0) Data frame received for 3 I0311 13:34:24.618108 6 log.go:172] (0xc000a96000) (3) Data frame handling I0311 13:34:24.619444 6 log.go:172] (0xc001ec44d0) Data frame received for 1 I0311 13:34:24.619479 6 log.go:172] (0xc0022b8140) (1) Data frame 
handling I0311 13:34:24.619501 6 log.go:172] (0xc0022b8140) (1) Data frame sent I0311 13:34:24.619517 6 log.go:172] (0xc001ec44d0) (0xc0022b8140) Stream removed, broadcasting: 1 I0311 13:34:24.619532 6 log.go:172] (0xc001ec44d0) Go away received I0311 13:34:24.619690 6 log.go:172] (0xc001ec44d0) (0xc0022b8140) Stream removed, broadcasting: 1 I0311 13:34:24.619729 6 log.go:172] (0xc001ec44d0) (0xc000a96000) Stream removed, broadcasting: 3 I0311 13:34:24.619754 6 log.go:172] (0xc001ec44d0) (0xc0022b81e0) Stream removed, broadcasting: 5 Mar 11 13:34:24.619: INFO: Waiting for endpoints: map[] Mar 11 13:34:24.623: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.251:8080/dial?request=hostName&protocol=udp&host=10.244.2.250&port=8081&tries=1'] Namespace:pod-network-test-9832 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 11 13:34:24.623: INFO: >>> kubeConfig: /root/.kube/config I0311 13:34:24.646904 6 log.go:172] (0xc001220dc0) (0xc001fa5180) Create stream I0311 13:34:24.646931 6 log.go:172] (0xc001220dc0) (0xc001fa5180) Stream added, broadcasting: 1 I0311 13:34:24.648639 6 log.go:172] (0xc001220dc0) Reply frame received for 1 I0311 13:34:24.648662 6 log.go:172] (0xc001220dc0) (0xc001fa52c0) Create stream I0311 13:34:24.648669 6 log.go:172] (0xc001220dc0) (0xc001fa52c0) Stream added, broadcasting: 3 I0311 13:34:24.649285 6 log.go:172] (0xc001220dc0) Reply frame received for 3 I0311 13:34:24.649311 6 log.go:172] (0xc001220dc0) (0xc000a96140) Create stream I0311 13:34:24.649325 6 log.go:172] (0xc001220dc0) (0xc000a96140) Stream added, broadcasting: 5 I0311 13:34:24.650667 6 log.go:172] (0xc001220dc0) Reply frame received for 5 I0311 13:34:24.713124 6 log.go:172] (0xc001220dc0) Data frame received for 3 I0311 13:34:24.713175 6 log.go:172] (0xc001fa52c0) (3) Data frame handling I0311 13:34:24.713194 6 log.go:172] (0xc001fa52c0) (3) Data frame sent I0311 13:34:24.713686 6 log.go:172] (0xc001220dc0) Data frame received for 3 I0311 13:34:24.713708 6 log.go:172] (0xc001fa52c0) (3) Data frame handling I0311 13:34:24.713765 6 log.go:172] (0xc001220dc0) Data frame received for 5 I0311 13:34:24.713789 6 log.go:172] (0xc000a96140) (5) Data frame handling I0311 13:34:24.714971 6 log.go:172] (0xc001220dc0) Data frame received for 1 I0311 13:34:24.714985 6 log.go:172] (0xc001fa5180) (1) Data frame handling I0311 13:34:24.714993 6 log.go:172] (0xc001fa5180) (1) Data frame sent I0311 13:34:24.715004 6 log.go:172] (0xc001220dc0) (0xc001fa5180) Stream removed, broadcasting: 1 I0311 13:34:24.715022 6 log.go:172] (0xc001220dc0) Go away received I0311 13:34:24.715079 6 log.go:172] (0xc001220dc0) (0xc001fa5180) Stream removed, broadcasting: 1 I0311 13:34:24.715099 6 log.go:172] (0xc001220dc0) (0xc001fa52c0) Stream removed, broadcasting: 3 I0311 13:34:24.715105 6 log.go:172] (0xc001220dc0) (0xc000a96140) Stream removed, broadcasting: 5 Mar 11 13:34:24.715: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:34:24.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-9832" for this suite. 
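Each ExecWithOptions above shells into the host-network test pod and curls the /dial endpoint of the test-container pod, which then probes the named target pod over UDP port 8081 and reports back which hostname answered. A sketch of the same probe issued directly from Go; the URL shape is copied from the log, while the pod IPs are only reachable from inside the cluster network:

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	dialPod, targetPod := "10.244.2.251", "10.244.1.107" // pod IPs as seen in the log
	url := fmt.Sprintf(
		"http://%s:8080/dial?request=hostName&protocol=udp&host=%s&port=8081&tries=1",
		dialPod, targetPod)
	resp, err := http.Get(url) // fails unless run from within the cluster network
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// On success the body is JSON naming the target pod, e.g. {"responses":["netserver-0"]}.
	fmt.Println(string(body))
}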
Mar 11 13:34:46.744: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:34:46.834: INFO: namespace pod-network-test-9832 deletion completed in 22.101762961s • [SLOW TEST:44.493 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:34:46.835: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC Mar 11 13:34:46.899: INFO: namespace kubectl-1461 Mar 11 13:34:46.899: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1461' Mar 11 13:34:48.550: INFO: stderr: "" Mar 11 13:34:48.550: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. Mar 11 13:34:49.554: INFO: Selector matched 1 pods for map[app:redis] Mar 11 13:34:49.554: INFO: Found 0 / 1 Mar 11 13:34:50.554: INFO: Selector matched 1 pods for map[app:redis] Mar 11 13:34:50.554: INFO: Found 1 / 1 Mar 11 13:34:50.554: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Mar 11 13:34:50.557: INFO: Selector matched 1 pods for map[app:redis] Mar 11 13:34:50.557: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Mar 11 13:34:50.557: INFO: wait on redis-master startup in kubectl-1461 Mar 11 13:34:50.557: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-zf4nv redis-master --namespace=kubectl-1461' Mar 11 13:34:50.662: INFO: stderr: "" Mar 11 13:34:50.662: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 11 Mar 13:34:49.718 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 11 Mar 13:34:49.718 # Server started, Redis version 3.2.12\n1:M 11 Mar 13:34:49.718 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. 
This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 11 Mar 13:34:49.718 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC Mar 11 13:34:50.663: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-1461' Mar 11 13:34:50.792: INFO: stderr: "" Mar 11 13:34:50.792: INFO: stdout: "service/rm2 exposed\n" Mar 11 13:34:50.796: INFO: Service rm2 in namespace kubectl-1461 found. STEP: exposing service Mar 11 13:34:52.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-1461' Mar 11 13:34:52.982: INFO: stderr: "" Mar 11 13:34:52.982: INFO: stdout: "service/rm3 exposed\n" Mar 11 13:34:53.007: INFO: Service rm3 in namespace kubectl-1461 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:34:55.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1461" for this suite. Mar 11 13:35:17.034: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:35:17.116: INFO: namespace kubectl-1461 deletion completed in 22.09992343s • [SLOW TEST:30.282 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:35:17.117: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Mar 11 13:35:22.217: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:35:23.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-718" for this suite. 
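The adoption and release verified above are driven purely by label selection: a bare pod whose labels match the ReplicaSet's selector gets adopted on creation, and editing the label back out of the selector releases it again. A sketch of that matching pair; names and labels loosely mirror the test and the container image is a placeholder:

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	labels := map[string]string{"name": "pod-adoption-release"}

	// A bare pod carrying the label the ReplicaSet selects on.
	pod := &corev1.Pod{ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption-release", Labels: labels}}

	// A ReplicaSet whose selector matches that pod: on creation it adopts
	// the orphan instead of starting a new replica.
	rs := &appsv1.ReplicaSet{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption-release"},
		Spec: appsv1.ReplicaSetSpec{
			Replicas: int32Ptr(1),
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec:       corev1.PodSpec{Containers: []corev1.Container{{Name: "c", Image: "nginx"}}},
			},
		},
	}

	// Changing the pod's label so it no longer matches releases it: the
	// controller drops its ownerReference and spins up a replacement.
	pod.Labels["name"] = "no-longer-matching"
	fmt.Println(rs.Name, "selects", rs.Spec.Selector.MatchLabels, "; pod now labeled", pod.Labels)
}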
Mar 11 13:35:45.243: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:35:45.327: INFO: namespace replicaset-718 deletion completed in 22.092012744s • [SLOW TEST:28.210 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:35:45.327: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Mar 11 13:35:45.391: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:35:48.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-3018" for this suite. 
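The init-container spec relies on two guarantees: app containers never start while an init container has not succeeded, and with restartPolicy Never a failing init container fails the whole pod instead of being retried. A minimal pod of that shape; image and commands are placeholders, and the real test uses more than one init container:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-demo"},
		Spec: corev1.PodSpec{
			// Never means a failed init container marks the pod Failed
			// rather than being restarted in place.
			RestartPolicy: corev1.RestartPolicyNever,
			InitContainers: []corev1.Container{{
				Name:    "init1",
				Image:   "busybox",
				Command: []string{"sh", "-c", "exit 1"}, // deliberately fails
			}},
			Containers: []corev1.Container{{
				Name:    "run1",
				Image:   "busybox",
				Command: []string{"sh", "-c", "echo should never run"},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}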
Mar 11 13:35:54.333: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:35:54.411: INFO: namespace init-container-3018 deletion completed in 6.090503207s • [SLOW TEST:9.084 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:35:54.411: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service multi-endpoint-test in namespace services-1155 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1155 to expose endpoints map[] Mar 11 13:35:54.519: INFO: Get endpoints failed (38.789645ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Mar 11 13:35:55.522: INFO: successfully validated that service multi-endpoint-test in namespace services-1155 exposes endpoints map[] (1.04201965s elapsed) STEP: Creating pod pod1 in namespace services-1155 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1155 to expose endpoints map[pod1:[100]] Mar 11 13:35:57.558: INFO: successfully validated that service multi-endpoint-test in namespace services-1155 exposes endpoints map[pod1:[100]] (2.031770171s elapsed) STEP: Creating pod pod2 in namespace services-1155 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1155 to expose endpoints map[pod1:[100] pod2:[101]] Mar 11 13:35:59.661: INFO: successfully validated that service multi-endpoint-test in namespace services-1155 exposes endpoints map[pod1:[100] pod2:[101]] (2.099566278s elapsed) STEP: Deleting pod pod1 in namespace services-1155 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1155 to expose endpoints map[pod2:[101]] Mar 11 13:36:00.696: INFO: successfully validated that service multi-endpoint-test in namespace services-1155 exposes endpoints map[pod2:[101]] (1.031164379s elapsed) STEP: Deleting pod pod2 in namespace services-1155 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1155 to expose endpoints map[] Mar 11 13:36:00.708: INFO: successfully validated that service multi-endpoint-test in namespace services-1155 exposes endpoints map[] (6.636828ms elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:36:00.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "services-1155" for this suite. Mar 11 13:36:22.800: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:36:22.869: INFO: namespace services-1155 deletion completed in 22.076526618s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:28.457 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:36:22.869: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test hostPath mode Mar 11 13:36:22.920: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-4749" to be "success or failure" Mar 11 13:36:22.939: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 19.006226ms Mar 11 13:36:24.942: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.022779416s STEP: Saw pod success Mar 11 13:36:24.943: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Mar 11 13:36:24.945: INFO: Trying to get logs from node iruya-worker2 pod pod-host-path-test container test-container-1: STEP: delete the pod Mar 11 13:36:24.962: INFO: Waiting for pod pod-host-path-test to disappear Mar 11 13:36:24.996: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:36:24.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-4749" for this suite. 
Mar 11 13:36:31.018: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:36:31.099: INFO: namespace hostpath-4749 deletion completed in 6.099017583s • [SLOW TEST:8.229 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:36:31.099: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test env composition Mar 11 13:36:31.141: INFO: Waiting up to 5m0s for pod "var-expansion-7e538040-b91f-4104-8ee2-e6fe85f6d19b" in namespace "var-expansion-5511" to be "success or failure" Mar 11 13:36:31.161: INFO: Pod "var-expansion-7e538040-b91f-4104-8ee2-e6fe85f6d19b": Phase="Pending", Reason="", readiness=false. Elapsed: 19.364947ms Mar 11 13:36:33.165: INFO: Pod "var-expansion-7e538040-b91f-4104-8ee2-e6fe85f6d19b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.02385872s STEP: Saw pod success Mar 11 13:36:33.165: INFO: Pod "var-expansion-7e538040-b91f-4104-8ee2-e6fe85f6d19b" satisfied condition "success or failure" Mar 11 13:36:33.168: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-7e538040-b91f-4104-8ee2-e6fe85f6d19b container dapi-container: STEP: delete the pod Mar 11 13:36:33.227: INFO: Waiting for pod var-expansion-7e538040-b91f-4104-8ee2-e6fe85f6d19b to disappear Mar 11 13:36:33.237: INFO: Pod var-expansion-7e538040-b91f-4104-8ee2-e6fe85f6d19b no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:36:33.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-5511" for this suite. 
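Env composition works because the kubelet expands $(VAR) references against variables defined earlier in the same env list before the container starts. A minimal sketch of the kind of pod the spec creates; names and values are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "echo $FOOBAR"},
				Env: []corev1.EnvVar{
					{Name: "FOO", Value: "foo-value"},
					{Name: "BAR", Value: "bar-value"},
					// $(FOO) and $(BAR) are expanded by the kubelet against the
					// earlier entries in this list, yielding "foo-value;;bar-value".
					{Name: "FOOBAR", Value: "$(FOO);;$(BAR)"},
				},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}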
Mar 11 13:36:39.255: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:36:39.318: INFO: namespace var-expansion-5511 deletion completed in 6.076766937s • [SLOW TEST:8.219 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:36:39.318: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-55b0271e-645d-4958-a3e2-fbc69a1aa1d6 STEP: Creating a pod to test consume secrets Mar 11 13:36:39.409: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c20e7c3c-4441-4f0d-8c37-04c87a08705f" in namespace "projected-2395" to be "success or failure" Mar 11 13:36:39.415: INFO: Pod "pod-projected-secrets-c20e7c3c-4441-4f0d-8c37-04c87a08705f": Phase="Pending", Reason="", readiness=false. Elapsed: 5.34161ms Mar 11 13:36:41.419: INFO: Pod "pod-projected-secrets-c20e7c3c-4441-4f0d-8c37-04c87a08705f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009947225s STEP: Saw pod success Mar 11 13:36:41.419: INFO: Pod "pod-projected-secrets-c20e7c3c-4441-4f0d-8c37-04c87a08705f" satisfied condition "success or failure" Mar 11 13:36:41.422: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-c20e7c3c-4441-4f0d-8c37-04c87a08705f container projected-secret-volume-test: STEP: delete the pod Mar 11 13:36:41.441: INFO: Waiting for pod pod-projected-secrets-c20e7c3c-4441-4f0d-8c37-04c87a08705f to disappear Mar 11 13:36:41.463: INFO: Pod pod-projected-secrets-c20e7c3c-4441-4f0d-8c37-04c87a08705f no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:36:41.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2395" for this suite. 
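defaultMode sets the permission bits on every file the volume materializes; the test container then stats a projected file to confirm them. A sketch using a projected secret source with mode 0400; the secret name and mount path are placeholders:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-secret-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						// 0400: files in the volume are readable by owner only.
						DefaultMode: int32Ptr(0400),
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test"},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "projected-secret-volume-test",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "stat -c %a /etc/projected-secret-volume/*"},
				VolumeMounts: []corev1.VolumeMount{{Name: "projected-secret-volume", MountPath: "/etc/projected-secret-volume"}},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}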
Mar 11 13:36:47.478: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:36:47.557: INFO: namespace projected-2395 deletion completed in 6.090880531s • [SLOW TEST:8.239 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:36:47.557: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on node default medium Mar 11 13:36:47.621: INFO: Waiting up to 5m0s for pod "pod-097bddd7-4ee0-42a7-a46f-5572c26b443e" in namespace "emptydir-1708" to be "success or failure" Mar 11 13:36:47.625: INFO: Pod "pod-097bddd7-4ee0-42a7-a46f-5572c26b443e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.447249ms Mar 11 13:36:49.628: INFO: Pod "pod-097bddd7-4ee0-42a7-a46f-5572c26b443e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006785629s STEP: Saw pod success Mar 11 13:36:49.628: INFO: Pod "pod-097bddd7-4ee0-42a7-a46f-5572c26b443e" satisfied condition "success or failure" Mar 11 13:36:49.631: INFO: Trying to get logs from node iruya-worker pod pod-097bddd7-4ee0-42a7-a46f-5572c26b443e container test-container: STEP: delete the pod Mar 11 13:36:49.657: INFO: Waiting for pod pod-097bddd7-4ee0-42a7-a46f-5572c26b443e to disappear Mar 11 13:36:49.667: INFO: Pod pod-097bddd7-4ee0-42a7-a46f-5572c26b443e no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:36:49.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1708" for this suite. 
Mar 11 13:36:55.694: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:36:55.775: INFO: namespace emptydir-1708 deletion completed in 6.10556421s • [SLOW TEST:8.218 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:36:55.776: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token STEP: reading a file in the container Mar 11 13:36:58.358: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-630 pod-service-account-5a1fb179-a511-43d6-9b04-5a8acea0f7c6 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Mar 11 13:36:58.525: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-630 pod-service-account-5a1fb179-a511-43d6-9b04-5a8acea0f7c6 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Mar 11 13:36:58.685: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-630 pod-service-account-5a1fb179-a511-43d6-9b04-5a8acea0f7c6 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:36:58.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-630" for this suite. 
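The three kubectl exec calls above read the files that the service-account admission controller mounts into every pod by default. The same check expressed in Go; it is only meaningful when run inside a pod, where that volume actually exists:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	base := "/var/run/secrets/kubernetes.io/serviceaccount"
	for _, name := range []string{"token", "ca.crt", "namespace"} {
		b, err := os.ReadFile(filepath.Join(base, name))
		if err != nil {
			fmt.Println(name, "missing:", err) // expected outside a pod
			continue
		}
		fmt.Printf("%s: %d bytes\n", name, len(b))
	}
}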
Mar 11 13:37:04.868: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:37:04.956: INFO: namespace svcaccounts-630 deletion completed in 6.098204453s • [SLOW TEST:9.180 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:37:04.957: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Mar 11 13:37:09.119: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 11 13:37:09.136: INFO: Pod pod-with-poststart-http-hook still exists Mar 11 13:37:11.136: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 11 13:37:11.140: INFO: Pod pod-with-poststart-http-hook still exists Mar 11 13:37:13.136: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 11 13:37:13.140: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:37:13.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-8238" for this suite. 
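A postStart httpGet hook is performed by the kubelet immediately after the container starts, here against the separate handler pod created in BeforeEach; the pod does not become Ready until the hook succeeds, and the deletion loop afterward waits out the pod's graceful shutdown. A sketch of the hook wiring; the handler address is illustrative, and the handler type is corev1.Handler in the v1.15-era API this suite builds against (newer releases renamed it LifecycleHandler):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-http-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pod-with-poststart-http-hook",
				Image: "nginx",
				Lifecycle: &corev1.Lifecycle{
					// The kubelet issues this GET right after container start.
					PostStart: &corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{
							Host: "10.244.1.5", // illustrative: the hook-handler pod's IP
							Path: "/echo?msg=poststart",
							Port: intstr.FromInt(8080),
						},
					},
				},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}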
Mar 11 13:37:35.156: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:37:35.229: INFO: namespace container-lifecycle-hook-8238 deletion completed in 22.084751227s • [SLOW TEST:30.272 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:37:35.229: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating server pod server in namespace prestop-4067 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-4067 STEP: Deleting pre-stop pod Mar 11 13:37:44.310: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:37:44.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-4067" for this suite. 
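The "Saw" payload above is the server pod reporting that it received exactly one /prestop request before the tester died; the tester's preStop hook is what sends it. The real tester uses a dedicated test image, so the simplest equivalent shape is a wget in a preStop exec hook; the target URL is illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "tester"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "tester",
				Image:   "busybox",
				Command: []string{"sleep", "3600"},
				Lifecycle: &corev1.Lifecycle{
					// Runs inside the container before SIGTERM is delivered;
					// the server pod counts the /prestop hits it receives.
					PreStop: &corev1.Handler{
						Exec: &corev1.ExecAction{
							Command: []string{"wget", "-qO-", "http://10.244.1.5:8080/prestop"},
						},
					},
				},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}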
Mar 11 13:38:22.407: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:38:22.479: INFO: namespace prestop-4067 deletion completed in 38.134267101s • [SLOW TEST:47.250 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:38:22.480: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 11 13:38:22.563: INFO: Waiting up to 5m0s for pod "downwardapi-volume-df0d25ea-e300-476a-ad3f-a070c9dff21b" in namespace "downward-api-3707" to be "success or failure" Mar 11 13:38:22.580: INFO: Pod "downwardapi-volume-df0d25ea-e300-476a-ad3f-a070c9dff21b": Phase="Pending", Reason="", readiness=false. Elapsed: 17.565357ms Mar 11 13:38:24.583: INFO: Pod "downwardapi-volume-df0d25ea-e300-476a-ad3f-a070c9dff21b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.020431145s STEP: Saw pod success Mar 11 13:38:24.583: INFO: Pod "downwardapi-volume-df0d25ea-e300-476a-ad3f-a070c9dff21b" satisfied condition "success or failure" Mar 11 13:38:24.585: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-df0d25ea-e300-476a-ad3f-a070c9dff21b container client-container: STEP: delete the pod Mar 11 13:38:24.612: INFO: Waiting for pod downwardapi-volume-df0d25ea-e300-476a-ad3f-a070c9dff21b to disappear Mar 11 13:38:24.621: INFO: Pod downwardapi-volume-df0d25ea-e300-476a-ad3f-a070c9dff21b no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:38:24.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3707" for this suite. 
Mar 11 13:38:30.647: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:38:30.711: INFO: namespace downward-api-3707 deletion completed in 6.087354075s • [SLOW TEST:8.232 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:38:30.712: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Mar 11 13:38:30.785: INFO: Waiting up to 5m0s for pod "downward-api-575c515f-c818-4bb3-90b8-8c7e3fdba18c" in namespace "downward-api-3042" to be "success or failure" Mar 11 13:38:30.789: INFO: Pod "downward-api-575c515f-c818-4bb3-90b8-8c7e3fdba18c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.478256ms Mar 11 13:38:32.793: INFO: Pod "downward-api-575c515f-c818-4bb3-90b8-8c7e3fdba18c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008229944s STEP: Saw pod success Mar 11 13:38:32.793: INFO: Pod "downward-api-575c515f-c818-4bb3-90b8-8c7e3fdba18c" satisfied condition "success or failure" Mar 11 13:38:32.795: INFO: Trying to get logs from node iruya-worker2 pod downward-api-575c515f-c818-4bb3-90b8-8c7e3fdba18c container dapi-container: STEP: delete the pod Mar 11 13:38:32.834: INFO: Waiting for pod downward-api-575c515f-c818-4bb3-90b8-8c7e3fdba18c to disappear Mar 11 13:38:32.837: INFO: Pod downward-api-575c515f-c818-4bb3-90b8-8c7e3fdba18c no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:38:32.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3042" for this suite. 
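------------------------------
The env-var variant verified above surfaces the same resource fields through resourceFieldRef on the container environment, resolved by the kubelet at admission time. A minimal sketch with the core/v1 types; the variable names and divisor are illustrative, though "dapi-container" matches the container name in the log.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// Helper building an EnvVarSource for one resource field of the
	// container itself (limits.cpu, requests.memory, and so on).
	sel := func(res string) *corev1.EnvVarSource {
		return &corev1.EnvVarSource{
			ResourceFieldRef: &corev1.ResourceFieldSelector{
				ContainerName: "dapi-container",
				Resource:      res,
				Divisor:       resource.MustParse("1"),
			},
		}
	}
	envs := []corev1.EnvVar{
		{Name: "CPU_LIMIT", ValueFrom: sel("limits.cpu")},
		{Name: "MEMORY_LIMIT", ValueFrom: sel("limits.memory")},
		{Name: "CPU_REQUEST", ValueFrom: sel("requests.cpu")},
		{Name: "MEMORY_REQUEST", ValueFrom: sel("requests.memory")},
	}
	fmt.Println(len(envs))
}
------------------------------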
Mar 11 13:38:38.851: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:38:38.930: INFO: namespace downward-api-3042 deletion completed in 6.09057098s • [SLOW TEST:8.219 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:38:38.931: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 11 13:38:38.994: INFO: Waiting up to 5m0s for pod "downwardapi-volume-60821d18-11e7-4b38-b9b2-ad4e1df4335a" in namespace "downward-api-5145" to be "success or failure" Mar 11 13:38:38.998: INFO: Pod "downwardapi-volume-60821d18-11e7-4b38-b9b2-ad4e1df4335a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.299734ms Mar 11 13:38:41.002: INFO: Pod "downwardapi-volume-60821d18-11e7-4b38-b9b2-ad4e1df4335a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00818445s STEP: Saw pod success Mar 11 13:38:41.002: INFO: Pod "downwardapi-volume-60821d18-11e7-4b38-b9b2-ad4e1df4335a" satisfied condition "success or failure" Mar 11 13:38:41.005: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-60821d18-11e7-4b38-b9b2-ad4e1df4335a container client-container: STEP: delete the pod Mar 11 13:38:41.040: INFO: Waiting for pod downwardapi-volume-60821d18-11e7-4b38-b9b2-ad4e1df4335a to disappear Mar 11 13:38:41.046: INFO: Pod downwardapi-volume-60821d18-11e7-4b38-b9b2-ad4e1df4335a no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:38:41.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5145" for this suite. 
Mar 11 13:38:47.061: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:38:47.140: INFO: namespace downward-api-5145 deletion completed in 6.090741497s • [SLOW TEST:8.209 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:38:47.140: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs Mar 11 13:38:47.186: INFO: Waiting up to 5m0s for pod "pod-f067f48c-2f6d-4448-8cdc-dc6a18636e7d" in namespace "emptydir-4487" to be "success or failure" Mar 11 13:38:47.190: INFO: Pod "pod-f067f48c-2f6d-4448-8cdc-dc6a18636e7d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.294879ms Mar 11 13:38:49.193: INFO: Pod "pod-f067f48c-2f6d-4448-8cdc-dc6a18636e7d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007230758s STEP: Saw pod success Mar 11 13:38:49.194: INFO: Pod "pod-f067f48c-2f6d-4448-8cdc-dc6a18636e7d" satisfied condition "success or failure" Mar 11 13:38:49.196: INFO: Trying to get logs from node iruya-worker pod pod-f067f48c-2f6d-4448-8cdc-dc6a18636e7d container test-container: STEP: delete the pod Mar 11 13:38:49.216: INFO: Waiting for pod pod-f067f48c-2f6d-4448-8cdc-dc6a18636e7d to disappear Mar 11 13:38:49.220: INFO: Pod pod-f067f48c-2f6d-4448-8cdc-dc6a18636e7d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:38:49.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4487" for this suite. 
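------------------------------
The tmpfs flavor of the emptyDir test above is selected by the volume's medium. A minimal sketch of the volume and its mount (volume name and mount path are illustrative); the test container then stats the mount to check the expected root ownership and 0777 mode.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Medium "Memory" backs the emptyDir with tmpfs instead of node disk,
	// so its contents count against pod memory and vanish with the pod.
	vol := corev1.Volume{
		Name: "test-volume",
		VolumeSource: corev1.VolumeSource{
			EmptyDir: &corev1.EmptyDirVolumeSource{
				Medium: corev1.StorageMediumMemory,
			},
		},
	}
	mount := corev1.VolumeMount{Name: "test-volume", MountPath: "/test-volume"}
	fmt.Println(vol.Name, mount.MountPath)
}
------------------------------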
Mar 11 13:38:55.236: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:38:55.328: INFO: namespace emptydir-4487 deletion completed in 6.103067906s • [SLOW TEST:8.188 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:38:55.328: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 11 13:38:55.409: INFO: Creating deployment "nginx-deployment" Mar 11 13:38:55.413: INFO: Waiting for observed generation 1 Mar 11 13:38:57.451: INFO: Waiting for all required pods to come up Mar 11 13:38:57.455: INFO: Pod name nginx: Found 10 pods out of 10 STEP: ensuring each pod is running Mar 11 13:38:59.468: INFO: Waiting for deployment "nginx-deployment" to complete Mar 11 13:38:59.473: INFO: Updating deployment "nginx-deployment" with a non-existent image Mar 11 13:38:59.478: INFO: Updating deployment nginx-deployment Mar 11 13:38:59.478: INFO: Waiting for observed generation 2 Mar 11 13:39:01.495: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Mar 11 13:39:01.497: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Mar 11 13:39:01.500: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Mar 11 13:39:01.507: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Mar 11 13:39:01.507: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Mar 11 13:39:01.510: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Mar 11 13:39:01.515: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas Mar 11 13:39:01.515: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30 Mar 11 13:39:01.521: INFO: Updating deployment nginx-deployment Mar 11 13:39:01.521: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas Mar 11 13:39:01.534: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Mar 11 13:39:01.555: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Mar 11 13:39:01.663: INFO: Deployment "nginx-deployment": 
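------------------------------
The replica counts the proportional-scaling test asserts (20 for the first rollout's ReplicaSet, 13 for the second) follow from the arithmetic visible in the log. Scaling the Deployment from 10 to 30 mid-rollout raises the surge ceiling to 30 + maxSurge(3) = 33, matching the max-replicas annotation in the dump below, and the 33 - (8 + 5) = 20 extra replicas are split roughly in proportion to each ReplicaSet's current .spec.replicas. A back-of-the-envelope sketch of that split; the controller's actual leftover-rounding logic is more involved, and here the leftover replica happens to land on the new ReplicaSet.

package main

import "fmt"

func main() {
	const (
		desired  = 30 // new .spec.replicas on the Deployment
		maxSurge = 3  // from the RollingUpdate strategy in the dump below
	)
	oldRS, newRS := 8, 5 // .spec.replicas of each ReplicaSet at scale time

	allowed := desired + maxSurge      // 33, the surge ceiling
	extra := allowed - (oldRS + newRS) // 20 replicas to hand out

	// Proportional split by current size; integer division leaves one
	// leftover replica, which ends up on the new ReplicaSet here.
	addOld := extra * oldRS / (oldRS + newRS) // 20*8/13 = 12
	addNew := extra - addOld                  // 8

	fmt.Println(oldRS+addOld, newRS+addNew) // 20 13, as the test verifies
}
------------------------------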
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-8669,SelfLink:/apis/apps/v1/namespaces/deployment-8669/deployments/nginx-deployment,UID:ae22e523-c266-410e-923a-a42762de27c9,ResourceVersion:550122,Generation:3,CreationTimestamp:2020-03-11 13:38:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-03-11 13:38:59 +0000 UTC 2020-03-11 13:38:55 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.} {Available False 2020-03-11 13:39:01 +0000 UTC 2020-03-11 13:39:01 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},} Mar 11 13:39:01.701: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-8669,SelfLink:/apis/apps/v1/namespaces/deployment-8669/replicasets/nginx-deployment-55fb7cb77f,UID:b885b94b-a4b5-495e-93ff-36e94717eb87,ResourceVersion:550154,Generation:3,CreationTimestamp:2020-03-11 13:38:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment ae22e523-c266-410e-923a-a42762de27c9 0xc003036cd7 0xc003036cd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 11 13:39:01.701: INFO: All old ReplicaSets of Deployment "nginx-deployment": Mar 11 13:39:01.701: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-8669,SelfLink:/apis/apps/v1/namespaces/deployment-8669/replicasets/nginx-deployment-7b8c6f4498,UID:fbc71e34-fedf-473c-b001-ce726b03ae75,ResourceVersion:550146,Generation:3,CreationTimestamp:2020-03-11 13:38:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment ae22e523-c266-410e-923a-a42762de27c9 0xc003036da7 0xc003036da8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Mar 11 13:39:01.746: INFO: Pod "nginx-deployment-55fb7cb77f-5p5nc" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-5p5nc,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8669,SelfLink:/api/v1/namespaces/deployment-8669/pods/nginx-deployment-55fb7cb77f-5p5nc,UID:bb742888-cb16-478c-918a-e9e484713d68,ResourceVersion:550081,Generation:0,CreationTimestamp:2020-03-11 13:38:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f b885b94b-a4b5-495e-93ff-36e94717eb87 0xc002f0c317 0xc002f0c318}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4rqfw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4rqfw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-4rqfw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002f0c390} {node.kubernetes.io/unreachable Exists NoExecute 0xc002f0c3b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:38:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:38:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:38:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:38:59 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-03-11 13:38:59 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 11 13:39:01.746: INFO: Pod "nginx-deployment-55fb7cb77f-5prw5" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-5prw5,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8669,SelfLink:/api/v1/namespaces/deployment-8669/pods/nginx-deployment-55fb7cb77f-5prw5,UID:00140dda-b6a8-4555-b5a5-890ee88460b5,ResourceVersion:550142,Generation:0,CreationTimestamp:2020-03-11 13:39:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f b885b94b-a4b5-495e-93ff-36e94717eb87 0xc002f0c480 0xc002f0c481}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4rqfw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4rqfw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-4rqfw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002f0c500} {node.kubernetes.io/unreachable Exists NoExecute 0xc002f0c520}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:39:01 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 11 13:39:01.746: INFO: Pod "nginx-deployment-55fb7cb77f-8md94" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-8md94,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8669,SelfLink:/api/v1/namespaces/deployment-8669/pods/nginx-deployment-55fb7cb77f-8md94,UID:ac1e8457-b611-437f-9c17-ed211b5e89dc,ResourceVersion:550069,Generation:0,CreationTimestamp:2020-03-11 13:38:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f b885b94b-a4b5-495e-93ff-36e94717eb87 0xc002f0c5a0 0xc002f0c5a1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4rqfw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4rqfw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-4rqfw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002f0c620} {node.kubernetes.io/unreachable Exists NoExecute 0xc002f0c640}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 
0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:38:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:38:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:38:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:38:59 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.7,PodIP:,StartTime:2020-03-11 13:38:59 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 11 13:39:01.746: INFO: Pod "nginx-deployment-55fb7cb77f-9c8jq" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-9c8jq,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8669,SelfLink:/api/v1/namespaces/deployment-8669/pods/nginx-deployment-55fb7cb77f-9c8jq,UID:80950511-b514-4aee-a76f-ec666aaeba93,ResourceVersion:550079,Generation:0,CreationTimestamp:2020-03-11 13:38:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f b885b94b-a4b5-495e-93ff-36e94717eb87 0xc002f0c710 0xc002f0c711}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4rqfw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4rqfw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-4rqfw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002f0c790} {node.kubernetes.io/unreachable Exists NoExecute 0xc002f0c7b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:38:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:38:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:38:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:38:59 +0000 UTC 
}],Message:,Reason:,HostIP:172.17.0.7,PodIP:,StartTime:2020-03-11 13:38:59 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 11 13:39:01.747: INFO: Pod "nginx-deployment-55fb7cb77f-cb8ss" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-cb8ss,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8669,SelfLink:/api/v1/namespaces/deployment-8669/pods/nginx-deployment-55fb7cb77f-cb8ss,UID:f7194871-4def-4218-b859-89f5dd037c26,ResourceVersion:550073,Generation:0,CreationTimestamp:2020-03-11 13:38:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f b885b94b-a4b5-495e-93ff-36e94717eb87 0xc002f0c880 0xc002f0c881}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4rqfw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4rqfw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-4rqfw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002f0c900} {node.kubernetes.io/unreachable Exists NoExecute 0xc002f0c920}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:38:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:38:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:38:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:38:59 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-03-11 13:38:59 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 11 13:39:01.747: INFO: Pod "nginx-deployment-55fb7cb77f-cztss" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-cztss,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8669,SelfLink:/api/v1/namespaces/deployment-8669/pods/nginx-deployment-55fb7cb77f-cztss,UID:d00efbd2-119f-4d92-a379-3f26ffc2d8bf,ResourceVersion:550144,Generation:0,CreationTimestamp:2020-03-11 13:39:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f b885b94b-a4b5-495e-93ff-36e94717eb87 0xc002f0c9f0 0xc002f0c9f1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4rqfw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4rqfw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-4rqfw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002f0ca70} {node.kubernetes.io/unreachable Exists NoExecute 0xc002f0ca90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:39:01 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 11 13:39:01.747: INFO: Pod "nginx-deployment-55fb7cb77f-grlzn" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-grlzn,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8669,SelfLink:/api/v1/namespaces/deployment-8669/pods/nginx-deployment-55fb7cb77f-grlzn,UID:0ae24bda-6dc2-492c-9f88-bc77af0c84b7,ResourceVersion:550153,Generation:0,CreationTimestamp:2020-03-11 13:39:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f b885b94b-a4b5-495e-93ff-36e94717eb87 0xc002f0cb10 0xc002f0cb11}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4rqfw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4rqfw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-4rqfw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002f0cb90} {node.kubernetes.io/unreachable Exists NoExecute 0xc002f0cbb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:39:01 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 11 13:39:01.747: INFO: Pod "nginx-deployment-55fb7cb77f-hwv48" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-hwv48,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8669,SelfLink:/api/v1/namespaces/deployment-8669/pods/nginx-deployment-55fb7cb77f-hwv48,UID:816ca9f8-3326-43b3-b7f0-6b58d5c38014,ResourceVersion:550062,Generation:0,CreationTimestamp:2020-03-11 13:38:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f b885b94b-a4b5-495e-93ff-36e94717eb87 0xc002f0cc30 0xc002f0cc31}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4rqfw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4rqfw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-4rqfw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002f0ccb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002f0ccd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:38:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:38:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:38:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:38:59 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-03-11 13:38:59 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 11 13:39:01.747: INFO: Pod "nginx-deployment-55fb7cb77f-kv6qv" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-kv6qv,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8669,SelfLink:/api/v1/namespaces/deployment-8669/pods/nginx-deployment-55fb7cb77f-kv6qv,UID:11f9d14e-b109-4abf-815f-d862eb5a25a8,ResourceVersion:550143,Generation:0,CreationTimestamp:2020-03-11 13:39:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f b885b94b-a4b5-495e-93ff-36e94717eb87 0xc002f0cda0 0xc002f0cda1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4rqfw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4rqfw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-4rqfw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002f0ce20} {node.kubernetes.io/unreachable Exists NoExecute 0xc002f0ce40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:39:01 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 11 13:39:01.747: INFO: Pod "nginx-deployment-55fb7cb77f-mfh9p" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-mfh9p,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8669,SelfLink:/api/v1/namespaces/deployment-8669/pods/nginx-deployment-55fb7cb77f-mfh9p,UID:89354e6a-23ae-4229-ba40-43368a49844f,ResourceVersion:550114,Generation:0,CreationTimestamp:2020-03-11 13:39:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f b885b94b-a4b5-495e-93ff-36e94717eb87 0xc002f0cec0 0xc002f0cec1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4rqfw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4rqfw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-4rqfw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002f0cf40} {node.kubernetes.io/unreachable Exists NoExecute 0xc002f0cf60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 
0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:39:01 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 11 13:39:01.747: INFO: Pod "nginx-deployment-55fb7cb77f-sl27q" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-sl27q,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8669,SelfLink:/api/v1/namespaces/deployment-8669/pods/nginx-deployment-55fb7cb77f-sl27q,UID:fe93ad18-e074-40ac-8658-268ce2c95408,ResourceVersion:550145,Generation:0,CreationTimestamp:2020-03-11 13:39:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f b885b94b-a4b5-495e-93ff-36e94717eb87 0xc002f0cfe0 0xc002f0cfe1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4rqfw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4rqfw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-4rqfw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002f0d070} {node.kubernetes.io/unreachable Exists NoExecute 0xc002f0d090}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:39:01 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 11 13:39:01.748: INFO: Pod "nginx-deployment-55fb7cb77f-v2csm" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-v2csm,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8669,SelfLink:/api/v1/namespaces/deployment-8669/pods/nginx-deployment-55fb7cb77f-v2csm,UID:d62f9eeb-4ab2-4824-ab94-c6139376b390,ResourceVersion:550137,Generation:0,CreationTimestamp:2020-03-11 13:39:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f b885b94b-a4b5-495e-93ff-36e94717eb87 0xc002f0d110 
0xc002f0d111}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4rqfw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4rqfw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-4rqfw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002f0d190} {node.kubernetes.io/unreachable Exists NoExecute 0xc002f0d1b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:39:01 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 11 13:39:01.748: INFO: Pod "nginx-deployment-55fb7cb77f-wf52r" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-wf52r,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8669,SelfLink:/api/v1/namespaces/deployment-8669/pods/nginx-deployment-55fb7cb77f-wf52r,UID:054b4c76-edc3-4b49-96a8-ad990fcc23ee,ResourceVersion:550135,Generation:0,CreationTimestamp:2020-03-11 13:39:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f b885b94b-a4b5-495e-93ff-36e94717eb87 0xc002f0d230 0xc002f0d231}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4rqfw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4rqfw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-4rqfw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002f0d2b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002f0d2d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:39:01 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 11 13:39:01.748: INFO: Pod "nginx-deployment-7b8c6f4498-7cs68" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-7cs68,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8669,SelfLink:/api/v1/namespaces/deployment-8669/pods/nginx-deployment-7b8c6f4498-7cs68,UID:da6a1c00-99f1-4ed3-bc3a-c93a05333378,ResourceVersion:550105,Generation:0,CreationTimestamp:2020-03-11 13:39:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 fbc71e34-fedf-473c-b001-ce726b03ae75 0xc002f0d350 0xc002f0d351}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4rqfw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4rqfw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-4rqfw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002f0d3c0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc002f0d3e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:39:01 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 11 13:39:01.748: INFO: Pod "nginx-deployment-7b8c6f4498-886s7" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-886s7,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8669,SelfLink:/api/v1/namespaces/deployment-8669/pods/nginx-deployment-7b8c6f4498-886s7,UID:d04bed29-d613-4e25-8820-5f758ebf33e7,ResourceVersion:550014,Generation:0,CreationTimestamp:2020-03-11 13:38:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 fbc71e34-fedf-473c-b001-ce726b03ae75 0xc002f0d460 0xc002f0d461}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4rqfw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4rqfw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-4rqfw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002f0d4d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002f0d4f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:38:55 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:38:58 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:38:58 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:38:55 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.1.120,StartTime:2020-03-11 13:38:55 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-11 13:38:58 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 
containerd://a33e288f49ef11cfd02d838b838805736682e9149a09e0607942c7caae71618b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 11 13:39:01.748: INFO: Pod "nginx-deployment-7b8c6f4498-8fq9j" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-8fq9j,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8669,SelfLink:/api/v1/namespaces/deployment-8669/pods/nginx-deployment-7b8c6f4498-8fq9j,UID:35a58496-11c2-4053-978d-d47d1adeac30,ResourceVersion:550017,Generation:0,CreationTimestamp:2020-03-11 13:38:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 fbc71e34-fedf-473c-b001-ce726b03ae75 0xc002f0d5c0 0xc002f0d5c1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4rqfw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4rqfw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-4rqfw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002f0d630} {node.kubernetes.io/unreachable Exists NoExecute 0xc002f0d650}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:38:55 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:38:58 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:38:58 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:38:55 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.1.121,StartTime:2020-03-11 13:38:55 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-11 13:38:58 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://d660589d4e7376e5136cc2f6fa4818b535836237b1799dce9f6f7e46a46477c3}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 11 13:39:01.748: INFO: Pod "nginx-deployment-7b8c6f4498-8lz84" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-8lz84,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8669,SelfLink:/api/v1/namespaces/deployment-8669/pods/nginx-deployment-7b8c6f4498-8lz84,UID:06efa3ac-8995-4ad8-89a6-e8a7bbde6a79,ResourceVersion:550127,Generation:0,CreationTimestamp:2020-03-11 13:39:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 fbc71e34-fedf-473c-b001-ce726b03ae75 0xc002f0d720 0xc002f0d721}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4rqfw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4rqfw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-4rqfw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002f0d790} {node.kubernetes.io/unreachable Exists NoExecute 0xc002f0d7b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:39:01 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 11 13:39:01.748: INFO: Pod "nginx-deployment-7b8c6f4498-cs9t5" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-cs9t5,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8669,SelfLink:/api/v1/namespaces/deployment-8669/pods/nginx-deployment-7b8c6f4498-cs9t5,UID:840186a4-f920-414b-a5c0-e3bd566f800b,ResourceVersion:550130,Generation:0,CreationTimestamp:2020-03-11 13:39:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 fbc71e34-fedf-473c-b001-ce726b03ae75 0xc002f0d830 0xc002f0d831}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4rqfw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4rqfw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil 
nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-4rqfw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002f0d8a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002f0d8c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:39:01 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 11 13:39:01.748: INFO: Pod "nginx-deployment-7b8c6f4498-dswfs" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-dswfs,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8669,SelfLink:/api/v1/namespaces/deployment-8669/pods/nginx-deployment-7b8c6f4498-dswfs,UID:a55fd8d8-f470-4956-805c-96409b6b6c00,ResourceVersion:550140,Generation:0,CreationTimestamp:2020-03-11 13:39:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 fbc71e34-fedf-473c-b001-ce726b03ae75 0xc002f0d940 0xc002f0d941}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4rqfw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4rqfw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-4rqfw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002f0d9c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002f0d9e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:39:01 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 11 13:39:01.749: INFO: Pod "nginx-deployment-7b8c6f4498-fqf4v" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-fqf4v,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8669,SelfLink:/api/v1/namespaces/deployment-8669/pods/nginx-deployment-7b8c6f4498-fqf4v,UID:016d2e05-aed9-4bc7-9531-7b3966c79e25,ResourceVersion:550147,Generation:0,CreationTimestamp:2020-03-11 13:39:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 fbc71e34-fedf-473c-b001-ce726b03ae75 0xc002f0da60 0xc002f0da61}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4rqfw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4rqfw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-4rqfw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002f0dad0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc002f0db00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:39:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:39:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:39:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:39:01 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.7,PodIP:,StartTime:2020-03-11 13:39:01 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 11 13:39:01.749: INFO: Pod "nginx-deployment-7b8c6f4498-fs5gm" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-fs5gm,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8669,SelfLink:/api/v1/namespaces/deployment-8669/pods/nginx-deployment-7b8c6f4498-fs5gm,UID:788aa6f3-6bb5-41e7-9308-4296305c8983,ResourceVersion:550123,Generation:0,CreationTimestamp:2020-03-11 13:39:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 fbc71e34-fedf-473c-b001-ce726b03ae75 0xc002f0dbc0 0xc002f0dbc1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4rqfw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4rqfw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-4rqfw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002f0dc30} {node.kubernetes.io/unreachable Exists NoExecute 0xc002f0dc60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:39:01 +0000 UTC 
}],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 11 13:39:01.749: INFO: Pod "nginx-deployment-7b8c6f4498-g8mh5" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-g8mh5,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8669,SelfLink:/api/v1/namespaces/deployment-8669/pods/nginx-deployment-7b8c6f4498-g8mh5,UID:57aab9b3-f93f-4e36-84c9-a5d5506229cd,ResourceVersion:550152,Generation:0,CreationTimestamp:2020-03-11 13:39:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 fbc71e34-fedf-473c-b001-ce726b03ae75 0xc002f0dce0 0xc002f0dce1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4rqfw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4rqfw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-4rqfw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002f0dd50} {node.kubernetes.io/unreachable Exists NoExecute 0xc002f0dd70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:39:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:39:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:39:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:39:01 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-03-11 13:39:01 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 11 13:39:01.749: INFO: Pod "nginx-deployment-7b8c6f4498-gkrsw" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-gkrsw,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8669,SelfLink:/api/v1/namespaces/deployment-8669/pods/nginx-deployment-7b8c6f4498-gkrsw,UID:d272282c-2a1c-4569-8a98-a84731c1649d,ResourceVersion:550119,Generation:0,CreationTimestamp:2020-03-11 13:39:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 fbc71e34-fedf-473c-b001-ce726b03ae75 0xc002f0de40 0xc002f0de41}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4rqfw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4rqfw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-4rqfw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002f0deb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002f0ded0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:39:01 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 11 13:39:01.749: INFO: Pod "nginx-deployment-7b8c6f4498-jsts8" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-jsts8,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8669,SelfLink:/api/v1/namespaces/deployment-8669/pods/nginx-deployment-7b8c6f4498-jsts8,UID:d7647df9-9d99-4434-a23e-e1fd4fe64219,ResourceVersion:550010,Generation:0,CreationTimestamp:2020-03-11 13:38:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 fbc71e34-fedf-473c-b001-ce726b03ae75 0xc002f0df50 0xc002f0df51}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4rqfw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4rqfw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-4rqfw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002f0dfc0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002f0dfe0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:38:55 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:38:58 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:38:58 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:38:55 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.1.119,StartTime:2020-03-11 13:38:55 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-11 13:38:58 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://14018dd51ac474d8919cc52545e33679435cca532205718a2212a1fc6dd6c7ac}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 11 13:39:01.749: INFO: Pod "nginx-deployment-7b8c6f4498-kjs75" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-kjs75,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8669,SelfLink:/api/v1/namespaces/deployment-8669/pods/nginx-deployment-7b8c6f4498-kjs75,UID:c7e02124-d4ea-4b6a-ab0c-4f4969618030,ResourceVersion:550139,Generation:0,CreationTimestamp:2020-03-11 13:39:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 fbc71e34-fedf-473c-b001-ce726b03ae75 0xc0034000b0 0xc0034000b1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4rqfw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4rqfw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-4rqfw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc003400120} {node.kubernetes.io/unreachable Exists NoExecute 0xc003400140}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:39:01 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 11 13:39:01.749: INFO: Pod "nginx-deployment-7b8c6f4498-lmxdz" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-lmxdz,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8669,SelfLink:/api/v1/namespaces/deployment-8669/pods/nginx-deployment-7b8c6f4498-lmxdz,UID:6c8d01bc-ca93-4b97-b246-17ee3c7ec071,ResourceVersion:549981,Generation:0,CreationTimestamp:2020-03-11 13:38:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 fbc71e34-fedf-473c-b001-ce726b03ae75 0xc0034001c0 0xc0034001c1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4rqfw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4rqfw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-4rqfw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc003400230} {node.kubernetes.io/unreachable Exists NoExecute 
0xc003400250}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:38:55 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:38:57 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:38:57 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:38:55 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.1.117,StartTime:2020-03-11 13:38:55 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-11 13:38:57 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://68d68ebbb1ca3f73d77f40362f109a62985a49302f93f983ea9d03fb11587885}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 11 13:39:01.749: INFO: Pod "nginx-deployment-7b8c6f4498-mmxbg" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-mmxbg,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8669,SelfLink:/api/v1/namespaces/deployment-8669/pods/nginx-deployment-7b8c6f4498-mmxbg,UID:2baf8814-429a-49ae-8cf9-1126db8ea30f,ResourceVersion:549991,Generation:0,CreationTimestamp:2020-03-11 13:38:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 fbc71e34-fedf-473c-b001-ce726b03ae75 0xc003400320 0xc003400321}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4rqfw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4rqfw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-4rqfw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc003400390} {node.kubernetes.io/unreachable Exists NoExecute 0xc0034003b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:38:55 +0000 UTC } {Ready True 
0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:38:57 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:38:57 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:38:55 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.7,PodIP:10.244.2.9,StartTime:2020-03-11 13:38:55 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-11 13:38:57 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://45abd73e779bd6db1cf554c6f5439d1650d8f036242534926f3123d8b84d78b6}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 11 13:39:01.750: INFO: Pod "nginx-deployment-7b8c6f4498-n7zzd" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-n7zzd,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8669,SelfLink:/api/v1/namespaces/deployment-8669/pods/nginx-deployment-7b8c6f4498-n7zzd,UID:42fb48de-34fd-4c41-99ec-535cf3559d4a,ResourceVersion:550023,Generation:0,CreationTimestamp:2020-03-11 13:38:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 fbc71e34-fedf-473c-b001-ce726b03ae75 0xc003400480 0xc003400481}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4rqfw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4rqfw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-4rqfw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0034004f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc003400510}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:38:55 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:38:58 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:38:58 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:38:55 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.7,PodIP:10.244.2.12,StartTime:2020-03-11 13:38:55 +0000 
UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-11 13:38:57 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://6f735e924afb7d827edcdd611c74a9608ad2f8893b645f7ed382fdcdb2fe4fc8}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 11 13:39:01.750: INFO: Pod "nginx-deployment-7b8c6f4498-nd8pg" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-nd8pg,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8669,SelfLink:/api/v1/namespaces/deployment-8669/pods/nginx-deployment-7b8c6f4498-nd8pg,UID:78af62a0-4cfb-4f33-ae2a-3050b36a4d18,ResourceVersion:550126,Generation:0,CreationTimestamp:2020-03-11 13:39:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 fbc71e34-fedf-473c-b001-ce726b03ae75 0xc0034005e0 0xc0034005e1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4rqfw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4rqfw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-4rqfw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc003400650} {node.kubernetes.io/unreachable Exists NoExecute 0xc003400670}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:39:01 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 11 13:39:01.750: INFO: Pod "nginx-deployment-7b8c6f4498-nvqd7" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-nvqd7,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8669,SelfLink:/api/v1/namespaces/deployment-8669/pods/nginx-deployment-7b8c6f4498-nvqd7,UID:c720df51-b48a-4036-8eac-76e3a992b2db,ResourceVersion:550136,Generation:0,CreationTimestamp:2020-03-11 13:39:01 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 fbc71e34-fedf-473c-b001-ce726b03ae75 0xc0034006f0 0xc0034006f1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4rqfw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4rqfw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-4rqfw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc003400760} {node.kubernetes.io/unreachable Exists NoExecute 0xc003400780}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:39:01 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 11 13:39:01.750: INFO: Pod "nginx-deployment-7b8c6f4498-p8q68" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-p8q68,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8669,SelfLink:/api/v1/namespaces/deployment-8669/pods/nginx-deployment-7b8c6f4498-p8q68,UID:44f1011c-25fa-4c29-b3ef-eea7dff78574,ResourceVersion:550019,Generation:0,CreationTimestamp:2020-03-11 13:38:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 fbc71e34-fedf-473c-b001-ce726b03ae75 0xc003400800 0xc003400801}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4rqfw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4rqfw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-4rqfw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc003400870} {node.kubernetes.io/unreachable Exists NoExecute 0xc003400890}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:38:55 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:38:58 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:38:58 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:38:55 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.7,PodIP:10.244.2.10,StartTime:2020-03-11 13:38:55 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-11 13:38:57 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://20372d21216c0a1c771927610d159bfab3a66e99d236f54ef75ed48899615677}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 11 13:39:01.750: INFO: Pod "nginx-deployment-7b8c6f4498-w24ld" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-w24ld,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8669,SelfLink:/api/v1/namespaces/deployment-8669/pods/nginx-deployment-7b8c6f4498-w24ld,UID:f6a6ff6c-8e32-4fe5-bfc5-d854f894f3be,ResourceVersion:550138,Generation:0,CreationTimestamp:2020-03-11 13:39:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 fbc71e34-fedf-473c-b001-ce726b03ae75 0xc003400960 0xc003400961}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4rqfw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4rqfw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-4rqfw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0034009d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0034009f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:39:01 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 11 13:39:01.750: INFO: Pod "nginx-deployment-7b8c6f4498-xj86b" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-xj86b,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8669,SelfLink:/api/v1/namespaces/deployment-8669/pods/nginx-deployment-7b8c6f4498-xj86b,UID:d19eb002-1ea7-4d93-a475-b361099d0d8c,ResourceVersion:549987,Generation:0,CreationTimestamp:2020-03-11 13:38:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 fbc71e34-fedf-473c-b001-ce726b03ae75 0xc003400a70 0xc003400a71}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4rqfw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4rqfw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-4rqfw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc003400ae0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc003400b00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:38:55 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:38:57 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:38:57 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:38:55 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.7,PodIP:10.244.2.11,StartTime:2020-03-11 13:38:55 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-11 13:38:57 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://5aaab898816dbc2887eaeb3b1379d608e9fe23d904fd6458330fc9e76d989a49}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:39:01.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8669" for this suite. Mar 11 13:39:09.932: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:39:09.994: INFO: namespace deployment-8669 deletion completed in 8.193910139s • [SLOW TEST:14.666 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:39:09.995: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
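The polling that follows checks that one daemon pod becomes available on every schedulable node; nodes carrying a taint the DaemonSet's pods do not tolerate (here iruya-control-plane, tainted node-role.kubernetes.io/master:NoSchedule) are excluded from the expected count, which is why the test converges at 2 running nodes. A minimal, self-contained sketch of that taint-aware node filter, written against the upstream k8s.io/api types rather than the e2e framework's own helper (whose name this log does not show), might look like:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// tolerates reports whether any toleration matches the taint. Simplified:
// it defers entirely to core/v1's matching and ignores tolerationSeconds.
func tolerates(tolerations []v1.Toleration, taint v1.Taint) bool {
	for _, t := range tolerations {
		if t.ToleratesTaint(&taint) {
			return true
		}
	}
	return false
}

// schedulableNodes returns the names of nodes a daemon pod could land on:
// every node except those with a NoSchedule/NoExecute taint the pod does
// not tolerate (the reason the log skips iruya-control-plane below).
func schedulableNodes(nodes []v1.Node, podTolerations []v1.Toleration) []string {
	var out []string
NodeLoop:
	for _, n := range nodes {
		for _, taint := range n.Spec.Taints {
			if taint.Effect == v1.TaintEffectNoSchedule || taint.Effect == v1.TaintEffectNoExecute {
				if !tolerates(podTolerations, taint) {
					continue NodeLoop // e.g. node-role.kubernetes.io/master:NoSchedule
				}
			}
		}
		out = append(out, n.Name)
	}
	return out
}

func main() {
	// Hypothetical reconstruction of this cluster's three nodes,
	// using only the names and the taint visible in the log.
	nodes := []v1.Node{
		{ObjectMeta: metav1.ObjectMeta{Name: "iruya-control-plane"},
			Spec: v1.NodeSpec{Taints: []v1.Taint{{
				Key:    "node-role.kubernetes.io/master",
				Effect: v1.TaintEffectNoSchedule,
			}}}},
		{ObjectMeta: metav1.ObjectMeta{Name: "iruya-worker"}},
		{ObjectMeta: metav1.ObjectMeta{Name: "iruya-worker2"}},
	}
	fmt.Println(schedulableNodes(nodes, nil)) // [iruya-worker iruya-worker2]
}

With only the two workers in scope, "Number of running nodes: 2, number of available pods: 2" is the success condition the loop waits for, both on the initial launch and again after one daemon pod is deleted and revived.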
Mar 11 13:39:10.101: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 13:39:10.118: INFO: Number of nodes with available pods: 0 Mar 11 13:39:10.118: INFO: Node iruya-worker is running more than one daemon pod Mar 11 13:39:11.123: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 13:39:11.127: INFO: Number of nodes with available pods: 0 Mar 11 13:39:11.127: INFO: Node iruya-worker is running more than one daemon pod Mar 11 13:39:12.122: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 13:39:12.126: INFO: Number of nodes with available pods: 2 Mar 11 13:39:12.126: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. Mar 11 13:39:12.168: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 13:39:12.171: INFO: Number of nodes with available pods: 1 Mar 11 13:39:12.171: INFO: Node iruya-worker is running more than one daemon pod Mar 11 13:39:13.176: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 13:39:13.179: INFO: Number of nodes with available pods: 1 Mar 11 13:39:13.179: INFO: Node iruya-worker is running more than one daemon pod Mar 11 13:39:14.176: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 13:39:14.179: INFO: Number of nodes with available pods: 1 Mar 11 13:39:14.179: INFO: Node iruya-worker is running more than one daemon pod Mar 11 13:39:15.177: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 13:39:15.180: INFO: Number of nodes with available pods: 1 Mar 11 13:39:15.180: INFO: Node iruya-worker is running more than one daemon pod Mar 11 13:39:16.176: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 13:39:16.178: INFO: Number of nodes with available pods: 1 Mar 11 13:39:16.178: INFO: Node iruya-worker is running more than one daemon pod Mar 11 13:39:17.175: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 13:39:17.178: INFO: Number of nodes with available pods: 1 Mar 11 13:39:17.178: INFO: Node iruya-worker is running more than one daemon pod Mar 11 13:39:18.176: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 13:39:18.178: INFO: Number of nodes with available pods: 1 Mar 11 13:39:18.179: INFO: Node iruya-worker is running more than one daemon pod Mar 11 13:39:19.174: INFO: DaemonSet pods can't tolerate node 
iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 13:39:19.177: INFO: Number of nodes with available pods: 1 Mar 11 13:39:19.177: INFO: Node iruya-worker is running more than one daemon pod Mar 11 13:39:20.177: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 13:39:20.180: INFO: Number of nodes with available pods: 1 Mar 11 13:39:20.180: INFO: Node iruya-worker is running more than one daemon pod Mar 11 13:39:21.176: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 13:39:21.180: INFO: Number of nodes with available pods: 1 Mar 11 13:39:21.180: INFO: Node iruya-worker is running more than one daemon pod Mar 11 13:39:22.176: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 13:39:22.179: INFO: Number of nodes with available pods: 1 Mar 11 13:39:22.179: INFO: Node iruya-worker is running more than one daemon pod Mar 11 13:39:23.176: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 13:39:23.179: INFO: Number of nodes with available pods: 1 Mar 11 13:39:23.179: INFO: Node iruya-worker is running more than one daemon pod Mar 11 13:39:24.175: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 13:39:24.177: INFO: Number of nodes with available pods: 1 Mar 11 13:39:24.177: INFO: Node iruya-worker is running more than one daemon pod Mar 11 13:39:25.180: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 13:39:25.182: INFO: Number of nodes with available pods: 1 Mar 11 13:39:25.182: INFO: Node iruya-worker is running more than one daemon pod Mar 11 13:39:26.177: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 13:39:26.180: INFO: Number of nodes with available pods: 2 Mar 11 13:39:26.180: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1025, will wait for the garbage collector to delete the pods Mar 11 13:39:26.241: INFO: Deleting DaemonSet.extensions daemon-set took: 5.346703ms Mar 11 13:39:26.542: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.2536ms Mar 11 13:39:34.544: INFO: Number of nodes with available pods: 0 Mar 11 13:39:34.544: INFO: Number of running nodes: 0, number of available pods: 0 Mar 11 13:39:34.545: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1025/daemonsets","resourceVersion":"550532"},"items":null} Mar 11 13:39:34.547: INFO: pods: 
{"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1025/pods","resourceVersion":"550532"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:39:34.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1025" for this suite. Mar 11 13:39:40.566: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:39:40.638: INFO: namespace daemonsets-1025 deletion completed in 6.085124443s • [SLOW TEST:30.644 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:39:40.639: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 11 13:39:42.817: INFO: Waiting up to 5m0s for pod "client-envvars-8ab6b3a2-624d-4802-bbad-367f17943381" in namespace "pods-3567" to be "success or failure" Mar 11 13:39:42.822: INFO: Pod "client-envvars-8ab6b3a2-624d-4802-bbad-367f17943381": Phase="Pending", Reason="", readiness=false. Elapsed: 5.515163ms Mar 11 13:39:44.825: INFO: Pod "client-envvars-8ab6b3a2-624d-4802-bbad-367f17943381": Phase="Running", Reason="", readiness=true. Elapsed: 2.008768456s Mar 11 13:39:46.830: INFO: Pod "client-envvars-8ab6b3a2-624d-4802-bbad-367f17943381": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012983445s STEP: Saw pod success Mar 11 13:39:46.830: INFO: Pod "client-envvars-8ab6b3a2-624d-4802-bbad-367f17943381" satisfied condition "success or failure" Mar 11 13:39:46.833: INFO: Trying to get logs from node iruya-worker pod client-envvars-8ab6b3a2-624d-4802-bbad-367f17943381 container env3cont: STEP: delete the pod Mar 11 13:39:46.853: INFO: Waiting for pod client-envvars-8ab6b3a2-624d-4802-bbad-367f17943381 to disappear Mar 11 13:39:46.863: INFO: Pod client-envvars-8ab6b3a2-624d-4802-bbad-367f17943381 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:39:46.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3567" for this suite. 
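What this test asserts is the kubelet's service-environment contract: every service that exists when a pod starts is reflected in the pod's environment as <NAME>_SERVICE_HOST and <NAME>_SERVICE_PORT (plus Docker-link-style variants). A manual spot check might look like this (service name and values are hypothetical; services created after the pod starts are not injected):

kubectl exec <pod> -- env | grep FOOSERVICE
# FOOSERVICE_SERVICE_HOST=10.0.0.11   # hypothetical ClusterIP
# FOOSERVICE_SERVICE_PORT=8765        # hypothetical port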
Mar 11 13:40:36.914: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:40:37.001: INFO: namespace pods-3567 deletion completed in 50.134529027s • [SLOW TEST:56.363 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:40:37.001: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:40:42.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-1867" for this suite. Mar 11 13:41:04.082: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:41:04.156: INFO: namespace replication-controller-1867 deletion completed in 22.087704523s • [SLOW TEST:27.155 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:41:04.157: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
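Looking back at the ReplicationController test just above: adoption requires nothing more than a bare pod whose labels match a controller's selector. A reproduction sketch (the pod name comes from the log; the image is an assumption):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption
  labels:
    name: pod-adoption
spec:
  containers:
  - name: app
    image: docker.io/library/nginx:1.14-alpine
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption
spec:
  replicas: 1
  selector:
    name: pod-adoption
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
EOF
# The controller adds itself as an ownerReference on the existing pod
# instead of creating a second replica.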
Mar 11 13:41:04.274: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 13:41:04.280: INFO: Number of nodes with available pods: 0 Mar 11 13:41:04.280: INFO: Node iruya-worker is running more than one daemon pod Mar 11 13:41:05.284: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 13:41:05.286: INFO: Number of nodes with available pods: 0 Mar 11 13:41:05.286: INFO: Node iruya-worker is running more than one daemon pod Mar 11 13:41:06.284: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 13:41:06.287: INFO: Number of nodes with available pods: 2 Mar 11 13:41:06.287: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Mar 11 13:41:06.300: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 13:41:06.305: INFO: Number of nodes with available pods: 2 Mar 11 13:41:06.305: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6778, will wait for the garbage collector to delete the pods Mar 11 13:41:07.399: INFO: Deleting DaemonSet.extensions daemon-set took: 4.551733ms Mar 11 13:41:07.499: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.212705ms Mar 11 13:41:24.302: INFO: Number of nodes with available pods: 0 Mar 11 13:41:24.302: INFO: Number of running nodes: 0, number of available pods: 0 Mar 11 13:41:24.304: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6778/daemonsets","resourceVersion":"550903"},"items":null} Mar 11 13:41:24.305: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6778/pods","resourceVersion":"550903"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:41:24.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6778" for this suite. 
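The "revive" step here works by setting a daemon pod's status.phase to Failed directly through the API, which plain kubectl does not expose. The closest CLI approximation of the same self-healing is deleting a daemon pod and watching the controller replace it (the label selector is an illustrative assumption):

kubectl delete pod -l app=daemon-set -n daemonsets-6778
kubectl get pods -l app=daemon-set -n daemonsets-6778 -w   # a replacement appears within seconds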
Mar 11 13:41:30.322: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:41:30.387: INFO: namespace daemonsets-6778 deletion completed in 6.0739646s • [SLOW TEST:26.230 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:41:30.387: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2956.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-2956.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2956.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-2956.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 11 13:41:34.522: INFO: DNS probes using dns-test-b4a8d429-d6a8-4207-bdad-09edecc64059 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2956.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-2956.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2956.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-2956.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 11 13:41:38.604: INFO: File wheezy_udp@dns-test-service-3.dns-2956.svc.cluster.local from pod dns-2956/dns-test-ceb95d11-1a53-4357-a46f-083377341d41 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 11 13:41:38.607: INFO: File jessie_udp@dns-test-service-3.dns-2956.svc.cluster.local from pod dns-2956/dns-test-ceb95d11-1a53-4357-a46f-083377341d41 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 11 13:41:38.607: INFO: Lookups using dns-2956/dns-test-ceb95d11-1a53-4357-a46f-083377341d41 failed for: [wheezy_udp@dns-test-service-3.dns-2956.svc.cluster.local jessie_udp@dns-test-service-3.dns-2956.svc.cluster.local] Mar 11 13:41:43.612: INFO: File wheezy_udp@dns-test-service-3.dns-2956.svc.cluster.local from pod dns-2956/dns-test-ceb95d11-1a53-4357-a46f-083377341d41 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Mar 11 13:41:43.615: INFO: File jessie_udp@dns-test-service-3.dns-2956.svc.cluster.local from pod dns-2956/dns-test-ceb95d11-1a53-4357-a46f-083377341d41 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 11 13:41:43.615: INFO: Lookups using dns-2956/dns-test-ceb95d11-1a53-4357-a46f-083377341d41 failed for: [wheezy_udp@dns-test-service-3.dns-2956.svc.cluster.local jessie_udp@dns-test-service-3.dns-2956.svc.cluster.local] Mar 11 13:41:48.611: INFO: File wheezy_udp@dns-test-service-3.dns-2956.svc.cluster.local from pod dns-2956/dns-test-ceb95d11-1a53-4357-a46f-083377341d41 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 11 13:41:48.612: INFO: File jessie_udp@dns-test-service-3.dns-2956.svc.cluster.local from pod dns-2956/dns-test-ceb95d11-1a53-4357-a46f-083377341d41 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 11 13:41:48.612: INFO: Lookups using dns-2956/dns-test-ceb95d11-1a53-4357-a46f-083377341d41 failed for: [wheezy_udp@dns-test-service-3.dns-2956.svc.cluster.local jessie_udp@dns-test-service-3.dns-2956.svc.cluster.local] Mar 11 13:41:53.612: INFO: File wheezy_udp@dns-test-service-3.dns-2956.svc.cluster.local from pod dns-2956/dns-test-ceb95d11-1a53-4357-a46f-083377341d41 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 11 13:41:53.617: INFO: File jessie_udp@dns-test-service-3.dns-2956.svc.cluster.local from pod dns-2956/dns-test-ceb95d11-1a53-4357-a46f-083377341d41 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 11 13:41:53.617: INFO: Lookups using dns-2956/dns-test-ceb95d11-1a53-4357-a46f-083377341d41 failed for: [wheezy_udp@dns-test-service-3.dns-2956.svc.cluster.local jessie_udp@dns-test-service-3.dns-2956.svc.cluster.local] Mar 11 13:41:58.612: INFO: File wheezy_udp@dns-test-service-3.dns-2956.svc.cluster.local from pod dns-2956/dns-test-ceb95d11-1a53-4357-a46f-083377341d41 contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 11 13:41:58.615: INFO: File jessie_udp@dns-test-service-3.dns-2956.svc.cluster.local from pod dns-2956/dns-test-ceb95d11-1a53-4357-a46f-083377341d41 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Mar 11 13:41:58.615: INFO: Lookups using dns-2956/dns-test-ceb95d11-1a53-4357-a46f-083377341d41 failed for: [wheezy_udp@dns-test-service-3.dns-2956.svc.cluster.local jessie_udp@dns-test-service-3.dns-2956.svc.cluster.local] Mar 11 13:42:03.614: INFO: DNS probes using dns-test-ceb95d11-1a53-4357-a46f-083377341d41 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2956.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-2956.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2956.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-2956.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 11 13:42:07.850: INFO: DNS probes using dns-test-d97ba48e-4248-4131-9c4e-504fef84d1dd succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:42:07.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2956" for this suite. Mar 11 13:42:13.923: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:42:14.004: INFO: namespace dns-2956 deletion completed in 6.093965177s • [SLOW TEST:43.617 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:42:14.004: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 11 13:42:14.069: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Mar 11 13:42:14.075: INFO: Number of nodes with available pods: 0 Mar 11 13:42:14.075: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
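For reference, the object under test in the ExternalName section above is a service with no selector at all; cluster DNS simply answers the service name with a CNAME to spec.externalName, which is why the probes keep returning foo.example.com for a while after the field is changed to bar.example.com. In sketch form:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-3
  namespace: dns-2956
spec:
  type: ExternalName
  externalName: foo.example.com   # the test later changes this to bar.example.com
EOF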
Mar 11 13:42:14.115: INFO: Number of nodes with available pods: 0 Mar 11 13:42:14.116: INFO: Node iruya-worker is running more than one daemon pod Mar 11 13:42:15.128: INFO: Number of nodes with available pods: 0 Mar 11 13:42:15.129: INFO: Node iruya-worker is running more than one daemon pod Mar 11 13:42:16.119: INFO: Number of nodes with available pods: 0 Mar 11 13:42:16.119: INFO: Node iruya-worker is running more than one daemon pod Mar 11 13:42:17.120: INFO: Number of nodes with available pods: 1 Mar 11 13:42:17.120: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Mar 11 13:42:17.158: INFO: Number of nodes with available pods: 1 Mar 11 13:42:17.158: INFO: Number of running nodes: 0, number of available pods: 1 Mar 11 13:42:18.170: INFO: Number of nodes with available pods: 0 Mar 11 13:42:18.170: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Mar 11 13:42:18.189: INFO: Number of nodes with available pods: 0 Mar 11 13:42:18.189: INFO: Node iruya-worker is running more than one daemon pod Mar 11 13:42:19.193: INFO: Number of nodes with available pods: 0 Mar 11 13:42:19.193: INFO: Node iruya-worker is running more than one daemon pod Mar 11 13:42:20.193: INFO: Number of nodes with available pods: 0 Mar 11 13:42:20.193: INFO: Node iruya-worker is running more than one daemon pod Mar 11 13:42:21.193: INFO: Number of nodes with available pods: 0 Mar 11 13:42:21.193: INFO: Node iruya-worker is running more than one daemon pod Mar 11 13:42:22.192: INFO: Number of nodes with available pods: 0 Mar 11 13:42:22.192: INFO: Node iruya-worker is running more than one daemon pod Mar 11 13:42:23.193: INFO: Number of nodes with available pods: 0 Mar 11 13:42:23.193: INFO: Node iruya-worker is running more than one daemon pod Mar 11 13:42:24.193: INFO: Number of nodes with available pods: 0 Mar 11 13:42:24.193: INFO: Node iruya-worker is running more than one daemon pod Mar 11 13:42:25.193: INFO: Number of nodes with available pods: 0 Mar 11 13:42:25.193: INFO: Node iruya-worker is running more than one daemon pod Mar 11 13:42:26.193: INFO: Number of nodes with available pods: 0 Mar 11 13:42:26.193: INFO: Node iruya-worker is running more than one daemon pod Mar 11 13:42:27.193: INFO: Number of nodes with available pods: 1 Mar 11 13:42:27.193: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5865, will wait for the garbage collector to delete the pods Mar 11 13:42:27.258: INFO: Deleting DaemonSet.extensions daemon-set took: 5.113075ms Mar 11 13:42:27.558: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.234137ms Mar 11 13:42:34.361: INFO: Number of nodes with available pods: 0 Mar 11 13:42:34.362: INFO: Number of running nodes: 0, number of available pods: 0 Mar 11 13:42:34.363: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5865/daemonsets","resourceVersion":"551234"},"items":null} Mar 11 13:42:34.366: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5865/pods","resourceVersion":"551234"},"items":null} 
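The pod movements above are driven entirely by labels: the DaemonSet carries a nodeSelector, so relabelling a node schedules or evicts its daemon pod. The CLI equivalent would be something like the following (the suite generates its own label key; "color" is an illustrative stand-in):

kubectl label node iruya-worker color=blue                # daemon pod is scheduled
kubectl label node iruya-worker color=green --overwrite   # pod is evicted once the
                                                          # selector no longer matches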
[AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:42:34.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5865" for this suite. Mar 11 13:42:40.619: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:42:40.689: INFO: namespace daemonsets-5865 deletion completed in 6.126299055s • [SLOW TEST:26.685 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:42:40.690: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 11 13:42:40.791: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b2785648-9d63-4f3f-9ecb-ee85ef0f866e" in namespace "projected-1179" to be "success or failure" Mar 11 13:42:40.801: INFO: Pod "downwardapi-volume-b2785648-9d63-4f3f-9ecb-ee85ef0f866e": Phase="Pending", Reason="", readiness=false. Elapsed: 9.762463ms Mar 11 13:42:42.803: INFO: Pod "downwardapi-volume-b2785648-9d63-4f3f-9ecb-ee85ef0f866e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012376541s STEP: Saw pod success Mar 11 13:42:42.803: INFO: Pod "downwardapi-volume-b2785648-9d63-4f3f-9ecb-ee85ef0f866e" satisfied condition "success or failure" Mar 11 13:42:42.806: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-b2785648-9d63-4f3f-9ecb-ee85ef0f866e container client-container: STEP: delete the pod Mar 11 13:42:42.828: INFO: Waiting for pod downwardapi-volume-b2785648-9d63-4f3f-9ecb-ee85ef0f866e to disappear Mar 11 13:42:42.830: INFO: Pod downwardapi-volume-b2785648-9d63-4f3f-9ecb-ee85ef0f866e no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:42:42.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1179" for this suite. 
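The shape being exercised here is a projected downward API volume with an explicit per-item mode; the suite's exact spec is not printed, so the field choice and mode below are illustrative:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-mode-demo      # hypothetical name
spec:
  containers:
  - name: client-container
    image: docker.io/library/nginx:1.14-alpine
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            mode: 0400             # the mode the test would then verify on the file
            fieldRef:
              fieldPath: metadata.name
EOF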
Mar 11 13:42:48.840: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:42:48.919: INFO: namespace projected-1179 deletion completed in 6.086072396s • [SLOW TEST:8.229 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:42:48.919: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Mar 11 13:42:48.984: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:42:53.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-6092" for this suite. 
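For context, a RestartAlways pod with init containers has the shape below; the kubelet runs the init containers serially to completion before starting the app container, which is the behaviour this test invokes (images and commands are assumptions; the suite builds its own spec):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-demo                  # hypothetical name
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: busybox:1.29
    command: ['sh', '-c', 'true']
  - name: init2
    image: busybox:1.29
    command: ['sh', '-c', 'true']
  containers:
  - name: run1
    image: docker.io/library/nginx:1.14-alpine
EOF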
Mar 11 13:43:15.303: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:43:15.351: INFO: namespace init-container-6092 deletion completed in 22.072681691s • [SLOW TEST:26.432 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:43:15.351: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Mar 11 13:43:17.929: INFO: Successfully updated pod "pod-update-04cc3ea6-1050-4587-bead-d3d0ecfd623f" STEP: verifying the updated pod is in kubernetes Mar 11 13:43:17.936: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:43:17.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5011" for this suite. 
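"Updating the pod" is necessarily a narrow operation: once created, most of a pod's spec is immutable, so in-place updates are limited to metadata such as labels and annotations, container images, and a few other fields. A CLI equivalent of this kind of update, using the pod name from the log (the label key and value are illustrative):

kubectl label pod pod-update-04cc3ea6-1050-4587-bead-d3d0ecfd623f -n pods-5011 time=morning --overwrite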
Mar 11 13:43:39.960: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:43:40.057: INFO: namespace pods-5011 deletion completed in 22.116908252s • [SLOW TEST:24.706 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:43:40.057: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-4270.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-4270.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4270.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-4270.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-4270.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4270.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 11 13:43:44.199: INFO: DNS probes using dns-4270/dns-test-a57956cc-06a5-4e1e-ace3-ad4262ec5b0e succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:43:44.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4270" for this suite. 
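The /etc/hosts file these probes read is managed by the kubelet, which writes the pod's own hostname entry (and, when hostname and subdomain are set, the service FQDN checked above). It can be inspected directly while a pod is running; the pod name below is a placeholder, since the test deletes its probe pod on exit:

kubectl exec <probe-pod> -n dns-4270 -- cat /etc/hosts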
Mar 11 13:43:50.285: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:43:50.358: INFO: namespace dns-4270 deletion completed in 6.126622259s • [SLOW TEST:10.301 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:43:50.359: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Mar 11 13:43:50.411: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-2219' Mar 11 13:43:50.494: INFO: stderr: "" Mar 11 13:43:50.494: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod was created [AfterEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690 Mar 11 13:43:50.525: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-2219' Mar 11 13:44:04.473: INFO: stderr: "" Mar 11 13:44:04.473: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:44:04.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2219" for this suite. 
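One note on the command above: --restart=Never makes kubectl run create a bare Pod rather than a Deployment, and --generator=run-pod/v1 was the v1.15-era way of spelling that out. Generators were removed from kubectl in later releases, where the equivalent is simply:

kubectl run e2e-test-nginx-pod --restart=Never --image=docker.io/library/nginx:1.14-alpine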
Mar 11 13:44:10.492: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:44:10.565: INFO: namespace kubectl-2219 deletion completed in 6.084144343s • [SLOW TEST:20.206 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:44:10.565: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating all guestbook components Mar 11 13:44:10.617: INFO:
apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend
Mar 11 13:44:10.617: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2955' Mar 11 13:44:10.922: INFO: stderr: "" Mar 11 13:44:10.922: INFO: stdout: "service/redis-slave created\n" Mar 11 13:44:10.922: INFO:
apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend
Mar 11 13:44:10.922: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2955' Mar 11 13:44:11.186: INFO: stderr: "" Mar 11 13:44:11.186: INFO: stdout: "service/redis-master created\n" Mar 11 13:44:11.187: INFO:
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
Mar 11 13:44:11.187: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2955' Mar 11 13:44:11.466: INFO: stderr: "" Mar 11 13:44:11.466: INFO: stdout: "service/frontend created\n" Mar 11 13:44:11.467: INFO:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80
Mar 11 13:44:11.467: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2955' Mar 11 13:44:11.698: INFO: stderr: "" Mar 11 13:44:11.698: INFO: stdout: "deployment.apps/frontend created\n" Mar 11 13:44:11.698: INFO:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Mar 11 13:44:11.698: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2955' Mar 11 13:44:11.963: INFO: stderr: "" Mar 11 13:44:11.963: INFO: stdout: "deployment.apps/redis-master created\n" Mar 11 13:44:11.963: INFO:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379
Mar 11 13:44:11.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2955' Mar 11 13:44:12.379: INFO: stderr: "" Mar 11 13:44:12.379: INFO: stdout: "deployment.apps/redis-slave created\n" STEP: validating guestbook app Mar 11 13:44:12.379: INFO: Waiting for all frontend pods to be Running. Mar 11 13:44:17.429: INFO: Waiting for frontend to serve content. Mar 11 13:44:17.447: INFO: Trying to add a new entry to the guestbook. Mar 11 13:44:17.463: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Mar 11 13:44:17.477: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2955' Mar 11 13:44:17.668: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated.
The resource may continue to run on the cluster indefinitely.\n" Mar 11 13:44:17.668: INFO: stdout: "service \"redis-slave\" force deleted\n" STEP: using delete to clean up resources Mar 11 13:44:17.668: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2955' Mar 11 13:44:17.815: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 11 13:44:17.815: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources Mar 11 13:44:17.815: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2955' Mar 11 13:44:17.949: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 11 13:44:17.949: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Mar 11 13:44:17.950: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2955' Mar 11 13:44:18.034: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 11 13:44:18.034: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Mar 11 13:44:18.034: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2955' Mar 11 13:44:18.105: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 11 13:44:18.105: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n" STEP: using delete to clean up resources Mar 11 13:44:18.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2955' Mar 11 13:44:18.177: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 11 13:44:18.177: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:44:18.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2955" for this suite. 
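Since the frontend Service in this test deliberately leaves type: LoadBalancer commented out, reaching the guestbook from outside the e2e harness would typically go through a port-forward while the namespace still exists (the local port is arbitrary):

kubectl port-forward svc/frontend 8080:80 -n kubectl-2955
# then browse http://localhost:8080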
Mar 11 13:44:58.197: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:44:58.266: INFO: namespace kubectl-2955 deletion completed in 40.086585873s • [SLOW TEST:47.701 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:44:58.267: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 11 13:44:58.330: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9078' Mar 11 13:45:00.014: INFO: stderr: "" Mar 11 13:45:00.014: INFO: stdout: "replicationcontroller/redis-master created\n" Mar 11 13:45:00.014: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9078' Mar 11 13:45:00.352: INFO: stderr: "" Mar 11 13:45:00.352: INFO: stdout: "service/redis-master created\n" STEP: Waiting for Redis master to start. Mar 11 13:45:01.356: INFO: Selector matched 1 pods for map[app:redis] Mar 11 13:45:01.356: INFO: Found 0 / 1 Mar 11 13:45:02.356: INFO: Selector matched 1 pods for map[app:redis] Mar 11 13:45:02.356: INFO: Found 1 / 1 Mar 11 13:45:02.356: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Mar 11 13:45:02.358: INFO: Selector matched 1 pods for map[app:redis] Mar 11 13:45:02.358: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Mar 11 13:45:02.358: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-m45td --namespace=kubectl-9078' Mar 11 13:45:02.441: INFO: stderr: "" Mar 11 13:45:02.441: INFO: stdout: "Name: redis-master-m45td\nNamespace: kubectl-9078\nPriority: 0\nNode: iruya-worker/172.17.0.6\nStart Time: Wed, 11 Mar 2020 13:45:00 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.150\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://30a502a056d303a93861282a96b062e782e4fabf6c6cefba5f59e6145604478a\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Wed, 11 Mar 2020 13:45:01 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-vmttr (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-vmttr:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-vmttr\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 2s default-scheduler Successfully assigned kubectl-9078/redis-master-m45td to iruya-worker\n Normal Pulled 2s kubelet, iruya-worker Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 1s kubelet, iruya-worker Created container redis-master\n Normal Started 1s kubelet, iruya-worker Started container redis-master\n" Mar 11 13:45:02.441: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-9078' Mar 11 13:45:02.540: INFO: stderr: "" Mar 11 13:45:02.540: INFO: stdout: "Name: redis-master\nNamespace: kubectl-9078\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 2s replication-controller Created pod: redis-master-m45td\n" Mar 11 13:45:02.540: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-9078' Mar 11 13:45:02.665: INFO: stderr: "" Mar 11 13:45:02.665: INFO: stdout: "Name: redis-master\nNamespace: kubectl-9078\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.100.24.65\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.244.1.150:6379\nSession Affinity: None\nEvents: \n" Mar 11 13:45:02.668: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-control-plane' Mar 11 13:45:02.761: INFO: stderr: "" Mar 11 13:45:02.762: INFO: stdout: "Name: iruya-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n 
kubernetes.io/hostname=iruya-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 08 Mar 2020 14:39:09 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Wed, 11 Mar 2020 13:44:07 +0000 Sun, 08 Mar 2020 14:39:09 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Wed, 11 Mar 2020 13:44:07 +0000 Sun, 08 Mar 2020 14:39:09 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Wed, 11 Mar 2020 13:44:07 +0000 Sun, 08 Mar 2020 14:39:09 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Wed, 11 Mar 2020 13:44:07 +0000 Sun, 08 Mar 2020 14:39:40 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.8\n Hostname: iruya-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131767112Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131767112Ki\n pods: 110\nSystem Info:\n Machine ID: 02c556471391403b9d1ff5a92e24de90\n System UUID: 23c4adc2-c7ef-4117-bc7b-74afff25f445\n Boot ID: 3de0b5b8-8b8f-48d3-9705-cabccc881bdb\n Kernel Version: 4.4.0-142-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.15.7\n Kube-Proxy Version: v1.15.7\nPodCIDR: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-5d4dd4b4db-f26vw 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 2d23h\n kube-system coredns-5d4dd4b4db-t49n4 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 2d23h\n kube-system etcd-iruya-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2d23h\n kube-system kindnet-bjxs9 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 2d23h\n kube-system kube-apiserver-iruya-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 2d23h\n kube-system kube-controller-manager-iruya-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 2d23h\n kube-system kube-proxy-hfxdn 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2d23h\n kube-system kube-scheduler-iruya-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 2d23h\n local-path-storage local-path-provisioner-d4947b89c-j6x79 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2d23h\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" Mar 11 13:45:02.762: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-9078' Mar 11 13:45:02.839: INFO: stderr: "" Mar 11 13:45:02.839: INFO: stdout: "Name: kubectl-9078\nLabels: e2e-framework=kubectl\n e2e-run=6869a2d0-89c9-4ec5-9fe0-59252ed61a50\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:45:02.839: INFO: Waiting up 
to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9078" for this suite. Mar 11 13:45:24.913: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:45:24.972: INFO: namespace kubectl-9078 deletion completed in 22.131177252s • [SLOW TEST:26.706 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:45:24.973: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:45:25.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1309" for this suite. 
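
For reference, the describe coverage in the Kubectl test above reduces to a handful of plain kubectl invocations. A minimal sketch, not the suite's own code: it assumes a reachable cluster, kubectl on PATH, and an illustrative namespace "demo" that already contains the redis-master rc and service (this run used the generated namespace kubectl-9078; the node name is taken from the log).

# Hedged reproduction of the describe calls logged above; "demo" is illustrative.
kubectl --namespace=demo describe pod -l app=redis      # node, IP, container state, mounts, events
kubectl --namespace=demo describe rc redis-master       # selector, replica status, SuccessfulCreate events
kubectl --namespace=demo describe service redis-master  # ClusterIP, port, endpoints
kubectl describe node iruya-control-plane               # conditions, capacity, allocatable, non-terminated pods
kubectl describe namespace demo                         # labels, resource quota, limits
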
Mar 11 13:45:31.040: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:45:31.113: INFO: namespace services-1309 deletion completed in 6.080785569s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:6.140 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:45:31.113: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2952.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2952.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 11 13:45:35.190: INFO: DNS probes using dns-2952/dns-test-5015f8f2-0fb6-474d-811c-292dd2b81693 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:45:35.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2952" for this suite. 
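
The wheezy/jessie probe loops above reduce to one UDP and one TCP dig query per name. A minimal sketch of the core check, runnable from any in-cluster pod whose image ships dig; the $$ in the logged commands escapes a literal $ at the container-spec level, so an interactive shell uses a single $. The cluster domain cluster.local is the common default and may differ per cluster.

# UDP lookup of the kubernetes.default service, then the same lookup over TCP.
check="$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" \
  && test -n "$check" && echo "OK udp"
check="$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" \
  && test -n "$check" && echo "OK tcp"
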
Mar 11 13:45:41.243: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:45:41.323: INFO: namespace dns-2952 deletion completed in 6.100739005s • [SLOW TEST:10.210 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:45:41.323: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 11 13:45:41.371: INFO: Waiting up to 5m0s for pod "downwardapi-volume-31015a31-daeb-4647-8608-cb724c729338" in namespace "downward-api-7172" to be "success or failure" Mar 11 13:45:41.375: INFO: Pod "downwardapi-volume-31015a31-daeb-4647-8608-cb724c729338": Phase="Pending", Reason="", readiness=false. Elapsed: 3.510234ms Mar 11 13:45:43.380: INFO: Pod "downwardapi-volume-31015a31-daeb-4647-8608-cb724c729338": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008065348s Mar 11 13:45:45.387: INFO: Pod "downwardapi-volume-31015a31-daeb-4647-8608-cb724c729338": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015613249s STEP: Saw pod success Mar 11 13:45:45.387: INFO: Pod "downwardapi-volume-31015a31-daeb-4647-8608-cb724c729338" satisfied condition "success or failure" Mar 11 13:45:45.389: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-31015a31-daeb-4647-8608-cb724c729338 container client-container: STEP: delete the pod Mar 11 13:45:45.450: INFO: Waiting for pod downwardapi-volume-31015a31-daeb-4647-8608-cb724c729338 to disappear Mar 11 13:45:45.453: INFO: Pod downwardapi-volume-31015a31-daeb-4647-8608-cb724c729338 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:45:45.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7172" for this suite. 
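
The downward API pattern exercised above can be reproduced with a short manifest. A hedged sketch with illustrative names and values (the test generates its own): the container's CPU request is written into a file through a downwardAPI volume, and with divisor 1m a 250m request is rendered as 250.

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downward-cpu-demo    # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.36
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
          divisor: 1m
EOF
kubectl logs downward-cpu-demo   # prints 250 once the pod has completed
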
Mar 11 13:45:51.469: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:45:51.553: INFO: namespace downward-api-7172 deletion completed in 6.097361306s • [SLOW TEST:10.230 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:45:51.554: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 11 13:45:51.637: INFO: Create a RollingUpdate DaemonSet Mar 11 13:45:51.641: INFO: Check that daemon pods launch on every node of the cluster Mar 11 13:45:51.646: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 13:45:51.661: INFO: Number of nodes with available pods: 0 Mar 11 13:45:51.661: INFO: Node iruya-worker is running more than one daemon pod Mar 11 13:45:52.691: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 13:45:52.695: INFO: Number of nodes with available pods: 0 Mar 11 13:45:52.695: INFO: Node iruya-worker is running more than one daemon pod Mar 11 13:45:53.667: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 13:45:53.670: INFO: Number of nodes with available pods: 2 Mar 11 13:45:53.670: INFO: Number of running nodes: 2, number of available pods: 2 Mar 11 13:45:53.670: INFO: Update the DaemonSet to trigger a rollout Mar 11 13:45:53.677: INFO: Updating DaemonSet daemon-set Mar 11 13:46:04.703: INFO: Roll back the DaemonSet before rollout is complete Mar 11 13:46:04.710: INFO: Updating DaemonSet daemon-set Mar 11 13:46:04.710: INFO: Make sure DaemonSet rollback is complete Mar 11 13:46:04.714: INFO: Wrong image for pod: daemon-set-ghqtr. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Mar 11 13:46:04.714: INFO: Pod daemon-set-ghqtr is not available Mar 11 13:46:04.737: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 13:46:05.742: INFO: Wrong image for pod: daemon-set-ghqtr. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. 
Mar 11 13:46:05.742: INFO: Pod daemon-set-ghqtr is not available Mar 11 13:46:05.746: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 11 13:46:06.741: INFO: Pod daemon-set-4xwwj is not available Mar 11 13:46:06.745: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5938, will wait for the garbage collector to delete the pods Mar 11 13:46:06.829: INFO: Deleting DaemonSet.extensions daemon-set took: 23.01423ms Mar 11 13:46:07.129: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.233789ms Mar 11 13:46:08.632: INFO: Number of nodes with available pods: 0 Mar 11 13:46:08.632: INFO: Number of running nodes: 0, number of available pods: 0 Mar 11 13:46:08.634: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5938/daemonsets","resourceVersion":"552192"},"items":null} Mar 11 13:46:08.637: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5938/pods","resourceVersion":"552192"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:46:08.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5938" for this suite. 
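
The rollback flow above (push an unresolvable image, then undo before the rollout finishes) maps onto kubectl rollout. A minimal sketch: the DaemonSet name daemon-set and the image foo:non-existent come from the log, while the namespace "demo" and the container name "app" are assumptions.

kubectl -n demo set image daemonset/daemon-set app=foo:non-existent   # trigger a rollout that cannot complete
kubectl -n demo rollout status daemonset/daemon-set --timeout=10s || true
kubectl -n demo rollout undo daemonset/daemon-set                     # back to the previous revision
kubectl -n demo rollout status daemonset/daemon-set                   # converges; already-healthy pods are not restarted
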
Mar 11 13:46:14.655: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:46:14.733: INFO: namespace daemonsets-5938 deletion completed in 6.086992051s • [SLOW TEST:23.179 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:46:14.733: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-a2f188b3-6fcc-434f-a8c5-da3e0dd003a0 STEP: Creating a pod to test consume configMaps Mar 11 13:46:14.822: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f8612444-23b3-49f7-96e3-a3138cb3d953" in namespace "projected-9037" to be "success or failure" Mar 11 13:46:14.825: INFO: Pod "pod-projected-configmaps-f8612444-23b3-49f7-96e3-a3138cb3d953": Phase="Pending", Reason="", readiness=false. Elapsed: 3.2251ms Mar 11 13:46:16.828: INFO: Pod "pod-projected-configmaps-f8612444-23b3-49f7-96e3-a3138cb3d953": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006232348s STEP: Saw pod success Mar 11 13:46:16.828: INFO: Pod "pod-projected-configmaps-f8612444-23b3-49f7-96e3-a3138cb3d953" satisfied condition "success or failure" Mar 11 13:46:16.831: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-f8612444-23b3-49f7-96e3-a3138cb3d953 container projected-configmap-volume-test: STEP: delete the pod Mar 11 13:46:16.862: INFO: Waiting for pod pod-projected-configmaps-f8612444-23b3-49f7-96e3-a3138cb3d953 to disappear Mar 11 13:46:16.867: INFO: Pod pod-projected-configmaps-f8612444-23b3-49f7-96e3-a3138cb3d953 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:46:16.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9037" for this suite. 
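
The projected-ConfigMap-as-non-root case above corresponds to a pod that mounts a projected volume while running with a non-zero UID. A hedged sketch with illustrative names; ConfigMap files default to mode 0644, so a non-root reader can consume them.

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: cm-demo
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-nonroot
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000          # non-root, as in the test above
  containers:
  - name: reader
    image: busybox:1.36
    command: ["sh", "-c", "cat /etc/projected/data-1"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: cm-demo
EOF
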
Mar 11 13:46:22.901: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:46:23.001: INFO: namespace projected-9037 deletion completed in 6.130438213s • [SLOW TEST:8.267 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:46:23.001: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-2063 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet Mar 11 13:46:23.061: INFO: Found 0 stateful pods, waiting for 3 Mar 11 13:46:33.065: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 11 13:46:33.065: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 11 13:46:33.065: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Mar 11 13:46:33.089: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Mar 11 13:46:43.154: INFO: Updating stateful set ss2 Mar 11 13:46:43.183: INFO: Waiting for Pod statefulset-2063/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Restoring Pods to the correct revision when they are deleted Mar 11 13:46:53.307: INFO: Found 2 stateful pods, waiting for 3 Mar 11 13:47:03.311: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 11 13:47:03.311: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 11 13:47:03.311: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Mar 11 13:47:03.333: INFO: Updating stateful set ss2 Mar 11 13:47:03.356: INFO: Waiting for Pod statefulset-2063/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Mar 11 13:47:13.363: INFO: Waiting for Pod statefulset-2063/ss2-1 to have revision 
ss2-6c5cd755cd update revision ss2-7c9b54fd4c Mar 11 13:47:23.379: INFO: Updating stateful set ss2 Mar 11 13:47:23.391: INFO: Waiting for StatefulSet statefulset-2063/ss2 to complete update Mar 11 13:47:23.391: INFO: Waiting for Pod statefulset-2063/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Mar 11 13:47:33.395: INFO: Waiting for StatefulSet statefulset-2063/ss2 to complete update Mar 11 13:47:33.395: INFO: Waiting for Pod statefulset-2063/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Mar 11 13:47:43.397: INFO: Deleting all statefulset in ns statefulset-2063 Mar 11 13:47:43.399: INFO: Scaling statefulset ss2 to 0 Mar 11 13:48:03.425: INFO: Waiting for statefulset status.replicas updated to 0 Mar 11 13:48:03.428: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:48:03.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2063" for this suite. Mar 11 13:48:09.469: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:48:09.564: INFO: namespace statefulset-2063 deletion completed in 6.118841108s • [SLOW TEST:106.563 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:48:09.565: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-jplj STEP: Creating a pod to test atomic-volume-subpath Mar 11 13:48:09.638: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-jplj" in namespace "subpath-451" to be "success or failure" Mar 11 13:48:09.642: INFO: Pod "pod-subpath-test-configmap-jplj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.372102ms Mar 11 13:48:11.661: INFO: Pod "pod-subpath-test-configmap-jplj": Phase="Running", Reason="", readiness=true. Elapsed: 2.022894479s Mar 11 13:48:13.666: INFO: Pod "pod-subpath-test-configmap-jplj": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.028091078s Mar 11 13:48:15.670: INFO: Pod "pod-subpath-test-configmap-jplj": Phase="Running", Reason="", readiness=true. Elapsed: 6.031927165s Mar 11 13:48:17.674: INFO: Pod "pod-subpath-test-configmap-jplj": Phase="Running", Reason="", readiness=true. Elapsed: 8.03573232s Mar 11 13:48:19.677: INFO: Pod "pod-subpath-test-configmap-jplj": Phase="Running", Reason="", readiness=true. Elapsed: 10.039507683s Mar 11 13:48:21.681: INFO: Pod "pod-subpath-test-configmap-jplj": Phase="Running", Reason="", readiness=true. Elapsed: 12.043664829s Mar 11 13:48:23.685: INFO: Pod "pod-subpath-test-configmap-jplj": Phase="Running", Reason="", readiness=true. Elapsed: 14.046684839s Mar 11 13:48:25.688: INFO: Pod "pod-subpath-test-configmap-jplj": Phase="Running", Reason="", readiness=true. Elapsed: 16.050565301s Mar 11 13:48:27.692: INFO: Pod "pod-subpath-test-configmap-jplj": Phase="Running", Reason="", readiness=true. Elapsed: 18.054065133s Mar 11 13:48:29.696: INFO: Pod "pod-subpath-test-configmap-jplj": Phase="Running", Reason="", readiness=true. Elapsed: 20.058292123s Mar 11 13:48:31.703: INFO: Pod "pod-subpath-test-configmap-jplj": Phase="Running", Reason="", readiness=true. Elapsed: 22.06493598s Mar 11 13:48:33.706: INFO: Pod "pod-subpath-test-configmap-jplj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.068109404s STEP: Saw pod success Mar 11 13:48:33.706: INFO: Pod "pod-subpath-test-configmap-jplj" satisfied condition "success or failure" Mar 11 13:48:33.708: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-configmap-jplj container test-container-subpath-configmap-jplj: STEP: delete the pod Mar 11 13:48:33.734: INFO: Waiting for pod pod-subpath-test-configmap-jplj to disappear Mar 11 13:48:33.739: INFO: Pod pod-subpath-test-configmap-jplj no longer exists STEP: Deleting pod pod-subpath-test-configmap-jplj Mar 11 13:48:33.739: INFO: Deleting pod "pod-subpath-test-configmap-jplj" in namespace "subpath-451" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:48:33.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-451" for this suite. 
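
The subpath case above mounts a single ConfigMap key at an exact file path instead of mounting the whole volume directory. A hedged sketch with illustrative names:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: subpath-demo-cm
data:
  greeting: hello from a subPath
---
apiVersion: v1
kind: Pod
metadata:
  name: subpath-demo
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: busybox:1.36
    command: ["sh", "-c", "cat /etc/demo/greeting.txt"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/demo/greeting.txt
      subPath: greeting      # mount one key as a file, not the whole volume
  volumes:
  - name: cfg
    configMap:
      name: subpath-demo-cm
EOF
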
Mar 11 13:48:39.754: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:48:39.879: INFO: namespace subpath-451 deletion completed in 6.135302526s • [SLOW TEST:30.314 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:48:39.880: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:48:46.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-3601" for this suite. Mar 11 13:48:52.084: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:48:52.156: INFO: namespace namespaces-3601 deletion completed in 6.08504557s STEP: Destroying namespace "nsdeletetest-7212" for this suite. Mar 11 13:48:52.158: INFO: Namespace nsdeletetest-7212 was already deleted STEP: Destroying namespace "nsdeletetest-1051" for this suite. 
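
The namespace-deletion behaviour verified above (services are removed along with their namespace) is easy to reproduce by hand. A minimal sketch with illustrative names, assuming a reachable cluster:

kubectl create namespace ns-demo
kubectl -n ns-demo create service clusterip svc-demo --tcp=80:80
kubectl delete namespace ns-demo --wait=true   # blocks until the namespace is gone
kubectl get services -n ns-demo                # no services remain once deletion completes
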
Mar 11 13:48:58.174: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:48:58.253: INFO: namespace nsdeletetest-1051 deletion completed in 6.094806085s • [SLOW TEST:18.374 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:48:58.254: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override arguments Mar 11 13:48:58.333: INFO: Waiting up to 5m0s for pod "client-containers-87bc713b-c335-4791-b89d-eeb188600638" in namespace "containers-6609" to be "success or failure" Mar 11 13:48:58.338: INFO: Pod "client-containers-87bc713b-c335-4791-b89d-eeb188600638": Phase="Pending", Reason="", readiness=false. Elapsed: 4.627006ms Mar 11 13:49:00.342: INFO: Pod "client-containers-87bc713b-c335-4791-b89d-eeb188600638": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009093035s STEP: Saw pod success Mar 11 13:49:00.342: INFO: Pod "client-containers-87bc713b-c335-4791-b89d-eeb188600638" satisfied condition "success or failure" Mar 11 13:49:00.345: INFO: Trying to get logs from node iruya-worker2 pod client-containers-87bc713b-c335-4791-b89d-eeb188600638 container test-container: STEP: delete the pod Mar 11 13:49:00.380: INFO: Waiting for pod client-containers-87bc713b-c335-4791-b89d-eeb188600638 to disappear Mar 11 13:49:00.386: INFO: Pod client-containers-87bc713b-c335-4791-b89d-eeb188600638 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:49:00.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-6609" for this suite. 
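
The docker-cmd override above relies on spec.containers[].args replacing the image's default CMD while leaving its ENTRYPOINT alone. A hedged sketch with illustrative names (busybox ships no ENTRYPOINT, so args becomes the full command line):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: args-override-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.36
    args: ["echo", "overridden arguments"]   # replaces the image CMD
EOF
kubectl logs args-override-demo   # prints: overridden arguments
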
Mar 11 13:49:06.400: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:49:06.516: INFO: namespace containers-6609 deletion completed in 6.126853822s • [SLOW TEST:8.262 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:49:06.516: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes Mar 11 13:49:08.633: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Mar 11 13:49:18.749: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:49:18.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2556" for this suite.
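
The graceful-delete sequence above (delete, then wait for the kubelet to observe the termination notice) can be driven directly from kubectl. A minimal sketch; the pod name is illustrative:

kubectl delete pod graceful-demo --grace-period=30   # SIGTERM now, SIGKILL only after 30s
kubectl get pod graceful-demo                        # shows Terminating until the kubelet confirms removal
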
Mar 11 13:49:24.767: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:49:24.847: INFO: namespace pods-2556 deletion completed in 6.09152985s • [SLOW TEST:18.331 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:49:24.847: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:49:24.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5392" for this suite. 
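
The QOS-class check above follows from how the class is derived: equal requests and limits on every container give Guaranteed, requests without matching limits give Burstable, and no requests or limits give BestEffort. A hedged sketch with illustrative names:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: qos-demo
spec:
  containers:
  - name: app
    image: busybox:1.36
    command: ["sleep", "3600"]
    resources:
      requests: {cpu: 100m, memory: 64Mi}
      limits: {cpu: 100m, memory: 64Mi}   # requests == limits => Guaranteed
EOF
kubectl get pod qos-demo -o jsonpath='{.status.qosClass}{"\n"}'   # prints Guaranteed
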
Mar 11 13:49:46.983: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:49:47.071: INFO: namespace pods-5392 deletion completed in 22.12023849s • [SLOW TEST:22.224 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:49:47.072: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Mar 11 13:49:47.154: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 11 13:49:47.169: INFO: Waiting for terminating namespaces to be deleted... Mar 11 13:49:47.171: INFO: Logging pods the kubelet thinks are on node iruya-worker before test Mar 11 13:49:47.174: INFO: kindnet-9jdkr from kube-system started at 2020-03-08 14:39:47 +0000 UTC (1 container status recorded) Mar 11 13:49:47.174: INFO: Container kindnet-cni ready: true, restart count 0 Mar 11 13:49:47.174: INFO: kube-proxy-nf96r from kube-system started at 2020-03-08 14:39:47 +0000 UTC (1 container status recorded) Mar 11 13:49:47.174: INFO: Container kube-proxy ready: true, restart count 0 Mar 11 13:49:47.174: INFO: Logging pods the kubelet thinks are on node iruya-worker2 before test Mar 11 13:49:47.178: INFO: kindnet-d7zdc from kube-system started at 2020-03-08 14:39:47 +0000 UTC (1 container status recorded) Mar 11 13:49:47.178: INFO: Container kindnet-cni ready: true, restart count 0 Mar 11 13:49:47.178: INFO: kube-proxy-clpmt from kube-system started at 2020-03-08 14:39:47 +0000 UTC (1 container status recorded) Mar 11 13:49:47.178: INFO: Container kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: verifying the node has the label node iruya-worker STEP: verifying the node has the label node iruya-worker2 Mar 11 13:49:47.254: INFO: Pod kindnet-9jdkr requesting resource cpu=100m on Node iruya-worker Mar 11 13:49:47.254: INFO: Pod kindnet-d7zdc requesting resource cpu=100m on Node iruya-worker2 Mar 11 13:49:47.254: INFO: Pod kube-proxy-clpmt requesting resource cpu=0m on Node iruya-worker2 Mar 11 13:49:47.254: INFO: Pod kube-proxy-nf96r requesting resource cpu=0m on Node iruya-worker STEP: Starting Pods to consume most of the cluster CPU. STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: Type = [Normal], Name = [filler-pod-47df7a1c-237c-4ba0-87f7-337615b98cca.15fb441d75fc1465], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6106/filler-pod-47df7a1c-237c-4ba0-87f7-337615b98cca to iruya-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-47df7a1c-237c-4ba0-87f7-337615b98cca.15fb441daa23f929], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-47df7a1c-237c-4ba0-87f7-337615b98cca.15fb441dbab8e407], Reason = [Created], Message = [Created container filler-pod-47df7a1c-237c-4ba0-87f7-337615b98cca] STEP: Considering event: Type = [Normal], Name = [filler-pod-47df7a1c-237c-4ba0-87f7-337615b98cca.15fb441dc50e56a1], Reason = [Started], Message = [Started container filler-pod-47df7a1c-237c-4ba0-87f7-337615b98cca] STEP: Considering event: Type = [Normal], Name = [filler-pod-f5e91750-1a71-47db-b35e-fa8d37a9915c.15fb441d76fba213], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6106/filler-pod-f5e91750-1a71-47db-b35e-fa8d37a9915c to iruya-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-f5e91750-1a71-47db-b35e-fa8d37a9915c.15fb441da9d9660c], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-f5e91750-1a71-47db-b35e-fa8d37a9915c.15fb441db891016c], Reason = [Created], Message = [Created container filler-pod-f5e91750-1a71-47db-b35e-fa8d37a9915c] STEP: Considering event: Type = [Normal], Name = [filler-pod-f5e91750-1a71-47db-b35e-fa8d37a9915c.15fb441dc50e5665], Reason = [Started], Message = [Started container filler-pod-f5e91750-1a71-47db-b35e-fa8d37a9915c] STEP: Considering event: Type = [Warning], Name = [additional-pod.15fb441def1e472c], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node iruya-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node iruya-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:49:50.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6106" for this suite. 
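
The predicate check above fills the nodes with filler pods and then submits one pod whose CPU request cannot fit anywhere, expecting a FailedScheduling event. A hedged sketch of that last step; the request value is illustrative, the pause image is the one in the log:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: additional-pod
spec:
  containers:
  - name: app
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: "100"   # deliberately larger than any node's allocatable CPU
EOF
kubectl get events --field-selector reason=FailedScheduling   # "0/3 nodes are available: ... Insufficient cpu."
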
Mar 11 13:49:56.428: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:49:56.501: INFO: namespace sched-pred-6106 deletion completed in 6.100098038s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:9.429 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:49:56.501: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 11 13:49:56.555: INFO: Creating deployment "test-recreate-deployment" Mar 11 13:49:56.558: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Mar 11 13:49:56.578: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Mar 11 13:49:58.586: INFO: Waiting deployment "test-recreate-deployment" to complete Mar 11 13:49:58.588: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Mar 11 13:49:58.595: INFO: Updating deployment test-recreate-deployment Mar 11 13:49:58.595: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Mar 11 13:49:58.764: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-411,SelfLink:/apis/apps/v1/namespaces/deployment-411/deployments/test-recreate-deployment,UID:d3c5daf1-741b-417c-85cf-787ecf71441e,ResourceVersion:553163,Generation:2,CreationTimestamp:2020-03-11 13:49:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-03-11 13:49:58 +0000 UTC 2020-03-11 13:49:58 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-03-11 13:49:58 +0000 UTC 2020-03-11 13:49:56 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} Mar 11 13:49:58.767: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-411,SelfLink:/apis/apps/v1/namespaces/deployment-411/replicasets/test-recreate-deployment-5c8c9cc69d,UID:e981457f-ffa7-495e-bd84-0a8b3da83282,ResourceVersion:553160,Generation:1,CreationTimestamp:2020-03-11 13:49:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment d3c5daf1-741b-417c-85cf-787ecf71441e 0xc001ae7657 0xc001ae7658}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 
5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 11 13:49:58.767: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Mar 11 13:49:58.767: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-411,SelfLink:/apis/apps/v1/namespaces/deployment-411/replicasets/test-recreate-deployment-6df85df6b9,UID:975e2f4c-a2c0-4ccf-8345-bc15ee8fae52,ResourceVersion:553152,Generation:2,CreationTimestamp:2020-03-11 13:49:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment d3c5daf1-741b-417c-85cf-787ecf71441e 0xc001ae7917 0xc001ae7918}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 11 13:49:58.770: INFO: Pod "test-recreate-deployment-5c8c9cc69d-plc8n" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-plc8n,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-411,SelfLink:/api/v1/namespaces/deployment-411/pods/test-recreate-deployment-5c8c9cc69d-plc8n,UID:50bc01af-f847-49bf-bb71-ad29eeae7ea3,ResourceVersion:553164,Generation:0,CreationTimestamp:2020-03-11 13:49:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d e981457f-ffa7-495e-bd84-0a8b3da83282 0xc001ea9887 0xc001ea9888}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5b4pb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5b4pb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5b4pb true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001ea9ad0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001ea9b90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:49:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:49:58 +0000 UTC ContainersNotReady containers with unready 
status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:49:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 13:49:58 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.7,PodIP:,StartTime:2020-03-11 13:49:58 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:49:58.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-411" for this suite. Mar 11 13:50:04.786: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:50:04.868: INFO: namespace deployment-411 deletion completed in 6.095500006s • [SLOW TEST:8.366 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:50:04.868: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-930e8ff8-8130-40fc-9665-d23dcf7019b8 STEP: Creating a pod to test consume secrets Mar 11 13:50:04.934: INFO: Waiting up to 5m0s for pod "pod-secrets-d101d681-6e27-4fb7-9c3e-a24eaf0ca4c3" in namespace "secrets-9576" to be "success or failure" Mar 11 13:50:04.951: INFO: Pod "pod-secrets-d101d681-6e27-4fb7-9c3e-a24eaf0ca4c3": Phase="Pending", Reason="", readiness=false. Elapsed: 17.0001ms Mar 11 13:50:06.954: INFO: Pod "pod-secrets-d101d681-6e27-4fb7-9c3e-a24eaf0ca4c3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.020524243s STEP: Saw pod success Mar 11 13:50:06.954: INFO: Pod "pod-secrets-d101d681-6e27-4fb7-9c3e-a24eaf0ca4c3" satisfied condition "success or failure" Mar 11 13:50:06.957: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-d101d681-6e27-4fb7-9c3e-a24eaf0ca4c3 container secret-volume-test: STEP: delete the pod Mar 11 13:50:06.977: INFO: Waiting for pod pod-secrets-d101d681-6e27-4fb7-9c3e-a24eaf0ca4c3 to disappear Mar 11 13:50:06.981: INFO: Pod pod-secrets-d101d681-6e27-4fb7-9c3e-a24eaf0ca4c3 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:50:06.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9576" for this suite. 
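The Secrets spec above creates a secret, mounts it into a pod, and passes once the pod exits cleanly after reading the value back. A minimal Go sketch of the kind of pod it drives, using the k8s.io/api types (this is not the framework's own code; the secret name, key, image, and mount path are illustrative placeholders, and the module is assumed to depend on k8s.io/api and k8s.io/apimachinery):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0644) // 0644 octal == 420 decimal, the DefaultMode:*420 seen in the object dumps above
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-example"}, // placeholder; the framework generates a UUID-suffixed name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName:  "secret-test-example", // placeholder for the generated secret name
						DefaultMode: &mode,
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox",
				Command: []string{"cat", "/etc/secret-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-volume",
					MountPath: "/etc/secret-volume",
					ReadOnly:  true,
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}

The "success or failure" condition in the log is simply the pod reaching phase Succeeded (the command exited 0) rather than Failed.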
Mar 11 13:50:12.996: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:50:13.093: INFO: namespace secrets-9576 deletion completed in 6.108598046s • [SLOW TEST:8.225 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:50:13.093: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 11 13:50:13.144: INFO: Waiting up to 5m0s for pod "downwardapi-volume-be7817bf-519a-4f5d-b458-51b191d890ce" in namespace "projected-1680" to be "success or failure" Mar 11 13:50:13.163: INFO: Pod "downwardapi-volume-be7817bf-519a-4f5d-b458-51b191d890ce": Phase="Pending", Reason="", readiness=false. Elapsed: 18.903372ms Mar 11 13:50:15.167: INFO: Pod "downwardapi-volume-be7817bf-519a-4f5d-b458-51b191d890ce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.023161614s STEP: Saw pod success Mar 11 13:50:15.167: INFO: Pod "downwardapi-volume-be7817bf-519a-4f5d-b458-51b191d890ce" satisfied condition "success or failure" Mar 11 13:50:15.169: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-be7817bf-519a-4f5d-b458-51b191d890ce container client-container: STEP: delete the pod Mar 11 13:50:15.217: INFO: Waiting for pod downwardapi-volume-be7817bf-519a-4f5d-b458-51b191d890ce to disappear Mar 11 13:50:15.226: INFO: Pod downwardapi-volume-be7817bf-519a-4f5d-b458-51b191d890ce no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:50:15.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1680" for this suite. 
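The projected downwardAPI spec above only needs to surface the pod's own name as a file. A sketch of the volume involved, under the same assumptions as the previous example; the volume name is made up, but "podname" and metadata.name match what the log shows being verified:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "podinfo", // hypothetical volume name
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "podname", // the file the test reads back
							FieldRef: &corev1.ObjectFieldSelector{
								APIVersion: "v1",
								FieldPath:  "metadata.name",
							},
						}},
					},
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}

Any container mounting this volume sees a podname file whose content is exactly the pod's metadata.name, which is what the client-container output gets checked against.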
Mar 11 13:50:21.266: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:50:21.342: INFO: namespace projected-1680 deletion completed in 6.111896808s • [SLOW TEST:8.249 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:50:21.342: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-tpz7j in namespace proxy-831 I0311 13:50:21.504878 6 runners.go:180] Created replication controller with name: proxy-service-tpz7j, namespace: proxy-831, replica count: 1 I0311 13:50:22.555357 6 runners.go:180] proxy-service-tpz7j Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0311 13:50:23.555549 6 runners.go:180] proxy-service-tpz7j Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0311 13:50:24.555816 6 runners.go:180] proxy-service-tpz7j Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0311 13:50:25.556059 6 runners.go:180] proxy-service-tpz7j Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0311 13:50:26.556265 6 runners.go:180] proxy-service-tpz7j Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 11 13:50:26.561: INFO: setup took 5.137272135s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Mar 11 13:50:26.574: INFO: (0) /api/v1/namespaces/proxy-831/pods/proxy-service-tpz7j-xkcbf:1080/proxy/: testtest (200; 13.282168ms) Mar 11 13:50:26.574: INFO: (0) /api/v1/namespaces/proxy-831/services/http:proxy-service-tpz7j:portname2/proxy/: bar (200; 13.41723ms) Mar 11 13:50:26.574: INFO: (0) /api/v1/namespaces/proxy-831/pods/proxy-service-tpz7j-xkcbf:162/proxy/: bar (200; 13.330237ms) Mar 11 13:50:26.574: INFO: (0) /api/v1/namespaces/proxy-831/pods/http:proxy-service-tpz7j-xkcbf:160/proxy/: foo (200; 13.570301ms) Mar 11 13:50:26.574: INFO: (0) /api/v1/namespaces/proxy-831/pods/http:proxy-service-tpz7j-xkcbf:162/proxy/: bar (200; 13.583299ms) Mar 11 13:50:26.575: INFO: (0) /api/v1/namespaces/proxy-831/pods/http:proxy-service-tpz7j-xkcbf:1080/proxy/: t... 
(200; 13.762132ms) Mar 11 13:50:26.575: INFO: (0) /api/v1/namespaces/proxy-831/pods/proxy-service-tpz7j-xkcbf:160/proxy/: foo (200; 13.917385ms) Mar 11 13:50:26.576: INFO: (0) /api/v1/namespaces/proxy-831/services/proxy-service-tpz7j:portname2/proxy/: bar (200; 14.998677ms) Mar 11 13:50:26.576: INFO: (0) /api/v1/namespaces/proxy-831/pods/https:proxy-service-tpz7j-xkcbf:443/proxy/: testt... (200; 11.39117ms) Mar 11 13:50:26.598: INFO: (1) /api/v1/namespaces/proxy-831/pods/http:proxy-service-tpz7j-xkcbf:160/proxy/: foo (200; 11.410653ms) Mar 11 13:50:26.598: INFO: (1) /api/v1/namespaces/proxy-831/pods/proxy-service-tpz7j-xkcbf:162/proxy/: bar (200; 11.600745ms) Mar 11 13:50:26.598: INFO: (1) /api/v1/namespaces/proxy-831/pods/https:proxy-service-tpz7j-xkcbf:443/proxy/: test (200; 12.068548ms) Mar 11 13:50:26.599: INFO: (1) /api/v1/namespaces/proxy-831/pods/http:proxy-service-tpz7j-xkcbf:162/proxy/: bar (200; 12.142553ms) Mar 11 13:50:26.600: INFO: (1) /api/v1/namespaces/proxy-831/services/http:proxy-service-tpz7j:portname2/proxy/: bar (200; 13.551739ms) Mar 11 13:50:26.600: INFO: (1) /api/v1/namespaces/proxy-831/services/proxy-service-tpz7j:portname1/proxy/: foo (200; 13.879472ms) Mar 11 13:50:26.600: INFO: (1) /api/v1/namespaces/proxy-831/services/http:proxy-service-tpz7j:portname1/proxy/: foo (200; 13.868734ms) Mar 11 13:50:26.600: INFO: (1) /api/v1/namespaces/proxy-831/services/proxy-service-tpz7j:portname2/proxy/: bar (200; 13.821654ms) Mar 11 13:50:26.601: INFO: (1) /api/v1/namespaces/proxy-831/services/https:proxy-service-tpz7j:tlsportname1/proxy/: tls baz (200; 14.649182ms) Mar 11 13:50:26.601: INFO: (1) /api/v1/namespaces/proxy-831/services/https:proxy-service-tpz7j:tlsportname2/proxy/: tls qux (200; 14.706501ms) Mar 11 13:50:26.605: INFO: (2) /api/v1/namespaces/proxy-831/pods/proxy-service-tpz7j-xkcbf/proxy/: test (200; 3.632762ms) Mar 11 13:50:26.605: INFO: (2) /api/v1/namespaces/proxy-831/pods/https:proxy-service-tpz7j-xkcbf:443/proxy/: t... (200; 5.196335ms) Mar 11 13:50:26.606: INFO: (2) /api/v1/namespaces/proxy-831/pods/proxy-service-tpz7j-xkcbf:1080/proxy/: testtest (200; 3.247632ms) Mar 11 13:50:26.612: INFO: (3) /api/v1/namespaces/proxy-831/pods/https:proxy-service-tpz7j-xkcbf:460/proxy/: tls baz (200; 3.273038ms) Mar 11 13:50:26.612: INFO: (3) /api/v1/namespaces/proxy-831/pods/http:proxy-service-tpz7j-xkcbf:162/proxy/: bar (200; 3.14428ms) Mar 11 13:50:26.612: INFO: (3) /api/v1/namespaces/proxy-831/pods/https:proxy-service-tpz7j-xkcbf:443/proxy/: testt... 
(200; 7.890663ms) Mar 11 13:50:26.617: INFO: (3) /api/v1/namespaces/proxy-831/services/https:proxy-service-tpz7j:tlsportname1/proxy/: tls baz (200; 7.904396ms) Mar 11 13:50:26.617: INFO: (3) /api/v1/namespaces/proxy-831/pods/https:proxy-service-tpz7j-xkcbf:462/proxy/: tls qux (200; 8.011052ms) Mar 11 13:50:26.617: INFO: (3) /api/v1/namespaces/proxy-831/pods/http:proxy-service-tpz7j-xkcbf:160/proxy/: foo (200; 7.920104ms) Mar 11 13:50:26.617: INFO: (3) /api/v1/namespaces/proxy-831/services/http:proxy-service-tpz7j:portname2/proxy/: bar (200; 7.999411ms) Mar 11 13:50:26.617: INFO: (3) /api/v1/namespaces/proxy-831/services/proxy-service-tpz7j:portname2/proxy/: bar (200; 7.955105ms) Mar 11 13:50:26.617: INFO: (3) /api/v1/namespaces/proxy-831/pods/proxy-service-tpz7j-xkcbf:160/proxy/: foo (200; 8.070331ms) Mar 11 13:50:26.617: INFO: (3) /api/v1/namespaces/proxy-831/services/https:proxy-service-tpz7j:tlsportname2/proxy/: tls qux (200; 8.090191ms) Mar 11 13:50:26.617: INFO: (3) /api/v1/namespaces/proxy-831/pods/proxy-service-tpz7j-xkcbf:162/proxy/: bar (200; 8.179037ms) Mar 11 13:50:26.617: INFO: (3) /api/v1/namespaces/proxy-831/services/http:proxy-service-tpz7j:portname1/proxy/: foo (200; 8.142436ms) Mar 11 13:50:26.617: INFO: (3) /api/v1/namespaces/proxy-831/services/proxy-service-tpz7j:portname1/proxy/: foo (200; 8.313177ms) Mar 11 13:50:26.622: INFO: (4) /api/v1/namespaces/proxy-831/pods/http:proxy-service-tpz7j-xkcbf:160/proxy/: foo (200; 4.482533ms) Mar 11 13:50:26.622: INFO: (4) /api/v1/namespaces/proxy-831/pods/https:proxy-service-tpz7j-xkcbf:462/proxy/: tls qux (200; 4.507151ms) Mar 11 13:50:26.622: INFO: (4) /api/v1/namespaces/proxy-831/pods/proxy-service-tpz7j-xkcbf:1080/proxy/: testt... (200; 5.019619ms) Mar 11 13:50:26.623: INFO: (4) /api/v1/namespaces/proxy-831/pods/proxy-service-tpz7j-xkcbf/proxy/: test (200; 5.057246ms) Mar 11 13:50:26.623: INFO: (4) /api/v1/namespaces/proxy-831/pods/http:proxy-service-tpz7j-xkcbf:162/proxy/: bar (200; 4.826409ms) Mar 11 13:50:26.623: INFO: (4) /api/v1/namespaces/proxy-831/pods/proxy-service-tpz7j-xkcbf:160/proxy/: foo (200; 5.136803ms) Mar 11 13:50:26.623: INFO: (4) /api/v1/namespaces/proxy-831/pods/proxy-service-tpz7j-xkcbf:162/proxy/: bar (200; 5.254547ms) Mar 11 13:50:26.625: INFO: (4) /api/v1/namespaces/proxy-831/services/proxy-service-tpz7j:portname2/proxy/: bar (200; 7.579261ms) Mar 11 13:50:26.625: INFO: (4) /api/v1/namespaces/proxy-831/services/https:proxy-service-tpz7j:tlsportname1/proxy/: tls baz (200; 7.573416ms) Mar 11 13:50:26.625: INFO: (4) /api/v1/namespaces/proxy-831/services/proxy-service-tpz7j:portname1/proxy/: foo (200; 7.527686ms) Mar 11 13:50:26.625: INFO: (4) /api/v1/namespaces/proxy-831/services/http:proxy-service-tpz7j:portname1/proxy/: foo (200; 7.561419ms) Mar 11 13:50:26.625: INFO: (4) /api/v1/namespaces/proxy-831/services/http:proxy-service-tpz7j:portname2/proxy/: bar (200; 7.603823ms) Mar 11 13:50:26.625: INFO: (4) /api/v1/namespaces/proxy-831/services/https:proxy-service-tpz7j:tlsportname2/proxy/: tls qux (200; 7.68313ms) Mar 11 13:50:26.631: INFO: (5) /api/v1/namespaces/proxy-831/pods/proxy-service-tpz7j-xkcbf:1080/proxy/: testt... 
(200; 5.749103ms) Mar 11 13:50:26.632: INFO: (5) /api/v1/namespaces/proxy-831/pods/https:proxy-service-tpz7j-xkcbf:462/proxy/: tls qux (200; 6.164854ms) Mar 11 13:50:26.632: INFO: (5) /api/v1/namespaces/proxy-831/services/https:proxy-service-tpz7j:tlsportname1/proxy/: tls baz (200; 6.324922ms) Mar 11 13:50:26.632: INFO: (5) /api/v1/namespaces/proxy-831/pods/http:proxy-service-tpz7j-xkcbf:160/proxy/: foo (200; 6.950426ms) Mar 11 13:50:26.633: INFO: (5) /api/v1/namespaces/proxy-831/services/http:proxy-service-tpz7j:portname1/proxy/: foo (200; 7.448528ms) Mar 11 13:50:26.633: INFO: (5) /api/v1/namespaces/proxy-831/services/http:proxy-service-tpz7j:portname2/proxy/: bar (200; 7.605957ms) Mar 11 13:50:26.633: INFO: (5) /api/v1/namespaces/proxy-831/pods/proxy-service-tpz7j-xkcbf/proxy/: test (200; 7.636971ms) Mar 11 13:50:26.633: INFO: (5) /api/v1/namespaces/proxy-831/pods/https:proxy-service-tpz7j-xkcbf:460/proxy/: tls baz (200; 7.596772ms) Mar 11 13:50:26.633: INFO: (5) /api/v1/namespaces/proxy-831/pods/http:proxy-service-tpz7j-xkcbf:162/proxy/: bar (200; 7.605575ms) Mar 11 13:50:26.633: INFO: (5) /api/v1/namespaces/proxy-831/pods/proxy-service-tpz7j-xkcbf:160/proxy/: foo (200; 7.674027ms) Mar 11 13:50:26.633: INFO: (5) /api/v1/namespaces/proxy-831/pods/https:proxy-service-tpz7j-xkcbf:443/proxy/: testt... (200; 3.51685ms) Mar 11 13:50:26.638: INFO: (6) /api/v1/namespaces/proxy-831/pods/proxy-service-tpz7j-xkcbf:162/proxy/: bar (200; 3.65114ms) Mar 11 13:50:26.639: INFO: (6) /api/v1/namespaces/proxy-831/pods/http:proxy-service-tpz7j-xkcbf:160/proxy/: foo (200; 4.112155ms) Mar 11 13:50:26.640: INFO: (6) /api/v1/namespaces/proxy-831/services/https:proxy-service-tpz7j:tlsportname1/proxy/: tls baz (200; 5.099945ms) Mar 11 13:50:26.641: INFO: (6) /api/v1/namespaces/proxy-831/services/proxy-service-tpz7j:portname1/proxy/: foo (200; 6.325957ms) Mar 11 13:50:26.641: INFO: (6) /api/v1/namespaces/proxy-831/pods/https:proxy-service-tpz7j-xkcbf:443/proxy/: test (200; 6.549262ms) Mar 11 13:50:26.641: INFO: (6) /api/v1/namespaces/proxy-831/services/http:proxy-service-tpz7j:portname2/proxy/: bar (200; 6.584854ms) Mar 11 13:50:26.641: INFO: (6) /api/v1/namespaces/proxy-831/services/proxy-service-tpz7j:portname2/proxy/: bar (200; 6.685783ms) Mar 11 13:50:26.641: INFO: (6) /api/v1/namespaces/proxy-831/pods/https:proxy-service-tpz7j-xkcbf:460/proxy/: tls baz (200; 6.660321ms) Mar 11 13:50:26.641: INFO: (6) /api/v1/namespaces/proxy-831/pods/https:proxy-service-tpz7j-xkcbf:462/proxy/: tls qux (200; 6.668555ms) Mar 11 13:50:26.641: INFO: (6) /api/v1/namespaces/proxy-831/services/http:proxy-service-tpz7j:portname1/proxy/: foo (200; 6.737763ms) Mar 11 13:50:26.642: INFO: (6) /api/v1/namespaces/proxy-831/services/https:proxy-service-tpz7j:tlsportname2/proxy/: tls qux (200; 7.395874ms) Mar 11 13:50:26.644: INFO: (7) /api/v1/namespaces/proxy-831/pods/proxy-service-tpz7j-xkcbf:162/proxy/: bar (200; 2.375078ms) Mar 11 13:50:26.645: INFO: (7) /api/v1/namespaces/proxy-831/pods/proxy-service-tpz7j-xkcbf:160/proxy/: foo (200; 2.536412ms) Mar 11 13:50:26.645: INFO: (7) /api/v1/namespaces/proxy-831/pods/proxy-service-tpz7j-xkcbf/proxy/: test (200; 2.694891ms) Mar 11 13:50:26.647: INFO: (7) /api/v1/namespaces/proxy-831/pods/http:proxy-service-tpz7j-xkcbf:162/proxy/: bar (200; 4.523938ms) Mar 11 13:50:26.647: INFO: (7) /api/v1/namespaces/proxy-831/pods/https:proxy-service-tpz7j-xkcbf:443/proxy/: t... 
(200; 4.481817ms) Mar 11 13:50:26.647: INFO: (7) /api/v1/namespaces/proxy-831/services/proxy-service-tpz7j:portname2/proxy/: bar (200; 4.639697ms) Mar 11 13:50:26.647: INFO: (7) /api/v1/namespaces/proxy-831/pods/http:proxy-service-tpz7j-xkcbf:160/proxy/: foo (200; 4.583044ms) Mar 11 13:50:26.647: INFO: (7) /api/v1/namespaces/proxy-831/pods/https:proxy-service-tpz7j-xkcbf:462/proxy/: tls qux (200; 4.603887ms) Mar 11 13:50:26.647: INFO: (7) /api/v1/namespaces/proxy-831/services/http:proxy-service-tpz7j:portname1/proxy/: foo (200; 4.687186ms) Mar 11 13:50:26.647: INFO: (7) /api/v1/namespaces/proxy-831/pods/proxy-service-tpz7j-xkcbf:1080/proxy/: testt... (200; 5.834598ms) Mar 11 13:50:26.654: INFO: (8) /api/v1/namespaces/proxy-831/pods/proxy-service-tpz7j-xkcbf:160/proxy/: foo (200; 5.775858ms) Mar 11 13:50:26.654: INFO: (8) /api/v1/namespaces/proxy-831/pods/proxy-service-tpz7j-xkcbf:1080/proxy/: testtest (200; 9.252106ms) Mar 11 13:50:26.658: INFO: (8) /api/v1/namespaces/proxy-831/services/http:proxy-service-tpz7j:portname1/proxy/: foo (200; 9.412259ms) Mar 11 13:50:26.658: INFO: (8) /api/v1/namespaces/proxy-831/services/proxy-service-tpz7j:portname1/proxy/: foo (200; 9.328774ms) Mar 11 13:50:26.658: INFO: (8) /api/v1/namespaces/proxy-831/pods/http:proxy-service-tpz7j-xkcbf:162/proxy/: bar (200; 9.366288ms) Mar 11 13:50:26.658: INFO: (8) /api/v1/namespaces/proxy-831/services/https:proxy-service-tpz7j:tlsportname2/proxy/: tls qux (200; 9.349677ms) Mar 11 13:50:26.658: INFO: (8) /api/v1/namespaces/proxy-831/services/https:proxy-service-tpz7j:tlsportname1/proxy/: tls baz (200; 9.884553ms) Mar 11 13:50:26.658: INFO: (8) /api/v1/namespaces/proxy-831/services/proxy-service-tpz7j:portname2/proxy/: bar (200; 10.015324ms) Mar 11 13:50:26.658: INFO: (8) /api/v1/namespaces/proxy-831/pods/http:proxy-service-tpz7j-xkcbf:160/proxy/: foo (200; 9.973014ms) Mar 11 13:50:26.660: INFO: (8) /api/v1/namespaces/proxy-831/pods/https:proxy-service-tpz7j-xkcbf:462/proxy/: tls qux (200; 11.526647ms) Mar 11 13:50:26.663: INFO: (9) /api/v1/namespaces/proxy-831/pods/http:proxy-service-tpz7j-xkcbf:160/proxy/: foo (200; 3.087988ms) Mar 11 13:50:26.664: INFO: (9) /api/v1/namespaces/proxy-831/pods/proxy-service-tpz7j-xkcbf/proxy/: test (200; 3.980281ms) Mar 11 13:50:26.664: INFO: (9) /api/v1/namespaces/proxy-831/pods/proxy-service-tpz7j-xkcbf:1080/proxy/: testt... (200; 4.770563ms) Mar 11 13:50:26.665: INFO: (9) /api/v1/namespaces/proxy-831/pods/https:proxy-service-tpz7j-xkcbf:462/proxy/: tls qux (200; 4.863104ms) Mar 11 13:50:26.665: INFO: (9) /api/v1/namespaces/proxy-831/pods/https:proxy-service-tpz7j-xkcbf:443/proxy/: testt... 
(200; 5.386658ms) Mar 11 13:50:26.674: INFO: (10) /api/v1/namespaces/proxy-831/pods/proxy-service-tpz7j-xkcbf:162/proxy/: bar (200; 5.855054ms) Mar 11 13:50:26.675: INFO: (10) /api/v1/namespaces/proxy-831/pods/proxy-service-tpz7j-xkcbf/proxy/: test (200; 6.255132ms) Mar 11 13:50:26.675: INFO: (10) /api/v1/namespaces/proxy-831/services/proxy-service-tpz7j:portname2/proxy/: bar (200; 6.728924ms) Mar 11 13:50:26.675: INFO: (10) /api/v1/namespaces/proxy-831/services/proxy-service-tpz7j:portname1/proxy/: foo (200; 6.94326ms) Mar 11 13:50:26.675: INFO: (10) /api/v1/namespaces/proxy-831/pods/http:proxy-service-tpz7j-xkcbf:160/proxy/: foo (200; 6.899668ms) Mar 11 13:50:26.675: INFO: (10) /api/v1/namespaces/proxy-831/pods/https:proxy-service-tpz7j-xkcbf:460/proxy/: tls baz (200; 7.081866ms) Mar 11 13:50:26.675: INFO: (10) /api/v1/namespaces/proxy-831/services/https:proxy-service-tpz7j:tlsportname1/proxy/: tls baz (200; 7.051718ms) Mar 11 13:50:26.676: INFO: (10) /api/v1/namespaces/proxy-831/pods/https:proxy-service-tpz7j-xkcbf:462/proxy/: tls qux (200; 7.197415ms) Mar 11 13:50:26.676: INFO: (10) /api/v1/namespaces/proxy-831/services/https:proxy-service-tpz7j:tlsportname2/proxy/: tls qux (200; 7.459981ms) Mar 11 13:50:26.676: INFO: (10) /api/v1/namespaces/proxy-831/services/http:proxy-service-tpz7j:portname2/proxy/: bar (200; 7.705407ms) Mar 11 13:50:26.676: INFO: (10) /api/v1/namespaces/proxy-831/services/http:proxy-service-tpz7j:portname1/proxy/: foo (200; 7.67698ms) Mar 11 13:50:26.677: INFO: (10) /api/v1/namespaces/proxy-831/pods/http:proxy-service-tpz7j-xkcbf:162/proxy/: bar (200; 8.192577ms) Mar 11 13:50:26.680: INFO: (11) /api/v1/namespaces/proxy-831/pods/https:proxy-service-tpz7j-xkcbf:443/proxy/: testt... (200; 5.202199ms) Mar 11 13:50:26.682: INFO: (11) /api/v1/namespaces/proxy-831/pods/proxy-service-tpz7j-xkcbf/proxy/: test (200; 5.445899ms) Mar 11 13:50:26.682: INFO: (11) /api/v1/namespaces/proxy-831/pods/http:proxy-service-tpz7j-xkcbf:162/proxy/: bar (200; 5.093598ms) Mar 11 13:50:26.682: INFO: (11) /api/v1/namespaces/proxy-831/pods/http:proxy-service-tpz7j-xkcbf:160/proxy/: foo (200; 4.992915ms) Mar 11 13:50:26.682: INFO: (11) /api/v1/namespaces/proxy-831/pods/https:proxy-service-tpz7j-xkcbf:460/proxy/: tls baz (200; 5.406255ms) Mar 11 13:50:26.683: INFO: (11) /api/v1/namespaces/proxy-831/pods/https:proxy-service-tpz7j-xkcbf:462/proxy/: tls qux (200; 5.66562ms) Mar 11 13:50:26.683: INFO: (11) /api/v1/namespaces/proxy-831/services/http:proxy-service-tpz7j:portname1/proxy/: foo (200; 6.208717ms) Mar 11 13:50:26.684: INFO: (11) /api/v1/namespaces/proxy-831/services/https:proxy-service-tpz7j:tlsportname2/proxy/: tls qux (200; 6.254825ms) Mar 11 13:50:26.684: INFO: (11) /api/v1/namespaces/proxy-831/services/https:proxy-service-tpz7j:tlsportname1/proxy/: tls baz (200; 6.520182ms) Mar 11 13:50:26.684: INFO: (11) /api/v1/namespaces/proxy-831/services/http:proxy-service-tpz7j:portname2/proxy/: bar (200; 6.836996ms) Mar 11 13:50:26.684: INFO: (11) /api/v1/namespaces/proxy-831/services/proxy-service-tpz7j:portname2/proxy/: bar (200; 6.344219ms) Mar 11 13:50:26.684: INFO: (11) /api/v1/namespaces/proxy-831/services/proxy-service-tpz7j:portname1/proxy/: foo (200; 6.698046ms) Mar 11 13:50:26.685: INFO: (11) /api/v1/namespaces/proxy-831/pods/proxy-service-tpz7j-xkcbf:160/proxy/: foo (200; 8.290883ms) Mar 11 13:50:26.690: INFO: (12) /api/v1/namespaces/proxy-831/pods/http:proxy-service-tpz7j-xkcbf:1080/proxy/: t... 
(200; 4.948181ms) Mar 11 13:50:26.691: INFO: (12) /api/v1/namespaces/proxy-831/pods/proxy-service-tpz7j-xkcbf/proxy/: test (200; 5.818511ms) Mar 11 13:50:26.691: INFO: (12) /api/v1/namespaces/proxy-831/pods/http:proxy-service-tpz7j-xkcbf:162/proxy/: bar (200; 5.818333ms) Mar 11 13:50:26.691: INFO: (12) /api/v1/namespaces/proxy-831/services/proxy-service-tpz7j:portname1/proxy/: foo (200; 5.884777ms) Mar 11 13:50:26.691: INFO: (12) /api/v1/namespaces/proxy-831/services/http:proxy-service-tpz7j:portname1/proxy/: foo (200; 5.965257ms) Mar 11 13:50:26.691: INFO: (12) /api/v1/namespaces/proxy-831/services/proxy-service-tpz7j:portname2/proxy/: bar (200; 5.936121ms) Mar 11 13:50:26.691: INFO: (12) /api/v1/namespaces/proxy-831/pods/https:proxy-service-tpz7j-xkcbf:462/proxy/: tls qux (200; 5.983463ms) Mar 11 13:50:26.691: INFO: (12) /api/v1/namespaces/proxy-831/pods/proxy-service-tpz7j-xkcbf:160/proxy/: foo (200; 5.954056ms) Mar 11 13:50:26.691: INFO: (12) /api/v1/namespaces/proxy-831/pods/https:proxy-service-tpz7j-xkcbf:460/proxy/: tls baz (200; 5.960837ms) Mar 11 13:50:26.691: INFO: (12) /api/v1/namespaces/proxy-831/pods/http:proxy-service-tpz7j-xkcbf:160/proxy/: foo (200; 5.957078ms) Mar 11 13:50:26.691: INFO: (12) /api/v1/namespaces/proxy-831/pods/proxy-service-tpz7j-xkcbf:1080/proxy/: testt... (200; 4.700872ms) Mar 11 13:50:26.698: INFO: (13) /api/v1/namespaces/proxy-831/pods/proxy-service-tpz7j-xkcbf:1080/proxy/: testtest (200; 4.688036ms) Mar 11 13:50:26.698: INFO: (13) /api/v1/namespaces/proxy-831/services/http:proxy-service-tpz7j:portname2/proxy/: bar (200; 4.807128ms) Mar 11 13:50:26.698: INFO: (13) /api/v1/namespaces/proxy-831/pods/https:proxy-service-tpz7j-xkcbf:443/proxy/: test (200; 1.897064ms) Mar 11 13:50:26.701: INFO: (14) /api/v1/namespaces/proxy-831/pods/https:proxy-service-tpz7j-xkcbf:460/proxy/: tls baz (200; 2.291214ms) Mar 11 13:50:26.701: INFO: (14) /api/v1/namespaces/proxy-831/pods/proxy-service-tpz7j-xkcbf:1080/proxy/: testt... 
(200; 3.875565ms) Mar 11 13:50:26.703: INFO: (14) /api/v1/namespaces/proxy-831/pods/http:proxy-service-tpz7j-xkcbf:162/proxy/: bar (200; 3.540876ms) Mar 11 13:50:26.703: INFO: (14) /api/v1/namespaces/proxy-831/pods/https:proxy-service-tpz7j-xkcbf:462/proxy/: tls qux (200; 3.841692ms) Mar 11 13:50:26.703: INFO: (14) /api/v1/namespaces/proxy-831/services/http:proxy-service-tpz7j:portname1/proxy/: foo (200; 4.66805ms) Mar 11 13:50:26.703: INFO: (14) /api/v1/namespaces/proxy-831/pods/http:proxy-service-tpz7j-xkcbf:160/proxy/: foo (200; 3.964765ms) Mar 11 13:50:26.703: INFO: (14) /api/v1/namespaces/proxy-831/pods/proxy-service-tpz7j-xkcbf:162/proxy/: bar (200; 4.174807ms) Mar 11 13:50:26.703: INFO: (14) /api/v1/namespaces/proxy-831/pods/proxy-service-tpz7j-xkcbf:160/proxy/: foo (200; 4.239656ms) Mar 11 13:50:26.703: INFO: (14) /api/v1/namespaces/proxy-831/services/proxy-service-tpz7j:portname1/proxy/: foo (200; 4.146245ms) Mar 11 13:50:26.703: INFO: (14) /api/v1/namespaces/proxy-831/services/proxy-service-tpz7j:portname2/proxy/: bar (200; 4.443318ms) Mar 11 13:50:26.704: INFO: (14) /api/v1/namespaces/proxy-831/services/https:proxy-service-tpz7j:tlsportname2/proxy/: tls qux (200; 4.805983ms) Mar 11 13:50:26.704: INFO: (14) /api/v1/namespaces/proxy-831/services/https:proxy-service-tpz7j:tlsportname1/proxy/: tls baz (200; 5.110094ms) Mar 11 13:50:26.704: INFO: (14) /api/v1/namespaces/proxy-831/services/http:proxy-service-tpz7j:portname2/proxy/: bar (200; 5.337652ms) Mar 11 13:50:26.706: INFO: (15) /api/v1/namespaces/proxy-831/pods/https:proxy-service-tpz7j-xkcbf:443/proxy/: testt... (200; 3.57815ms) Mar 11 13:50:26.708: INFO: (15) /api/v1/namespaces/proxy-831/pods/http:proxy-service-tpz7j-xkcbf:162/proxy/: bar (200; 4.160869ms) Mar 11 13:50:26.708: INFO: (15) /api/v1/namespaces/proxy-831/services/proxy-service-tpz7j:portname2/proxy/: bar (200; 4.228372ms) Mar 11 13:50:26.708: INFO: (15) /api/v1/namespaces/proxy-831/pods/http:proxy-service-tpz7j-xkcbf:160/proxy/: foo (200; 4.336915ms) Mar 11 13:50:26.708: INFO: (15) /api/v1/namespaces/proxy-831/services/proxy-service-tpz7j:portname1/proxy/: foo (200; 4.340691ms) Mar 11 13:50:26.708: INFO: (15) /api/v1/namespaces/proxy-831/pods/proxy-service-tpz7j-xkcbf/proxy/: test (200; 4.463633ms) Mar 11 13:50:26.709: INFO: (15) /api/v1/namespaces/proxy-831/pods/proxy-service-tpz7j-xkcbf:160/proxy/: foo (200; 4.538371ms) Mar 11 13:50:26.709: INFO: (15) /api/v1/namespaces/proxy-831/services/http:proxy-service-tpz7j:portname2/proxy/: bar (200; 4.786986ms) Mar 11 13:50:26.709: INFO: (15) /api/v1/namespaces/proxy-831/services/https:proxy-service-tpz7j:tlsportname1/proxy/: tls baz (200; 4.925278ms) Mar 11 13:50:26.709: INFO: (15) /api/v1/namespaces/proxy-831/services/https:proxy-service-tpz7j:tlsportname2/proxy/: tls qux (200; 4.837846ms) Mar 11 13:50:26.709: INFO: (15) /api/v1/namespaces/proxy-831/pods/https:proxy-service-tpz7j-xkcbf:460/proxy/: tls baz (200; 4.801539ms) Mar 11 13:50:26.709: INFO: (15) /api/v1/namespaces/proxy-831/pods/https:proxy-service-tpz7j-xkcbf:462/proxy/: tls qux (200; 4.899517ms) Mar 11 13:50:26.709: INFO: (15) /api/v1/namespaces/proxy-831/services/http:proxy-service-tpz7j:portname1/proxy/: foo (200; 4.833559ms) Mar 11 13:50:26.713: INFO: (16) /api/v1/namespaces/proxy-831/pods/proxy-service-tpz7j-xkcbf:162/proxy/: bar (200; 3.613291ms) Mar 11 13:50:26.713: INFO: (16) /api/v1/namespaces/proxy-831/pods/http:proxy-service-tpz7j-xkcbf:1080/proxy/: t... 
(200; 3.83556ms) Mar 11 13:50:26.713: INFO: (16) /api/v1/namespaces/proxy-831/pods/http:proxy-service-tpz7j-xkcbf:162/proxy/: bar (200; 3.794482ms) Mar 11 13:50:26.713: INFO: (16) /api/v1/namespaces/proxy-831/pods/https:proxy-service-tpz7j-xkcbf:443/proxy/: testtest (200; 4.433339ms) Mar 11 13:50:26.714: INFO: (16) /api/v1/namespaces/proxy-831/pods/proxy-service-tpz7j-xkcbf:160/proxy/: foo (200; 4.517106ms) Mar 11 13:50:26.714: INFO: (16) /api/v1/namespaces/proxy-831/services/http:proxy-service-tpz7j:portname2/proxy/: bar (200; 4.718653ms) Mar 11 13:50:26.714: INFO: (16) /api/v1/namespaces/proxy-831/pods/https:proxy-service-tpz7j-xkcbf:460/proxy/: tls baz (200; 4.884311ms) Mar 11 13:50:26.714: INFO: (16) /api/v1/namespaces/proxy-831/services/proxy-service-tpz7j:portname2/proxy/: bar (200; 4.798281ms) Mar 11 13:50:26.714: INFO: (16) /api/v1/namespaces/proxy-831/pods/https:proxy-service-tpz7j-xkcbf:462/proxy/: tls qux (200; 4.737273ms) Mar 11 13:50:26.714: INFO: (16) /api/v1/namespaces/proxy-831/pods/http:proxy-service-tpz7j-xkcbf:160/proxy/: foo (200; 4.817666ms) Mar 11 13:50:26.714: INFO: (16) /api/v1/namespaces/proxy-831/services/https:proxy-service-tpz7j:tlsportname1/proxy/: tls baz (200; 4.853805ms) Mar 11 13:50:26.714: INFO: (16) /api/v1/namespaces/proxy-831/services/proxy-service-tpz7j:portname1/proxy/: foo (200; 4.798698ms) Mar 11 13:50:26.715: INFO: (16) /api/v1/namespaces/proxy-831/services/http:proxy-service-tpz7j:portname1/proxy/: foo (200; 5.580944ms) Mar 11 13:50:26.717: INFO: (17) /api/v1/namespaces/proxy-831/pods/https:proxy-service-tpz7j-xkcbf:443/proxy/: t... (200; 3.571602ms) Mar 11 13:50:26.718: INFO: (17) /api/v1/namespaces/proxy-831/pods/proxy-service-tpz7j-xkcbf:1080/proxy/: testtest (200; 3.808529ms) Mar 11 13:50:26.719: INFO: (17) /api/v1/namespaces/proxy-831/pods/http:proxy-service-tpz7j-xkcbf:162/proxy/: bar (200; 3.746151ms) Mar 11 13:50:26.719: INFO: (17) /api/v1/namespaces/proxy-831/pods/http:proxy-service-tpz7j-xkcbf:160/proxy/: foo (200; 4.254933ms) Mar 11 13:50:26.719: INFO: (17) /api/v1/namespaces/proxy-831/services/proxy-service-tpz7j:portname2/proxy/: bar (200; 4.257964ms) Mar 11 13:50:26.719: INFO: (17) /api/v1/namespaces/proxy-831/services/https:proxy-service-tpz7j:tlsportname1/proxy/: tls baz (200; 4.425115ms) Mar 11 13:50:26.719: INFO: (17) /api/v1/namespaces/proxy-831/pods/proxy-service-tpz7j-xkcbf:160/proxy/: foo (200; 4.391154ms) Mar 11 13:50:26.719: INFO: (17) /api/v1/namespaces/proxy-831/services/http:proxy-service-tpz7j:portname2/proxy/: bar (200; 4.572423ms) Mar 11 13:50:26.719: INFO: (17) /api/v1/namespaces/proxy-831/services/http:proxy-service-tpz7j:portname1/proxy/: foo (200; 4.388584ms) Mar 11 13:50:26.719: INFO: (17) /api/v1/namespaces/proxy-831/services/proxy-service-tpz7j:portname1/proxy/: foo (200; 4.589543ms) Mar 11 13:50:26.720: INFO: (17) /api/v1/namespaces/proxy-831/services/https:proxy-service-tpz7j:tlsportname2/proxy/: tls qux (200; 4.703474ms) Mar 11 13:50:26.723: INFO: (18) /api/v1/namespaces/proxy-831/pods/http:proxy-service-tpz7j-xkcbf:1080/proxy/: t... 
(200; 3.220399ms) Mar 11 13:50:26.723: INFO: (18) /api/v1/namespaces/proxy-831/pods/proxy-service-tpz7j-xkcbf:162/proxy/: bar (200; 3.266837ms) Mar 11 13:50:26.723: INFO: (18) /api/v1/namespaces/proxy-831/pods/proxy-service-tpz7j-xkcbf/proxy/: test (200; 3.183475ms) Mar 11 13:50:26.723: INFO: (18) /api/v1/namespaces/proxy-831/pods/https:proxy-service-tpz7j-xkcbf:462/proxy/: tls qux (200; 3.666132ms) Mar 11 13:50:26.723: INFO: (18) /api/v1/namespaces/proxy-831/pods/http:proxy-service-tpz7j-xkcbf:162/proxy/: bar (200; 3.723088ms) Mar 11 13:50:26.723: INFO: (18) /api/v1/namespaces/proxy-831/pods/https:proxy-service-tpz7j-xkcbf:443/proxy/: testtest (200; 6.251572ms) Mar 11 13:50:26.731: INFO: (19) /api/v1/namespaces/proxy-831/pods/https:proxy-service-tpz7j-xkcbf:443/proxy/: t... (200; 6.644859ms) Mar 11 13:50:26.731: INFO: (19) /api/v1/namespaces/proxy-831/pods/http:proxy-service-tpz7j-xkcbf:162/proxy/: bar (200; 6.645646ms) Mar 11 13:50:26.731: INFO: (19) /api/v1/namespaces/proxy-831/services/proxy-service-tpz7j:portname2/proxy/: bar (200; 6.698591ms) Mar 11 13:50:26.731: INFO: (19) /api/v1/namespaces/proxy-831/pods/https:proxy-service-tpz7j-xkcbf:460/proxy/: tls baz (200; 6.763808ms) Mar 11 13:50:26.731: INFO: (19) /api/v1/namespaces/proxy-831/services/https:proxy-service-tpz7j:tlsportname1/proxy/: tls baz (200; 6.768727ms) Mar 11 13:50:26.732: INFO: (19) /api/v1/namespaces/proxy-831/pods/proxy-service-tpz7j-xkcbf:1080/proxy/: test>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-3389f538-283e-4fcc-bbd2-df14993cf45a STEP: Creating secret with name s-test-opt-upd-f8bcac55-2618-48ee-988b-f33c288bb7f6 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-3389f538-283e-4fcc-bbd2-df14993cf45a STEP: Updating secret s-test-opt-upd-f8bcac55-2618-48ee-988b-f33c288bb7f6 STEP: Creating secret with name s-test-opt-create-370c449c-7b41-402c-b7b1-6c8c9bfd78a8 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:51:45.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5315" for this suite. 
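The projected-secret spec just above deletes one referenced secret, updates a second, and creates a third while the pod is running, then waits for the volume to converge. Marking every source optional is what keeps the volume valid while a referenced secret is missing; a sketch of such a projection (secret names copied from the log, everything else assumed):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	optional := true
	secretSource := func(name string) corev1.VolumeProjection {
		return corev1.VolumeProjection{
			Secret: &corev1.SecretProjection{
				LocalObjectReference: corev1.LocalObjectReference{Name: name},
				Optional:             &optional, // a missing secret leaves the volume usable instead of failing the mount
			},
		}
	}
	vol := corev1.Volume{
		Name: "projected-secrets", // hypothetical volume name
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					secretSource("s-test-opt-del-3389f538-283e-4fcc-bbd2-df14993cf45a"),
					secretSource("s-test-opt-upd-f8bcac55-2618-48ee-988b-f33c288bb7f6"),
					secretSource("s-test-opt-create-370c449c-7b41-402c-b7b1-6c8c9bfd78a8"),
				},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}

The long runtime relative to the neighboring volume tests comes largely from waiting on the kubelet's periodic volume sync to propagate each change.

The 320 proxy attempts further up exercise the API server's proxy subresource against pods and services over http and https. Outside the framework, one such request is a single REST call; a hedged sketch using client-go v0.18+ signatures (the v1.15-era client in this run used a context-free DoRaw):

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// "http:<service>:<port-name>" selects scheme and named port, matching the
	// /services/http:proxy-service-tpz7j:portname1/proxy/ URLs in the log.
	body, err := cs.CoreV1().RESTClient().Get().
		Namespace("proxy-831").
		Resource("services").
		Name("http:proxy-service-tpz7j:portname1").
		SubResource("proxy").
		DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s\n", body) // the echo server behind portname1 answers "foo" in the run above
}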
Mar 11 13:52:07.519: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:52:07.648: INFO: namespace projected-5315 deletion completed in 22.143861293s • [SLOW TEST:92.553 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:52:07.649: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:52:07.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7727" for this suite. 
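The Kubelet spec above schedules a pod whose command always fails and then checks that it can still be deleted. A client-go sketch of that delete; the namespace and pod name are placeholders, and the call uses client-go v0.18+ signatures (the v1.15-era client took *metav1.DeleteOptions and no context):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Deletion does not depend on container health: the kubelet tears the
	// pod down even while its command is crash-looping.
	if err := cs.CoreV1().Pods("default").Delete(context.TODO(), "always-fails-pod", metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("pod deleted")
}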
Mar 11 13:52:13.801: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:52:13.882: INFO: namespace kubelet-test-7727 deletion completed in 6.107421885s • [SLOW TEST:6.233 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:52:13.882: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-projected-all-test-volume-1df5b6de-36bb-44eb-ae71-4110796294f9 STEP: Creating secret with name secret-projected-all-test-volume-b6f80d96-c13d-4621-afd8-6b2d4b91a723 STEP: Creating a pod to test Check all projections for projected volume plugin Mar 11 13:52:13.961: INFO: Waiting up to 5m0s for pod "projected-volume-a3d33965-1106-435e-957d-778d1f5e1885" in namespace "projected-3004" to be "success or failure" Mar 11 13:52:13.979: INFO: Pod "projected-volume-a3d33965-1106-435e-957d-778d1f5e1885": Phase="Pending", Reason="", readiness=false. Elapsed: 18.533257ms Mar 11 13:52:15.982: INFO: Pod "projected-volume-a3d33965-1106-435e-957d-778d1f5e1885": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.021144154s STEP: Saw pod success Mar 11 13:52:15.982: INFO: Pod "projected-volume-a3d33965-1106-435e-957d-778d1f5e1885" satisfied condition "success or failure" Mar 11 13:52:15.983: INFO: Trying to get logs from node iruya-worker2 pod projected-volume-a3d33965-1106-435e-957d-778d1f5e1885 container projected-all-volume-test: STEP: delete the pod Mar 11 13:52:16.007: INFO: Waiting for pod projected-volume-a3d33965-1106-435e-957d-778d1f5e1885 to disappear Mar 11 13:52:16.012: INFO: Pod projected-volume-a3d33965-1106-435e-957d-778d1f5e1885 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:52:16.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3004" for this suite. 
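The "all components" projection above mixes a configmap, a secret, and the downward API behind a single mount point, which is the point of projected volumes over three separate volumes. A sketch of that three-source projection (object names shortened from the generated ones in the log):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "projected-all", // hypothetical volume name
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-projected-all-test-volume"},
					}},
					{Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "secret-projected-all-test-volume"},
					}},
					{DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{APIVersion: "v1", FieldPath: "metadata.name"},
						}},
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}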
Mar 11 13:52:22.022: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:52:22.106: INFO: namespace projected-3004 deletion completed in 6.092040438s • [SLOW TEST:8.224 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:52:22.107: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-map-ec37b822-4128-498d-b4ec-75c0bec45d5d STEP: Creating a pod to test consume secrets Mar 11 13:52:22.201: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a8211650-e18a-47e9-a711-c49df9891d19" in namespace "projected-9929" to be "success or failure" Mar 11 13:52:22.205: INFO: Pod "pod-projected-secrets-a8211650-e18a-47e9-a711-c49df9891d19": Phase="Pending", Reason="", readiness=false. Elapsed: 4.367949ms Mar 11 13:52:24.208: INFO: Pod "pod-projected-secrets-a8211650-e18a-47e9-a711-c49df9891d19": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007472343s STEP: Saw pod success Mar 11 13:52:24.208: INFO: Pod "pod-projected-secrets-a8211650-e18a-47e9-a711-c49df9891d19" satisfied condition "success or failure" Mar 11 13:52:24.211: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-a8211650-e18a-47e9-a711-c49df9891d19 container projected-secret-volume-test: STEP: delete the pod Mar 11 13:52:24.257: INFO: Waiting for pod pod-projected-secrets-a8211650-e18a-47e9-a711-c49df9891d19 to disappear Mar 11 13:52:24.262: INFO: Pod pod-projected-secrets-a8211650-e18a-47e9-a711-c49df9891d19 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:52:24.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9929" for this suite. 
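The "with mappings" variant renames secret keys on their way into the volume via Items. A sketch; the key and path names mirror the e2e convention but are assumptions here:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	mode := int32(0400) // per-file mode override, also available per item
	vol := corev1.Volume{
		Name: "projected-secret-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test-map"}, // placeholder
						Items: []corev1.KeyToPath{{
							Key:  "data-1",          // key inside the secret
							Path: "new-path-data-1", // file name the container sees
							Mode: &mode,
						}},
					},
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}

Without Items, every key in the secret becomes a file named after itself; with Items, only the listed keys are projected, under the given paths.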
Mar 11 13:52:30.280: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:52:30.364: INFO: namespace projected-9929 deletion completed in 6.098393344s • [SLOW TEST:8.257 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:52:30.364: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:52:32.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1588" for this suite. 
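The read-only busybox spec above asserts that writes to the container's root filesystem fail. The knob is a single security-context field; a sketch (image and command are assumptions):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	readOnly := true
	c := corev1.Container{
		Name:    "busybox-readonly-fs",
		Image:   "busybox",
		Command: []string{"/bin/sh", "-c", "touch /should-fail; sleep 3600"},
		SecurityContext: &corev1.SecurityContext{
			// The runtime mounts the root filesystem read-only, so the
			// touch above fails with a read-only file system error.
			ReadOnlyRootFilesystem: &readOnly,
		},
	}
	out, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(out))
}

Volumes mounted into the pod remain writable unless their own mounts are marked read-only as well.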
Mar 11 13:53:16.550: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:53:16.603: INFO: namespace kubelet-test-1588 deletion completed in 44.092515526s • [SLOW TEST:46.239 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187 should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:53:16.604: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Mar 11 13:53:16.684: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1208,SelfLink:/api/v1/namespaces/watch-1208/configmaps/e2e-watch-test-label-changed,UID:57b8a587-13f6-41a0-be92-a85e035c5452,ResourceVersion:553807,Generation:0,CreationTimestamp:2020-03-11 13:53:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 11 13:53:16.684: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1208,SelfLink:/api/v1/namespaces/watch-1208/configmaps/e2e-watch-test-label-changed,UID:57b8a587-13f6-41a0-be92-a85e035c5452,ResourceVersion:553808,Generation:0,CreationTimestamp:2020-03-11 13:53:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Mar 11 13:53:16.684: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1208,SelfLink:/api/v1/namespaces/watch-1208/configmaps/e2e-watch-test-label-changed,UID:57b8a587-13f6-41a0-be92-a85e035c5452,ResourceVersion:553810,Generation:0,CreationTimestamp:2020-03-11 
13:53:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Mar 11 13:53:26.728: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1208,SelfLink:/api/v1/namespaces/watch-1208/configmaps/e2e-watch-test-label-changed,UID:57b8a587-13f6-41a0-be92-a85e035c5452,ResourceVersion:553830,Generation:0,CreationTimestamp:2020-03-11 13:53:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 11 13:53:26.728: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1208,SelfLink:/api/v1/namespaces/watch-1208/configmaps/e2e-watch-test-label-changed,UID:57b8a587-13f6-41a0-be92-a85e035c5452,ResourceVersion:553831,Generation:0,CreationTimestamp:2020-03-11 13:53:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Mar 11 13:53:26.728: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1208,SelfLink:/api/v1/namespaces/watch-1208/configmaps/e2e-watch-test-label-changed,UID:57b8a587-13f6-41a0-be92-a85e035c5452,ResourceVersion:553833,Generation:0,CreationTimestamp:2020-03-11 13:53:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:53:26.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-1208" for this suite. 
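The Watchers spec depends on the server translating label changes into watch events: when the configmap's label stops matching the selector the watcher sees DELETED, and when the label is restored it sees ADDED, exactly the Got : sequences above. A client-go sketch of the same watch (v0.18+ signatures; the selector value is taken from the log, the namespace is a placeholder):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	w, err := cs.CoreV1().ConfigMaps("default").Watch(context.TODO(), metav1.ListOptions{
		LabelSelector: "watch-this-configmap=label-changed-and-restored",
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()
	// Events arrive relative to the selector, not the object's lifetime:
	// un-labelling shows up as DELETED, re-labelling as ADDED.
	for ev := range w.ResultChan() {
		fmt.Printf("Got : %s %T\n", ev.Type, ev.Object)
	}
}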
Mar 11 13:53:32.752: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:53:32.856: INFO: namespace watch-1208 deletion completed in 6.122898472s • [SLOW TEST:16.252 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:53:32.856: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:53:55.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5429" for this suite. 
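The runtime blackbox spec asserts on RestartCount, Phase, the Ready condition, and State for containers that exit. Those are plain struct fields on a fetched Pod; a sketch of a summarizing helper, fed a synthetic pod literal purely for illustration (a real caller would pass the result of a Get):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// summarize prints the status fields the assertions above key on.
func summarize(pod *corev1.Pod) {
	fmt.Printf("phase: %s\n", pod.Status.Phase)
	for _, cs := range pod.Status.ContainerStatuses {
		fmt.Printf("container %s: ready=%t restarts=%d\n", cs.Name, cs.Ready, cs.RestartCount)
		if t := cs.State.Terminated; t != nil {
			fmt.Printf("  terminated: exit=%d reason=%s\n", t.ExitCode, t.Reason)
		}
	}
}

func main() {
	// Synthetic input for demonstration only.
	pod := &corev1.Pod{Status: corev1.PodStatus{
		Phase: corev1.PodSucceeded,
		ContainerStatuses: []corev1.ContainerStatus{{
			Name:         "terminate-cmd-rpn",
			Ready:        false,
			RestartCount: 0,
			State: corev1.ContainerState{Terminated: &corev1.ContainerStateTerminated{
				ExitCode: 0, Reason: "Completed",
			}},
		}},
	}}
	summarize(pod)
}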
Mar 11 13:54:01.461: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:54:01.534: INFO: namespace container-runtime-5429 deletion completed in 6.085625223s • [SLOW TEST:28.678 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:54:01.535: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-8291 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 11 13:54:01.583: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 11 13:54:15.739: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.54:8080/dial?request=hostName&protocol=http&host=10.244.1.168&port=8080&tries=1'] Namespace:pod-network-test-8291 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 11 13:54:15.739: INFO: >>> kubeConfig: /root/.kube/config I0311 13:54:15.775868 6 log.go:172] (0xc000c564d0) (0xc001d6be00) Create stream I0311 13:54:15.775897 6 log.go:172] (0xc000c564d0) (0xc001d6be00) Stream added, broadcasting: 1 I0311 13:54:15.778554 6 log.go:172] (0xc000c564d0) Reply frame received for 1 I0311 13:54:15.778601 6 log.go:172] (0xc000c564d0) (0xc0031a30e0) Create stream I0311 13:54:15.778616 6 log.go:172] (0xc000c564d0) (0xc0031a30e0) Stream added, broadcasting: 3 I0311 13:54:15.779587 6 log.go:172] (0xc000c564d0) Reply frame received for 3 I0311 13:54:15.779632 6 log.go:172] (0xc000c564d0) (0xc001d6bf40) Create stream I0311 13:54:15.779645 6 log.go:172] (0xc000c564d0) (0xc001d6bf40) Stream added, broadcasting: 5 I0311 13:54:15.780921 6 log.go:172] (0xc000c564d0) Reply frame received for 5 I0311 13:54:15.843757 6 log.go:172] (0xc000c564d0) Data frame received for 3 I0311 13:54:15.843785 6 log.go:172] (0xc0031a30e0) (3) Data frame handling I0311 13:54:15.843798 6 log.go:172] (0xc0031a30e0) (3) Data frame sent I0311 13:54:15.844319 6 log.go:172] (0xc000c564d0) Data frame received for 5 I0311 13:54:15.844354 6 log.go:172] (0xc001d6bf40) (5) Data frame handling I0311 13:54:15.844378 6 log.go:172] (0xc000c564d0) Data frame received for 3 I0311 
13:54:15.844395 6 log.go:172] (0xc0031a30e0) (3) Data frame handling I0311 13:54:15.845927 6 log.go:172] (0xc000c564d0) Data frame received for 1 I0311 13:54:15.845946 6 log.go:172] (0xc001d6be00) (1) Data frame handling I0311 13:54:15.845963 6 log.go:172] (0xc001d6be00) (1) Data frame sent I0311 13:54:15.845976 6 log.go:172] (0xc000c564d0) (0xc001d6be00) Stream removed, broadcasting: 1 I0311 13:54:15.845993 6 log.go:172] (0xc000c564d0) Go away received I0311 13:54:15.846108 6 log.go:172] (0xc000c564d0) (0xc001d6be00) Stream removed, broadcasting: 1 I0311 13:54:15.846168 6 log.go:172] (0xc000c564d0) (0xc0031a30e0) Stream removed, broadcasting: 3 I0311 13:54:15.846182 6 log.go:172] (0xc000c564d0) (0xc001d6bf40) Stream removed, broadcasting: 5 Mar 11 13:54:15.846: INFO: Waiting for endpoints: map[] Mar 11 13:54:15.849: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.54:8080/dial?request=hostName&protocol=http&host=10.244.2.53&port=8080&tries=1'] Namespace:pod-network-test-8291 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 11 13:54:15.849: INFO: >>> kubeConfig: /root/.kube/config I0311 13:54:15.876961 6 log.go:172] (0xc002f3e6e0) (0xc002883400) Create stream I0311 13:54:15.876992 6 log.go:172] (0xc002f3e6e0) (0xc002883400) Stream added, broadcasting: 1 I0311 13:54:15.879490 6 log.go:172] (0xc002f3e6e0) Reply frame received for 1 I0311 13:54:15.879530 6 log.go:172] (0xc002f3e6e0) (0xc0028834a0) Create stream I0311 13:54:15.879546 6 log.go:172] (0xc002f3e6e0) (0xc0028834a0) Stream added, broadcasting: 3 I0311 13:54:15.880406 6 log.go:172] (0xc002f3e6e0) Reply frame received for 3 I0311 13:54:15.880435 6 log.go:172] (0xc002f3e6e0) (0xc000686000) Create stream I0311 13:54:15.880445 6 log.go:172] (0xc002f3e6e0) (0xc000686000) Stream added, broadcasting: 5 I0311 13:54:15.881293 6 log.go:172] (0xc002f3e6e0) Reply frame received for 5 I0311 13:54:15.947296 6 log.go:172] (0xc002f3e6e0) Data frame received for 3 I0311 13:54:15.947319 6 log.go:172] (0xc0028834a0) (3) Data frame handling I0311 13:54:15.947331 6 log.go:172] (0xc0028834a0) (3) Data frame sent I0311 13:54:15.947727 6 log.go:172] (0xc002f3e6e0) Data frame received for 5 I0311 13:54:15.947748 6 log.go:172] (0xc000686000) (5) Data frame handling I0311 13:54:15.947811 6 log.go:172] (0xc002f3e6e0) Data frame received for 3 I0311 13:54:15.947827 6 log.go:172] (0xc0028834a0) (3) Data frame handling I0311 13:54:15.948963 6 log.go:172] (0xc002f3e6e0) Data frame received for 1 I0311 13:54:15.948986 6 log.go:172] (0xc002883400) (1) Data frame handling I0311 13:54:15.948993 6 log.go:172] (0xc002883400) (1) Data frame sent I0311 13:54:15.949004 6 log.go:172] (0xc002f3e6e0) (0xc002883400) Stream removed, broadcasting: 1 I0311 13:54:15.949016 6 log.go:172] (0xc002f3e6e0) Go away received I0311 13:54:15.949085 6 log.go:172] (0xc002f3e6e0) (0xc002883400) Stream removed, broadcasting: 1 I0311 13:54:15.949098 6 log.go:172] (0xc002f3e6e0) (0xc0028834a0) Stream removed, broadcasting: 3 I0311 13:54:15.949104 6 log.go:172] (0xc002f3e6e0) (0xc000686000) Stream removed, broadcasting: 5 Mar 11 13:54:15.949: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:54:15.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-8291" for this suite. 
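Each ExecWithOptions above runs curl inside a host test container because pod IPs are reachable only from the cluster network; the check itself is a plain HTTP GET against the test image's /dial endpoint, which asks one test pod to fetch the other pod's hostName. A sketch of the same probe in Go, assuming it runs somewhere with pod-network reachability (IPs copied from the log):

package main

import (
    "fmt"
    "io/ioutil"
    "net/http"
)

func main() {
    // Ask the "dial" helper on one test pod (10.244.2.54) to fetch the
    // hostName handler of the other test pod (10.244.1.168) over HTTP.
    url := "http://10.244.2.54:8080/dial?request=hostName&protocol=http&host=10.244.1.168&port=8080&tries=1"
    resp, err := http.Get(url)
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()
    body, err := ioutil.ReadAll(resp.Body)
    if err != nil {
        panic(err)
    }
    fmt.Println(string(body)) // the helper replies with a small JSON document listing the responses it collected
}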
Mar 11 13:54:33.965: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:54:34.031: INFO: namespace pod-network-test-8291 deletion completed in 18.079573935s • [SLOW TEST:32.496 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:54:34.032: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on node default medium Mar 11 13:54:34.083: INFO: Waiting up to 5m0s for pod "pod-48944eb2-e70e-49dc-8d2a-9fe65563a770" in namespace "emptydir-8112" to be "success or failure" Mar 11 13:54:34.098: INFO: Pod "pod-48944eb2-e70e-49dc-8d2a-9fe65563a770": Phase="Pending", Reason="", readiness=false. Elapsed: 15.753424ms Mar 11 13:54:36.102: INFO: Pod "pod-48944eb2-e70e-49dc-8d2a-9fe65563a770": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.019572243s STEP: Saw pod success Mar 11 13:54:36.102: INFO: Pod "pod-48944eb2-e70e-49dc-8d2a-9fe65563a770" satisfied condition "success or failure" Mar 11 13:54:36.105: INFO: Trying to get logs from node iruya-worker2 pod pod-48944eb2-e70e-49dc-8d2a-9fe65563a770 container test-container: STEP: delete the pod Mar 11 13:54:36.126: INFO: Waiting for pod pod-48944eb2-e70e-49dc-8d2a-9fe65563a770 to disappear Mar 11 13:54:36.129: INFO: Pod pod-48944eb2-e70e-49dc-8d2a-9fe65563a770 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:54:36.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8112" for this suite. 
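The emptyDir tests in this run all follow the pattern visible above: create a pod that mounts an emptyDir volume, have its container print the mount or file mode, then assert on the logs once the pod reports Succeeded. A minimal sketch of such a pod, with illustrative names, using the v1.15-era Go types:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "emptydir-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name: "test-volume",
                VolumeSource: corev1.VolumeSource{
                    // StorageMediumDefault ("") = the node's backing storage;
                    // StorageMediumMemory = tmpfs, as in the tmpfs variants of this test.
                    EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
                },
            }},
            Containers: []corev1.Container{{
                Name:         "test-container",
                Image:        "busybox",
                Command:      []string{"sh", "-c", "ls -ld /test-volume"}, // prints the mount's mode for the log assertion
                VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
            }},
        },
    }
    fmt.Printf("%+v\n", pod)
}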
Mar 11 13:54:42.162: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:54:42.229: INFO: namespace emptydir-8112 deletion completed in 6.096689378s • [SLOW TEST:8.198 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:54:42.230: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 11 13:54:42.263: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:54:44.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9785" for this suite. 
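This test drives the pods/exec subresource over a raw websocket; client-go's remotecommand package talks to the same subresource, though over SPDY rather than a websocket. A hedged sketch of an equivalent remote command (pod name and namespace are illustrative; the pre-context client-go signatures of this era are assumed):

package main

import (
    "os"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/kubernetes/scheme"
    "k8s.io/client-go/tools/clientcmd"
    "k8s.io/client-go/tools/remotecommand"
)

func main() {
    config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    clientset, err := kubernetes.NewForConfig(config)
    if err != nil {
        panic(err)
    }
    // Build the pods/exec subresource URL (pod name and namespace are illustrative).
    req := clientset.CoreV1().RESTClient().Post().
        Resource("pods").Namespace("default").Name("pod-exec-demo").
        SubResource("exec").
        VersionedParams(&corev1.PodExecOptions{
            Command: []string{"echo", "remote execution"},
            Stdout:  true,
            Stderr:  true,
        }, scheme.ParameterCodec)

    exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
    if err != nil {
        panic(err)
    }
    if err := exec.Stream(remotecommand.StreamOptions{Stdout: os.Stdout, Stderr: os.Stderr}); err != nil {
        panic(err)
    }
}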
Mar 11 13:55:26.432: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:55:26.523: INFO: namespace pods-9785 deletion completed in 42.104921535s • [SLOW TEST:44.293 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:55:26.523: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap configmap-2884/configmap-test-7071612c-315c-4e4f-a8b8-7d5a070327bc STEP: Creating a pod to test consume configMaps Mar 11 13:55:26.603: INFO: Waiting up to 5m0s for pod "pod-configmaps-5a2bd8df-850e-45b2-86a1-a44d2e37834d" in namespace "configmap-2884" to be "success or failure" Mar 11 13:55:26.623: INFO: Pod "pod-configmaps-5a2bd8df-850e-45b2-86a1-a44d2e37834d": Phase="Pending", Reason="", readiness=false. Elapsed: 19.459381ms Mar 11 13:55:28.626: INFO: Pod "pod-configmaps-5a2bd8df-850e-45b2-86a1-a44d2e37834d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.023146715s STEP: Saw pod success Mar 11 13:55:28.626: INFO: Pod "pod-configmaps-5a2bd8df-850e-45b2-86a1-a44d2e37834d" satisfied condition "success or failure" Mar 11 13:55:28.629: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-5a2bd8df-850e-45b2-86a1-a44d2e37834d container env-test: STEP: delete the pod Mar 11 13:55:28.647: INFO: Waiting for pod pod-configmaps-5a2bd8df-850e-45b2-86a1-a44d2e37834d to disappear Mar 11 13:55:28.678: INFO: Pod pod-configmaps-5a2bd8df-850e-45b2-86a1-a44d2e37834d no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:55:28.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2884" for this suite. 
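The ConfigMap-as-environment pattern above pairs a ConfigMap with a pod whose container pulls a single key into an env var via configMapKeyRef, then checks the container's env dump in its logs. A sketch with illustrative names, using the v1.15-era Go types:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    cm := &corev1.ConfigMap{
        ObjectMeta: metav1.ObjectMeta{Name: "configmap-test"},
        Data:       map[string]string{"data-1": "value-1"},
    }
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "env-test",
                Image:   "busybox",
                Command: []string{"sh", "-c", "env"}, // the assertion greps this env dump in the pod logs
                Env: []corev1.EnvVar{{
                    Name: "CONFIG_DATA_1",
                    ValueFrom: &corev1.EnvVarSource{
                        ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
                            LocalObjectReference: corev1.LocalObjectReference{Name: cm.Name},
                            Key:                  "data-1",
                        },
                    },
                }},
            }},
        },
    }
    fmt.Printf("%+v\n%+v\n", cm, pod)
}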
Mar 11 13:55:34.695: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:55:34.745: INFO: namespace configmap-2884 deletion completed in 6.06411541s • [SLOW TEST:8.222 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:55:34.745: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 11 13:55:34.802: INFO: Waiting up to 5m0s for pod "downwardapi-volume-267d0021-e162-45eb-ae20-4eedbc2ac65e" in namespace "projected-2012" to be "success or failure" Mar 11 13:55:34.806: INFO: Pod "downwardapi-volume-267d0021-e162-45eb-ae20-4eedbc2ac65e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012286ms Mar 11 13:55:36.810: INFO: Pod "downwardapi-volume-267d0021-e162-45eb-ae20-4eedbc2ac65e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007923254s STEP: Saw pod success Mar 11 13:55:36.810: INFO: Pod "downwardapi-volume-267d0021-e162-45eb-ae20-4eedbc2ac65e" satisfied condition "success or failure" Mar 11 13:55:36.813: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-267d0021-e162-45eb-ae20-4eedbc2ac65e container client-container: STEP: delete the pod Mar 11 13:55:36.866: INFO: Waiting for pod downwardapi-volume-267d0021-e162-45eb-ae20-4eedbc2ac65e to disappear Mar 11 13:55:36.873: INFO: Pod downwardapi-volume-267d0021-e162-45eb-ae20-4eedbc2ac65e no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:55:36.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2012" for this suite. 
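The projected downward API volume used here exposes a container's own CPU request as a file, via a resourceFieldRef inside a projected volume source. A sketch with illustrative names, using the v1.15-era Go types:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name: "podinfo",
                VolumeSource: corev1.VolumeSource{
                    Projected: &corev1.ProjectedVolumeSource{
                        Sources: []corev1.VolumeProjection{{
                            DownwardAPI: &corev1.DownwardAPIProjection{
                                Items: []corev1.DownwardAPIVolumeFile{{
                                    Path: "cpu_request",
                                    ResourceFieldRef: &corev1.ResourceFieldSelector{
                                        ContainerName: "client-container",
                                        Resource:      "requests.cpu",
                                    },
                                }},
                            },
                        }},
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:    "client-container",
                Image:   "busybox",
                Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_request"}, // prints the container's own CPU request
                Resources: corev1.ResourceRequirements{
                    Requests: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("250m")},
                },
                VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
            }},
        },
    }
    fmt.Printf("%+v\n", pod)
}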
Mar 11 13:55:42.887: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:55:42.942: INFO: namespace projected-2012 deletion completed in 6.066423099s • [SLOW TEST:8.197 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:55:42.943: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Mar 11 13:55:43.031: INFO: Waiting up to 5m0s for pod "downward-api-292fb4b0-0a5f-4a2b-8530-a7af1d9cb7d5" in namespace "downward-api-4240" to be "success or failure" Mar 11 13:55:43.036: INFO: Pod "downward-api-292fb4b0-0a5f-4a2b-8530-a7af1d9cb7d5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.766879ms Mar 11 13:55:45.040: INFO: Pod "downward-api-292fb4b0-0a5f-4a2b-8530-a7af1d9cb7d5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008973125s STEP: Saw pod success Mar 11 13:55:45.040: INFO: Pod "downward-api-292fb4b0-0a5f-4a2b-8530-a7af1d9cb7d5" satisfied condition "success or failure" Mar 11 13:55:45.044: INFO: Trying to get logs from node iruya-worker2 pod downward-api-292fb4b0-0a5f-4a2b-8530-a7af1d9cb7d5 container dapi-container: STEP: delete the pod Mar 11 13:55:45.074: INFO: Waiting for pod downward-api-292fb4b0-0a5f-4a2b-8530-a7af1d9cb7d5 to disappear Mar 11 13:55:45.084: INFO: Pod downward-api-292fb4b0-0a5f-4a2b-8530-a7af1d9cb7d5 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:55:45.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4240" for this suite. 
Mar 11 13:55:51.102: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:55:51.177: INFO: namespace downward-api-4240 deletion completed in 6.089855263s • [SLOW TEST:8.235 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:55:51.177: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Mar 11 13:55:51.297: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-7869,SelfLink:/api/v1/namespaces/watch-7869/configmaps/e2e-watch-test-resource-version,UID:19a7c959-3966-40d8-88ac-b7f5d403c222,ResourceVersion:554379,Generation:0,CreationTimestamp:2020-03-11 13:55:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 11 13:55:51.297: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-7869,SelfLink:/api/v1/namespaces/watch-7869/configmaps/e2e-watch-test-resource-version,UID:19a7c959-3966-40d8-88ac-b7f5d403c222,ResourceVersion:554380,Generation:0,CreationTimestamp:2020-03-11 13:55:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:55:51.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7869" for this suite. 
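The watch here is opened with the resourceVersion returned by the first update, so the server delivers only the events that happened after that point: the second MODIFIED (mutation: 2) and the DELETED, exactly the two notifications logged above. A sketch against the pre-context client-go signatures of this era (the resourceVersion value is illustrative; in practice it comes from the update response):

package main

import (
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    clientset, err := kubernetes.NewForConfig(config)
    if err != nil {
        panic(err)
    }
    rv := "554378" // illustrative: the resourceVersion returned by the first update
    w, err := clientset.CoreV1().ConfigMaps("watch-7869").Watch(metav1.ListOptions{
        LabelSelector:   "watch-this-configmap=from-resource-version",
        ResourceVersion: rv, // only events after this version are replayed
    })
    if err != nil {
        panic(err)
    }
    defer w.Stop()
    for event := range w.ResultChan() {
        fmt.Println("Got :", event.Type)
    }
}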
Mar 11 13:55:57.355: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:55:57.423: INFO: namespace watch-7869 deletion completed in 6.110878832s • [SLOW TEST:6.246 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:55:57.423: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-downwardapi-87sj STEP: Creating a pod to test atomic-volume-subpath Mar 11 13:55:57.535: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-87sj" in namespace "subpath-6310" to be "success or failure" Mar 11 13:55:57.539: INFO: Pod "pod-subpath-test-downwardapi-87sj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.246487ms Mar 11 13:55:59.542: INFO: Pod "pod-subpath-test-downwardapi-87sj": Phase="Running", Reason="", readiness=true. Elapsed: 2.007076967s Mar 11 13:56:01.545: INFO: Pod "pod-subpath-test-downwardapi-87sj": Phase="Running", Reason="", readiness=true. Elapsed: 4.009832594s Mar 11 13:56:03.561: INFO: Pod "pod-subpath-test-downwardapi-87sj": Phase="Running", Reason="", readiness=true. Elapsed: 6.025968652s Mar 11 13:56:05.565: INFO: Pod "pod-subpath-test-downwardapi-87sj": Phase="Running", Reason="", readiness=true. Elapsed: 8.029661989s Mar 11 13:56:07.568: INFO: Pod "pod-subpath-test-downwardapi-87sj": Phase="Running", Reason="", readiness=true. Elapsed: 10.033126152s Mar 11 13:56:09.571: INFO: Pod "pod-subpath-test-downwardapi-87sj": Phase="Running", Reason="", readiness=true. Elapsed: 12.035631606s Mar 11 13:56:11.573: INFO: Pod "pod-subpath-test-downwardapi-87sj": Phase="Running", Reason="", readiness=true. Elapsed: 14.038246385s Mar 11 13:56:13.576: INFO: Pod "pod-subpath-test-downwardapi-87sj": Phase="Running", Reason="", readiness=true. Elapsed: 16.040984513s Mar 11 13:56:15.580: INFO: Pod "pod-subpath-test-downwardapi-87sj": Phase="Running", Reason="", readiness=true. Elapsed: 18.044719304s Mar 11 13:56:17.584: INFO: Pod "pod-subpath-test-downwardapi-87sj": Phase="Running", Reason="", readiness=true. Elapsed: 20.048572675s Mar 11 13:56:19.586: INFO: Pod "pod-subpath-test-downwardapi-87sj": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 22.05142698s STEP: Saw pod success Mar 11 13:56:19.587: INFO: Pod "pod-subpath-test-downwardapi-87sj" satisfied condition "success or failure" Mar 11 13:56:19.589: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-downwardapi-87sj container test-container-subpath-downwardapi-87sj: STEP: delete the pod Mar 11 13:56:19.629: INFO: Waiting for pod pod-subpath-test-downwardapi-87sj to disappear Mar 11 13:56:19.633: INFO: Pod pod-subpath-test-downwardapi-87sj no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-87sj Mar 11 13:56:19.633: INFO: Deleting pod "pod-subpath-test-downwardapi-87sj" in namespace "subpath-6310" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:56:19.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-6310" for this suite. Mar 11 13:56:25.643: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:56:25.740: INFO: namespace subpath-6310 deletion completed in 6.103992723s • [SLOW TEST:28.316 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:56:25.740: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:57:25.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8669" for this suite. 
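The long quiet stretch in this test (roughly 13:56:25 to 13:57:25) is the suite observing the pod for a full minute to confirm it never becomes Ready and never restarts: readiness failures only gate traffic, and only liveness failures restart a container. A sketch of such a pod with an always-failing exec readiness probe, using the v1.15-era Go types (where the probe's handler field is still named Handler):

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "readiness-demo"},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:    "test-container",
                Image:   "busybox",
                Command: []string{"sh", "-c", "sleep 600"},
                // An exec probe that always fails: the pod stays Running but
                // never becomes Ready, and RestartCount stays 0.
                ReadinessProbe: &corev1.Probe{
                    Handler: corev1.Handler{ // renamed ProbeHandler in much newer releases
                        Exec: &corev1.ExecAction{Command: []string{"/bin/false"}},
                    },
                    InitialDelaySeconds: 5,
                    PeriodSeconds:       5,
                },
            }},
        },
    }
    fmt.Printf("%+v\n", pod)
}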
Mar 11 13:57:47.824: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:57:47.908: INFO: namespace container-probe-8669 deletion completed in 22.094573112s • [SLOW TEST:82.168 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:57:47.908: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0311 13:57:48.616793 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 11 13:57:48.616: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:57:48.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6545" for this suite. 
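"Not orphaning" means the deployment is deleted with a cascading propagation policy, so the garbage collector is expected to remove the owned ReplicaSet and its pods as well; the "expected 0 rs, got 1 rs" lines are simply the test polling until collection finishes. A sketch of such a delete, with illustrative names, against the pre-context client-go signatures of this era:

package main

import (
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    clientset, err := kubernetes.NewForConfig(config)
    if err != nil {
        panic(err)
    }
    // A cascading policy (Background or Foreground) lets the garbage
    // collector delete the dependents; Orphan would leave them behind.
    policy := metav1.DeletePropagationBackground
    if err := clientset.AppsV1().Deployments("default").Delete("simpletest-deployment", &metav1.DeleteOptions{
        PropagationPolicy: &policy,
    }); err != nil {
        panic(err)
    }
}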
Mar 11 13:57:54.630: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:57:54.705: INFO: namespace gc-6545 deletion completed in 6.086023454s • [SLOW TEST:6.797 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:57:54.706: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on tmpfs Mar 11 13:57:54.758: INFO: Waiting up to 5m0s for pod "pod-a4b71587-b8ba-4444-bfb8-2d5d0b591c19" in namespace "emptydir-4240" to be "success or failure" Mar 11 13:57:54.766: INFO: Pod "pod-a4b71587-b8ba-4444-bfb8-2d5d0b591c19": Phase="Pending", Reason="", readiness=false. Elapsed: 7.352005ms Mar 11 13:57:56.770: INFO: Pod "pod-a4b71587-b8ba-4444-bfb8-2d5d0b591c19": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011311675s STEP: Saw pod success Mar 11 13:57:56.770: INFO: Pod "pod-a4b71587-b8ba-4444-bfb8-2d5d0b591c19" satisfied condition "success or failure" Mar 11 13:57:56.774: INFO: Trying to get logs from node iruya-worker pod pod-a4b71587-b8ba-4444-bfb8-2d5d0b591c19 container test-container: STEP: delete the pod Mar 11 13:57:56.820: INFO: Waiting for pod pod-a4b71587-b8ba-4444-bfb8-2d5d0b591c19 to disappear Mar 11 13:57:56.824: INFO: Pod pod-a4b71587-b8ba-4444-bfb8-2d5d0b591c19 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:57:56.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4240" for this suite. 
Mar 11 13:58:02.838: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:58:02.917: INFO: namespace emptydir-4240 deletion completed in 6.089389156s • [SLOW TEST:8.211 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:58:02.917: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Mar 11 13:58:02.984: INFO: Waiting up to 5m0s for pod "downward-api-c6d1cf41-c5e2-40ff-8f23-c799795214a1" in namespace "downward-api-3652" to be "success or failure" Mar 11 13:58:03.001: INFO: Pod "downward-api-c6d1cf41-c5e2-40ff-8f23-c799795214a1": Phase="Pending", Reason="", readiness=false. Elapsed: 16.909012ms Mar 11 13:58:05.005: INFO: Pod "downward-api-c6d1cf41-c5e2-40ff-8f23-c799795214a1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.020965934s STEP: Saw pod success Mar 11 13:58:05.005: INFO: Pod "downward-api-c6d1cf41-c5e2-40ff-8f23-c799795214a1" satisfied condition "success or failure" Mar 11 13:58:05.008: INFO: Trying to get logs from node iruya-worker2 pod downward-api-c6d1cf41-c5e2-40ff-8f23-c799795214a1 container dapi-container: STEP: delete the pod Mar 11 13:58:05.036: INFO: Waiting for pod downward-api-c6d1cf41-c5e2-40ff-8f23-c799795214a1 to disappear Mar 11 13:58:05.053: INFO: Pod downward-api-c6d1cf41-c5e2-40ff-8f23-c799795214a1 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:58:05.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3652" for this suite. 
Mar 11 13:58:11.070: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:58:11.145: INFO: namespace downward-api-3652 deletion completed in 6.088432213s • [SLOW TEST:8.229 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:58:11.146: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-c409c449-dce0-476e-a123-372f6b090dac STEP: Creating a pod to test consume secrets Mar 11 13:58:11.224: INFO: Waiting up to 5m0s for pod "pod-secrets-6920ae2a-9ae8-481c-a7fd-41b1fff6234e" in namespace "secrets-6896" to be "success or failure" Mar 11 13:58:11.234: INFO: Pod "pod-secrets-6920ae2a-9ae8-481c-a7fd-41b1fff6234e": Phase="Pending", Reason="", readiness=false. Elapsed: 10.287989ms Mar 11 13:58:13.238: INFO: Pod "pod-secrets-6920ae2a-9ae8-481c-a7fd-41b1fff6234e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.013745633s STEP: Saw pod success Mar 11 13:58:13.238: INFO: Pod "pod-secrets-6920ae2a-9ae8-481c-a7fd-41b1fff6234e" satisfied condition "success or failure" Mar 11 13:58:13.240: INFO: Trying to get logs from node iruya-worker pod pod-secrets-6920ae2a-9ae8-481c-a7fd-41b1fff6234e container secret-volume-test: STEP: delete the pod Mar 11 13:58:13.254: INFO: Waiting for pod pod-secrets-6920ae2a-9ae8-481c-a7fd-41b1fff6234e to disappear Mar 11 13:58:13.258: INFO: Pod pod-secrets-6920ae2a-9ae8-481c-a7fd-41b1fff6234e no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:58:13.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6896" for this suite. 
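defaultMode on a secret volume sets the permission bits of every projected file, which is what the test asserts from inside the container. A sketch with illustrative names and an assumed mode of 0400, using the v1.15-era Go types:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    defaultMode := int32(0400) // assumed for illustration: owner read-only
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name: "secret-volume",
                VolumeSource: corev1.VolumeSource{
                    Secret: &corev1.SecretVolumeSource{
                        SecretName:  "secret-test",
                        DefaultMode: &defaultMode, // applied to every file projected from the secret
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:         "secret-volume-test",
                Image:        "busybox",
                Command:      []string{"sh", "-c", "ls -l /etc/secret-volume"}, // prints the file modes for the assertion
                VolumeMounts: []corev1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret-volume"}},
            }},
        },
    }
    fmt.Printf("%+v\n", pod)
}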
Mar 11 13:58:19.273: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:58:19.345: INFO: namespace secrets-6896 deletion completed in 6.084202881s • [SLOW TEST:8.199 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:58:19.345: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 11 13:58:21.408: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:58:21.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-490" for this suite. 
Mar 11 13:58:27.462: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:58:27.540: INFO: namespace container-runtime-490 deletion completed in 6.089304485s • [SLOW TEST:8.195 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:58:27.540: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on tmpfs Mar 11 13:58:27.604: INFO: Waiting up to 5m0s for pod "pod-4b3ac6af-3ffb-4d33-a3f2-30451245fe6c" in namespace "emptydir-2469" to be "success or failure" Mar 11 13:58:27.628: INFO: Pod "pod-4b3ac6af-3ffb-4d33-a3f2-30451245fe6c": Phase="Pending", Reason="", readiness=false. Elapsed: 24.242282ms Mar 11 13:58:29.632: INFO: Pod "pod-4b3ac6af-3ffb-4d33-a3f2-30451245fe6c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.027585764s STEP: Saw pod success Mar 11 13:58:29.632: INFO: Pod "pod-4b3ac6af-3ffb-4d33-a3f2-30451245fe6c" satisfied condition "success or failure" Mar 11 13:58:29.634: INFO: Trying to get logs from node iruya-worker pod pod-4b3ac6af-3ffb-4d33-a3f2-30451245fe6c container test-container: STEP: delete the pod Mar 11 13:58:29.677: INFO: Waiting for pod pod-4b3ac6af-3ffb-4d33-a3f2-30451245fe6c to disappear Mar 11 13:58:29.704: INFO: Pod pod-4b3ac6af-3ffb-4d33-a3f2-30451245fe6c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:58:29.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2469" for this suite. 
Mar 11 13:58:35.735: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:58:35.807: INFO: namespace emptydir-2469 deletion completed in 6.099192529s • [SLOW TEST:8.267 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:58:35.808: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's command Mar 11 13:58:35.943: INFO: Waiting up to 5m0s for pod "var-expansion-071fc3c9-3f25-4449-beba-b9fe2bc06960" in namespace "var-expansion-5283" to be "success or failure" Mar 11 13:58:35.947: INFO: Pod "var-expansion-071fc3c9-3f25-4449-beba-b9fe2bc06960": Phase="Pending", Reason="", readiness=false. Elapsed: 3.681626ms Mar 11 13:58:37.950: INFO: Pod "var-expansion-071fc3c9-3f25-4449-beba-b9fe2bc06960": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006906401s STEP: Saw pod success Mar 11 13:58:37.950: INFO: Pod "var-expansion-071fc3c9-3f25-4449-beba-b9fe2bc06960" satisfied condition "success or failure" Mar 11 13:58:37.952: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-071fc3c9-3f25-4449-beba-b9fe2bc06960 container dapi-container: STEP: delete the pod Mar 11 13:58:38.020: INFO: Waiting for pod var-expansion-071fc3c9-3f25-4449-beba-b9fe2bc06960 to disappear Mar 11 13:58:38.037: INFO: Pod var-expansion-071fc3c9-3f25-4449-beba-b9fe2bc06960 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:58:38.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-5283" for this suite. 
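The substitution being tested is Kubernetes-level, not shell-level: the kubelet expands $(VAR) references in a container's command and args from the container's own env before the process starts. A sketch with illustrative names, using the v1.15-era Go types:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:  "dapi-container",
                Image: "busybox",
                Env:   []corev1.EnvVar{{Name: "TEST_VAR", Value: "test-value"}},
                // The kubelet substitutes $(TEST_VAR) from the container's env
                // before the shell ever sees the command line.
                Command: []string{"sh", "-c", "echo $(TEST_VAR)"},
            }},
        },
    }
    fmt.Printf("%+v\n", pod)
}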
Mar 11 13:58:44.063: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:58:44.125: INFO: namespace var-expansion-5283 deletion completed in 6.085035328s • [SLOW TEST:8.317 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:58:44.125: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0311 13:58:54.257700 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 11 13:58:54.257: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:58:54.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2467" for this suite. 
Mar 11 13:59:00.292: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:59:00.355: INFO: namespace gc-2467 deletion completed in 6.09387549s • [SLOW TEST:16.229 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:59:00.355: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-upd-4c395d1d-c31a-49de-91d0-14826e21c26d STEP: Creating the pod STEP: Updating configmap configmap-test-upd-4c395d1d-c31a-49de-91d0-14826e21c26d STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:59:04.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3420" for this suite. 
Mar 11 13:59:26.480: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:59:26.581: INFO: namespace configmap-3420 deletion completed in 22.110866612s • [SLOW TEST:26.226 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:59:26.581: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Mar 11 13:59:31.170: INFO: Successfully updated pod "pod-update-activedeadlineseconds-e941cc81-40b5-4a69-822d-4bf31b25465f" Mar 11 13:59:31.170: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-e941cc81-40b5-4a69-822d-4bf31b25465f" in namespace "pods-341" to be "terminated due to deadline exceeded" Mar 11 13:59:31.182: INFO: Pod "pod-update-activedeadlineseconds-e941cc81-40b5-4a69-822d-4bf31b25465f": Phase="Running", Reason="", readiness=true. Elapsed: 12.467205ms Mar 11 13:59:33.187: INFO: Pod "pod-update-activedeadlineseconds-e941cc81-40b5-4a69-822d-4bf31b25465f": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.016781411s Mar 11 13:59:33.187: INFO: Pod "pod-update-activedeadlineseconds-e941cc81-40b5-4a69-822d-4bf31b25465f" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:59:33.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-341" for this suite. 
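activeDeadlineSeconds is one of the few pod spec fields that may be set, or shortened, on a running pod; once the deadline elapses, the kubelet fails the pod with Reason=DeadlineExceeded, which is the "terminated due to deadline exceeded" condition the log waits for above. A sketch of the update, with illustrative names, against the pre-context client-go signatures of this era:

package main

import (
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    clientset, err := kubernetes.NewForConfig(config)
    if err != nil {
        panic(err)
    }
    pods := clientset.CoreV1().Pods("default") // namespace and pod name are illustrative
    pod, err := pods.Get("pod-update-activedeadlineseconds", metav1.GetOptions{})
    if err != nil {
        panic(err)
    }
    // Shorten the deadline on the live pod; the kubelet then terminates it
    // with Phase=Failed, Reason=DeadlineExceeded.
    deadline := int64(5)
    pod.Spec.ActiveDeadlineSeconds = &deadline
    if _, err := pods.Update(pod); err != nil {
        panic(err)
    }
}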
Mar 11 13:59:39.203: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:59:39.287: INFO: namespace pods-341 deletion completed in 6.096412358s • [SLOW TEST:12.706 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:59:39.288: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Mar 11 13:59:39.339: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Mar 11 13:59:46.393: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:59:46.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5764" for this suite. 
Mar 11 13:59:52.416: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 13:59:52.490: INFO: namespace pods-5764 deletion completed in 6.090100064s • [SLOW TEST:13.202 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 13:59:52.490: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on tmpfs Mar 11 13:59:52.550: INFO: Waiting up to 5m0s for pod "pod-8236e314-d65e-4003-b853-8128ae3b4c43" in namespace "emptydir-4502" to be "success or failure" Mar 11 13:59:52.557: INFO: Pod "pod-8236e314-d65e-4003-b853-8128ae3b4c43": Phase="Pending", Reason="", readiness=false. Elapsed: 7.127066ms Mar 11 13:59:54.560: INFO: Pod "pod-8236e314-d65e-4003-b853-8128ae3b4c43": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010798048s STEP: Saw pod success Mar 11 13:59:54.561: INFO: Pod "pod-8236e314-d65e-4003-b853-8128ae3b4c43" satisfied condition "success or failure" Mar 11 13:59:54.563: INFO: Trying to get logs from node iruya-worker pod pod-8236e314-d65e-4003-b853-8128ae3b4c43 container test-container: STEP: delete the pod Mar 11 13:59:54.612: INFO: Waiting for pod pod-8236e314-d65e-4003-b853-8128ae3b4c43 to disappear Mar 11 13:59:54.617: INFO: Pod pod-8236e314-d65e-4003-b853-8128ae3b4c43 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 13:59:54.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4502" for this suite. 
Mar 11 14:00:00.633: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 14:00:00.716: INFO: namespace emptydir-4502 deletion completed in 6.094641094s • [SLOW TEST:8.225 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 14:00:00.716: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 11 14:00:00.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Mar 11 14:00:00.922: INFO: stderr: "" Mar 11 14:00:00.923: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.10\", GitCommit:\"1bea6c00a7055edef03f1d4bb58b773fa8917f11\", GitTreeState:\"clean\", BuildDate:\"2020-03-09T11:07:06Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T00:28:37Z\", GoVersion:\"go1.12.12\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 14:00:00.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7683" for this suite. 
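The kubectl version spec above only asserts that both the client and server version structs appear on stdout. The same check by hand (the temp file path is illustrative):

# Expect both a Client Version and a Server Version line, as in the stdout captured above.
kubectl version | tee /tmp/kubectl-version.out
grep -q 'Client Version' /tmp/kubectl-version.out && grep -q 'Server Version' /tmp/kubectl-version.out && echo OK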
Mar 11 14:00:06.940: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 14:00:07.017: INFO: namespace kubectl-7683 deletion completed in 6.089902032s • [SLOW TEST:6.301 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 14:00:07.017: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-cf6ce5cb-f579-41bb-9521-e6fbdf5b4726 STEP: Creating a pod to test consume secrets Mar 11 14:00:07.309: INFO: Waiting up to 5m0s for pod "pod-secrets-daf11873-d652-428a-8ec0-1a0c478b0922" in namespace "secrets-6230" to be "success or failure" Mar 11 14:00:07.314: INFO: Pod "pod-secrets-daf11873-d652-428a-8ec0-1a0c478b0922": Phase="Pending", Reason="", readiness=false. Elapsed: 4.13516ms Mar 11 14:00:09.317: INFO: Pod "pod-secrets-daf11873-d652-428a-8ec0-1a0c478b0922": Phase="Running", Reason="", readiness=true. Elapsed: 2.007162182s Mar 11 14:00:11.320: INFO: Pod "pod-secrets-daf11873-d652-428a-8ec0-1a0c478b0922": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010252607s STEP: Saw pod success Mar 11 14:00:11.320: INFO: Pod "pod-secrets-daf11873-d652-428a-8ec0-1a0c478b0922" satisfied condition "success or failure" Mar 11 14:00:11.322: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-daf11873-d652-428a-8ec0-1a0c478b0922 container secret-volume-test: STEP: delete the pod Mar 11 14:00:11.339: INFO: Waiting for pod pod-secrets-daf11873-d652-428a-8ec0-1a0c478b0922 to disappear Mar 11 14:00:11.361: INFO: Pod pod-secrets-daf11873-d652-428a-8ec0-1a0c478b0922 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 14:00:11.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6230" for this suite. Mar 11 14:00:17.386: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 14:00:17.452: INFO: namespace secrets-6230 deletion completed in 6.086628294s STEP: Destroying namespace "secret-namespace-4307" for this suite. 
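The secrets spec above verifies that a volume mount resolves a secret name within the pod's own namespace even when another namespace holds a secret of the same name. A minimal sketch of that setup; all namespace, secret, pod, and image names are illustrative assumptions:

# Same secret name in two namespaces; the pod sees only its own namespace's copy.
kubectl create namespace demo-a
kubectl create namespace demo-b
kubectl -n demo-a create secret generic shared-name --from-literal=data=from-a
kubectl -n demo-b create secret generic shared-name --from-literal=data=from-b
kubectl -n demo-a apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-mount-demo
spec:
  containers:
  - name: secret-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/secret-volume/data"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: shared-name    # resolved within demo-a only
  restartPolicy: Never
EOF
kubectl -n demo-a logs pod/secret-mount-demo    # once completed, prints: from-a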
Mar 11 14:00:23.461: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 14:00:23.512: INFO: namespace secret-namespace-4307 deletion completed in 6.060762745s • [SLOW TEST:16.495 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 14:00:23.513: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6505.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-6505.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6505.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-6505.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6505.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-6505.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6505.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-6505.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6505.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-6505.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6505.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-6505.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6505.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 39.78.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.78.39_udp@PTR;check="$$(dig +tcp +noall +answer +search 39.78.110.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.110.78.39_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6505.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-6505.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6505.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-6505.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6505.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-6505.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6505.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-6505.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6505.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-6505.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6505.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-6505.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6505.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 39.78.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.78.39_udp@PTR;check="$$(dig +tcp +noall +answer +search 39.78.110.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.110.78.39_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 11 14:00:27.698: INFO: Unable to read wheezy_udp@dns-test-service.dns-6505.svc.cluster.local from pod dns-6505/dns-test-6894a990-2281-4eac-bc5d-139d365737a9: the server could not find the requested resource (get pods dns-test-6894a990-2281-4eac-bc5d-139d365737a9) Mar 11 14:00:27.701: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6505.svc.cluster.local from pod dns-6505/dns-test-6894a990-2281-4eac-bc5d-139d365737a9: the server could not find the requested resource (get pods dns-test-6894a990-2281-4eac-bc5d-139d365737a9) Mar 11 14:00:27.704: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6505.svc.cluster.local from pod dns-6505/dns-test-6894a990-2281-4eac-bc5d-139d365737a9: the server could not find the requested resource (get pods dns-test-6894a990-2281-4eac-bc5d-139d365737a9) Mar 11 14:00:27.727: INFO: Unable to read jessie_udp@dns-test-service.dns-6505.svc.cluster.local from pod dns-6505/dns-test-6894a990-2281-4eac-bc5d-139d365737a9: the server could not find the requested resource (get pods dns-test-6894a990-2281-4eac-bc5d-139d365737a9) Mar 11 14:00:27.730: INFO: Unable to read jessie_tcp@dns-test-service.dns-6505.svc.cluster.local from pod dns-6505/dns-test-6894a990-2281-4eac-bc5d-139d365737a9: the server could not find the requested resource (get pods dns-test-6894a990-2281-4eac-bc5d-139d365737a9) Mar 11 14:00:27.733: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6505.svc.cluster.local from pod dns-6505/dns-test-6894a990-2281-4eac-bc5d-139d365737a9: the server could not find the requested resource (get pods dns-test-6894a990-2281-4eac-bc5d-139d365737a9) Mar 11 14:00:27.751: INFO: Lookups using dns-6505/dns-test-6894a990-2281-4eac-bc5d-139d365737a9 failed for: [wheezy_udp@dns-test-service.dns-6505.svc.cluster.local wheezy_tcp@dns-test-service.dns-6505.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6505.svc.cluster.local jessie_udp@dns-test-service.dns-6505.svc.cluster.local jessie_tcp@dns-test-service.dns-6505.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6505.svc.cluster.local] Mar 11 14:00:32.755: INFO: Unable to read wheezy_udp@dns-test-service.dns-6505.svc.cluster.local from pod dns-6505/dns-test-6894a990-2281-4eac-bc5d-139d365737a9: the server could not find the requested resource (get pods dns-test-6894a990-2281-4eac-bc5d-139d365737a9) Mar 11 14:00:32.758: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6505.svc.cluster.local from pod dns-6505/dns-test-6894a990-2281-4eac-bc5d-139d365737a9: the server could not find the requested resource (get pods dns-test-6894a990-2281-4eac-bc5d-139d365737a9) Mar 11 14:00:32.785: INFO: Unable to read jessie_udp@dns-test-service.dns-6505.svc.cluster.local from pod dns-6505/dns-test-6894a990-2281-4eac-bc5d-139d365737a9: the server could not find the requested resource (get pods dns-test-6894a990-2281-4eac-bc5d-139d365737a9) Mar 11 14:00:32.787: INFO: Unable to read jessie_tcp@dns-test-service.dns-6505.svc.cluster.local from pod dns-6505/dns-test-6894a990-2281-4eac-bc5d-139d365737a9: the server could not find the requested resource (get pods dns-test-6894a990-2281-4eac-bc5d-139d365737a9) Mar 11 14:00:32.807: INFO: Lookups using dns-6505/dns-test-6894a990-2281-4eac-bc5d-139d365737a9 failed for: 
[wheezy_udp@dns-test-service.dns-6505.svc.cluster.local wheezy_tcp@dns-test-service.dns-6505.svc.cluster.local jessie_udp@dns-test-service.dns-6505.svc.cluster.local jessie_tcp@dns-test-service.dns-6505.svc.cluster.local] Mar 11 14:00:37.756: INFO: Unable to read wheezy_udp@dns-test-service.dns-6505.svc.cluster.local from pod dns-6505/dns-test-6894a990-2281-4eac-bc5d-139d365737a9: the server could not find the requested resource (get pods dns-test-6894a990-2281-4eac-bc5d-139d365737a9) Mar 11 14:00:37.759: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6505.svc.cluster.local from pod dns-6505/dns-test-6894a990-2281-4eac-bc5d-139d365737a9: the server could not find the requested resource (get pods dns-test-6894a990-2281-4eac-bc5d-139d365737a9) Mar 11 14:00:37.783: INFO: Unable to read jessie_udp@dns-test-service.dns-6505.svc.cluster.local from pod dns-6505/dns-test-6894a990-2281-4eac-bc5d-139d365737a9: the server could not find the requested resource (get pods dns-test-6894a990-2281-4eac-bc5d-139d365737a9) Mar 11 14:00:37.785: INFO: Unable to read jessie_tcp@dns-test-service.dns-6505.svc.cluster.local from pod dns-6505/dns-test-6894a990-2281-4eac-bc5d-139d365737a9: the server could not find the requested resource (get pods dns-test-6894a990-2281-4eac-bc5d-139d365737a9) Mar 11 14:00:37.806: INFO: Lookups using dns-6505/dns-test-6894a990-2281-4eac-bc5d-139d365737a9 failed for: [wheezy_udp@dns-test-service.dns-6505.svc.cluster.local wheezy_tcp@dns-test-service.dns-6505.svc.cluster.local jessie_udp@dns-test-service.dns-6505.svc.cluster.local jessie_tcp@dns-test-service.dns-6505.svc.cluster.local] Mar 11 14:00:42.755: INFO: Unable to read wheezy_udp@dns-test-service.dns-6505.svc.cluster.local from pod dns-6505/dns-test-6894a990-2281-4eac-bc5d-139d365737a9: the server could not find the requested resource (get pods dns-test-6894a990-2281-4eac-bc5d-139d365737a9) Mar 11 14:00:42.757: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6505.svc.cluster.local from pod dns-6505/dns-test-6894a990-2281-4eac-bc5d-139d365737a9: the server could not find the requested resource (get pods dns-test-6894a990-2281-4eac-bc5d-139d365737a9) Mar 11 14:00:42.780: INFO: Unable to read jessie_udp@dns-test-service.dns-6505.svc.cluster.local from pod dns-6505/dns-test-6894a990-2281-4eac-bc5d-139d365737a9: the server could not find the requested resource (get pods dns-test-6894a990-2281-4eac-bc5d-139d365737a9) Mar 11 14:00:42.784: INFO: Unable to read jessie_tcp@dns-test-service.dns-6505.svc.cluster.local from pod dns-6505/dns-test-6894a990-2281-4eac-bc5d-139d365737a9: the server could not find the requested resource (get pods dns-test-6894a990-2281-4eac-bc5d-139d365737a9) Mar 11 14:00:42.803: INFO: Lookups using dns-6505/dns-test-6894a990-2281-4eac-bc5d-139d365737a9 failed for: [wheezy_udp@dns-test-service.dns-6505.svc.cluster.local wheezy_tcp@dns-test-service.dns-6505.svc.cluster.local jessie_udp@dns-test-service.dns-6505.svc.cluster.local jessie_tcp@dns-test-service.dns-6505.svc.cluster.local] Mar 11 14:00:47.755: INFO: Unable to read wheezy_udp@dns-test-service.dns-6505.svc.cluster.local from pod dns-6505/dns-test-6894a990-2281-4eac-bc5d-139d365737a9: the server could not find the requested resource (get pods dns-test-6894a990-2281-4eac-bc5d-139d365737a9) Mar 11 14:00:47.759: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6505.svc.cluster.local from pod dns-6505/dns-test-6894a990-2281-4eac-bc5d-139d365737a9: the server could not find the requested resource (get pods 
dns-test-6894a990-2281-4eac-bc5d-139d365737a9) Mar 11 14:00:47.788: INFO: Unable to read jessie_udp@dns-test-service.dns-6505.svc.cluster.local from pod dns-6505/dns-test-6894a990-2281-4eac-bc5d-139d365737a9: the server could not find the requested resource (get pods dns-test-6894a990-2281-4eac-bc5d-139d365737a9) Mar 11 14:00:47.790: INFO: Unable to read jessie_tcp@dns-test-service.dns-6505.svc.cluster.local from pod dns-6505/dns-test-6894a990-2281-4eac-bc5d-139d365737a9: the server could not find the requested resource (get pods dns-test-6894a990-2281-4eac-bc5d-139d365737a9) Mar 11 14:00:47.811: INFO: Lookups using dns-6505/dns-test-6894a990-2281-4eac-bc5d-139d365737a9 failed for: [wheezy_udp@dns-test-service.dns-6505.svc.cluster.local wheezy_tcp@dns-test-service.dns-6505.svc.cluster.local jessie_udp@dns-test-service.dns-6505.svc.cluster.local jessie_tcp@dns-test-service.dns-6505.svc.cluster.local] Mar 11 14:00:52.762: INFO: Unable to read wheezy_udp@dns-test-service.dns-6505.svc.cluster.local from pod dns-6505/dns-test-6894a990-2281-4eac-bc5d-139d365737a9: the server could not find the requested resource (get pods dns-test-6894a990-2281-4eac-bc5d-139d365737a9) Mar 11 14:00:52.766: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6505.svc.cluster.local from pod dns-6505/dns-test-6894a990-2281-4eac-bc5d-139d365737a9: the server could not find the requested resource (get pods dns-test-6894a990-2281-4eac-bc5d-139d365737a9) Mar 11 14:00:52.794: INFO: Unable to read jessie_udp@dns-test-service.dns-6505.svc.cluster.local from pod dns-6505/dns-test-6894a990-2281-4eac-bc5d-139d365737a9: the server could not find the requested resource (get pods dns-test-6894a990-2281-4eac-bc5d-139d365737a9) Mar 11 14:00:52.796: INFO: Unable to read jessie_tcp@dns-test-service.dns-6505.svc.cluster.local from pod dns-6505/dns-test-6894a990-2281-4eac-bc5d-139d365737a9: the server could not find the requested resource (get pods dns-test-6894a990-2281-4eac-bc5d-139d365737a9) Mar 11 14:00:52.817: INFO: Lookups using dns-6505/dns-test-6894a990-2281-4eac-bc5d-139d365737a9 failed for: [wheezy_udp@dns-test-service.dns-6505.svc.cluster.local wheezy_tcp@dns-test-service.dns-6505.svc.cluster.local jessie_udp@dns-test-service.dns-6505.svc.cluster.local jessie_tcp@dns-test-service.dns-6505.svc.cluster.local] Mar 11 14:00:57.818: INFO: DNS probes using dns-6505/dns-test-6894a990-2281-4eac-bc5d-139d365737a9 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 14:00:58.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6505" for this suite. 
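The DNS spec above probes A, SRV, and PTR records for the headless service from both wheezy and jessie probe images until all lookups succeed. The same records can be checked by hand from any pod that ships dig; the probe pod and image below are illustrative assumptions, and since the suite tears down dns-6505 afterwards, a live service name would have to be substituted:

# Resolve the service records the probe loops over (names/IP taken from this run).
kubectl run dig-probe --image=docker.io/tutum/dnsutils --restart=Never --command -- sleep 3600
kubectl wait --for=condition=Ready pod/dig-probe --timeout=60s
kubectl exec dig-probe -- dig +short dns-test-service.dns-6505.svc.cluster.local A
kubectl exec dig-probe -- dig +short _http._tcp.dns-test-service.dns-6505.svc.cluster.local SRV
kubectl exec dig-probe -- dig +short -x 10.110.78.39    # PTR for the service ClusterIP probed above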
Mar 11 14:01:04.044: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 14:01:04.113: INFO: namespace dns-6505 deletion completed in 6.085882592s • [SLOW TEST:40.601 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 14:01:04.114: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Mar 11 14:01:07.300: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-0331272a-b38d-45f0-9c02-b7b9d6f8208e,GenerateName:,Namespace:events-262,SelfLink:/api/v1/namespaces/events-262/pods/send-events-0331272a-b38d-45f0-9c02-b7b9d6f8208e,UID:a046ac64-02f0-43ba-b347-7014d7cfa806,ResourceVersion:555509,Generation:0,CreationTimestamp:2020-03-11 14:01:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 279482023,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-c6xbf {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-c6xbf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-c6xbf true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0034e1060} {node.kubernetes.io/unreachable Exists NoExecute 0xc0034e1080}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 
00:00:00 +0000 UTC 2020-03-11 14:01:05 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 14:01:06 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 14:01:06 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-11 14:01:05 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.1.182,StartTime:2020-03-11 14:01:05 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-03-11 14:01:06 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://89a8033692ed154756c6a01c60be11f50251120b91e03edbb436f91e8d1e64aa}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod Mar 11 14:01:09.312: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Mar 11 14:01:11.316: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 14:01:11.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-262" for this suite. Mar 11 14:01:49.340: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 14:01:49.414: INFO: namespace events-262 deletion completed in 38.088484264s • [SLOW TEST:45.301 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 14:01:49.415: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-a8616b8e-bf40-444b-b780-53f9cfe85b08 STEP: Creating a pod to test consume configMaps Mar 11 14:01:49.501: INFO: Waiting up to 5m0s for pod "pod-configmaps-31a6faca-a045-49c0-88c6-ae612c1cfee2" in namespace "configmap-4323" to be "success or failure" Mar 11 14:01:49.508: INFO: Pod "pod-configmaps-31a6faca-a045-49c0-88c6-ae612c1cfee2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.897712ms Mar 11 14:01:51.511: INFO: Pod "pod-configmaps-31a6faca-a045-49c0-88c6-ae612c1cfee2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.010675362s STEP: Saw pod success Mar 11 14:01:51.511: INFO: Pod "pod-configmaps-31a6faca-a045-49c0-88c6-ae612c1cfee2" satisfied condition "success or failure" Mar 11 14:01:51.515: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-31a6faca-a045-49c0-88c6-ae612c1cfee2 container configmap-volume-test: STEP: delete the pod Mar 11 14:01:51.544: INFO: Waiting for pod pod-configmaps-31a6faca-a045-49c0-88c6-ae612c1cfee2 to disappear Mar 11 14:01:51.549: INFO: Pod pod-configmaps-31a6faca-a045-49c0-88c6-ae612c1cfee2 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 14:01:51.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4323" for this suite. Mar 11 14:01:57.565: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 14:01:57.636: INFO: namespace configmap-4323 deletion completed in 6.082520554s • [SLOW TEST:8.221 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 14:01:57.636: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's args Mar 11 14:01:57.707: INFO: Waiting up to 5m0s for pod "var-expansion-cbb59e0f-245b-4533-b08c-91256332d9e1" in namespace "var-expansion-5483" to be "success or failure" Mar 11 14:01:57.725: INFO: Pod "var-expansion-cbb59e0f-245b-4533-b08c-91256332d9e1": Phase="Pending", Reason="", readiness=false. Elapsed: 18.078646ms Mar 11 14:01:59.729: INFO: Pod "var-expansion-cbb59e0f-245b-4533-b08c-91256332d9e1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.021952182s STEP: Saw pod success Mar 11 14:01:59.729: INFO: Pod "var-expansion-cbb59e0f-245b-4533-b08c-91256332d9e1" satisfied condition "success or failure" Mar 11 14:01:59.732: INFO: Trying to get logs from node iruya-worker pod var-expansion-cbb59e0f-245b-4533-b08c-91256332d9e1 container dapi-container: STEP: delete the pod Mar 11 14:01:59.762: INFO: Waiting for pod var-expansion-cbb59e0f-245b-4533-b08c-91256332d9e1 to disappear Mar 11 14:01:59.786: INFO: Pod var-expansion-cbb59e0f-245b-4533-b08c-91256332d9e1 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 14:01:59.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-5483" for this suite. 
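The variable-expansion spec above checks that $(VAR) references in a container's args are substituted from the declared environment before the container starts. A minimal sketch; the pod name, image, and variable are illustrative assumptions:

# $(GREETING) in args is expanded by Kubernetes, not by the shell.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  containers:
  - name: dapi-container
    image: docker.io/library/busybox:1.29
    env:
    - name: GREETING
      value: "hello from args"
    command: ["sh", "-c"]
    args: ["echo $(GREETING)"]   # substituted before the shell runs
  restartPolicy: Never
EOF
kubectl logs var-expansion-demo    # once completed, prints: hello from args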
Mar 11 14:02:05.799: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 14:02:05.880: INFO: namespace var-expansion-5483 deletion completed in 6.090404405s • [SLOW TEST:8.244 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 14:02:05.880: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 14:02:30.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-4897" for this suite. Mar 11 14:02:36.084: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 14:02:36.138: INFO: namespace namespaces-4897 deletion completed in 6.064063731s STEP: Destroying namespace "nsdeletetest-4472" for this suite. Mar 11 14:02:36.140: INFO: Namespace nsdeletetest-4472 was already deleted STEP: Destroying namespace "nsdeletetest-1390" for this suite. 
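The namespaces spec above confirms that deleting a namespace removes every pod inside it. The same behavior, hand-run; the namespace and pod names are illustrative assumptions:

# Namespace deletion garbage-collects the namespace's contents, pods included.
kubectl create namespace nsdelete-demo
kubectl -n nsdelete-demo run sleeper --image=docker.io/library/busybox:1.29 --restart=Never -- sleep 3600
kubectl delete namespace nsdelete-demo      # waits for the namespace contents to be finalized
kubectl get pods -n nsdelete-demo           # expect an error: the namespace (and its pods) are gone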
Mar 11 14:02:42.152: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 14:02:42.223: INFO: namespace nsdeletetest-1390 deletion completed in 6.083699811s • [SLOW TEST:36.343 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 14:02:42.223: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Mar 11 14:02:42.280: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-1828' Mar 11 14:02:43.798: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 11 14:02:43.798: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created Mar 11 14:02:43.827: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller Mar 11 14:02:43.835: INFO: scanned /root for discovery docs: Mar 11 14:02:43.835: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-1828' Mar 11 14:02:59.759: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Mar 11 14:02:59.759: INFO: stdout: "Created e2e-test-nginx-rc-1b3259479e41f83b29f2d473cafcffdf\nScaling up e2e-test-nginx-rc-1b3259479e41f83b29f2d473cafcffdf from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-1b3259479e41f83b29f2d473cafcffdf up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. 
Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-1b3259479e41f83b29f2d473cafcffdf to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. Mar 11 14:02:59.759: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-1828' Mar 11 14:02:59.874: INFO: stderr: "" Mar 11 14:02:59.874: INFO: stdout: "e2e-test-nginx-rc-1b3259479e41f83b29f2d473cafcffdf-qwr6j " Mar 11 14:02:59.874: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-1b3259479e41f83b29f2d473cafcffdf-qwr6j -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1828' Mar 11 14:02:59.976: INFO: stderr: "" Mar 11 14:02:59.976: INFO: stdout: "true" Mar 11 14:02:59.977: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-1b3259479e41f83b29f2d473cafcffdf-qwr6j -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1828' Mar 11 14:03:00.060: INFO: stderr: "" Mar 11 14:03:00.060: INFO: stdout: "docker.io/library/nginx:1.14-alpine" Mar 11 14:03:00.060: INFO: e2e-test-nginx-rc-1b3259479e41f83b29f2d473cafcffdf-qwr6j is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522 Mar 11 14:03:00.060: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-1828' Mar 11 14:03:00.132: INFO: stderr: "" Mar 11 14:03:00.132: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 14:03:00.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1828" for this suite. 
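As the kubectl warnings captured above note, rolling-update is deprecated in favor of rollout. A sketch of the Deployment-based equivalent of "rolling-update to same image"; the deployment name is an illustrative assumption, and note that with Deployments an unchanged image alone is a no-op, so rollout restart (available from kubectl v1.15) is what forces a fresh rollout:

# Deployment equivalent of rolling-update to the same image.
kubectl create deployment nginx-demo --image=docker.io/library/nginx:1.14-alpine
kubectl rollout status deployment/nginx-demo
kubectl rollout restart deployment/nginx-demo   # fresh rollout of the same image
kubectl rollout status deployment/nginx-demo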
Mar 11 14:03:06.157: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 14:03:06.235: INFO: namespace kubectl-1828 deletion completed in 6.099052509s • [SLOW TEST:24.012 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 14:03:06.235: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service endpoint-test2 in namespace services-3242 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3242 to expose endpoints map[] Mar 11 14:03:06.323: INFO: Get endpoints failed (6.781446ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Mar 11 14:03:07.326: INFO: successfully validated that service endpoint-test2 in namespace services-3242 exposes endpoints map[] (1.010193677s elapsed) STEP: Creating pod pod1 in namespace services-3242 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3242 to expose endpoints map[pod1:[80]] Mar 11 14:03:09.368: INFO: successfully validated that service endpoint-test2 in namespace services-3242 exposes endpoints map[pod1:[80]] (2.035713655s elapsed) STEP: Creating pod pod2 in namespace services-3242 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3242 to expose endpoints map[pod1:[80] pod2:[80]] Mar 11 14:03:11.397: INFO: successfully validated that service endpoint-test2 in namespace services-3242 exposes endpoints map[pod1:[80] pod2:[80]] (2.025644148s elapsed) STEP: Deleting pod pod1 in namespace services-3242 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3242 to expose endpoints map[pod2:[80]] Mar 11 14:03:12.440: INFO: successfully validated that service endpoint-test2 in namespace services-3242 exposes endpoints map[pod2:[80]] (1.039623554s elapsed) STEP: Deleting pod pod2 in namespace services-3242 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3242 to expose endpoints map[] Mar 11 14:03:13.467: INFO: successfully validated that service endpoint-test2 in namespace services-3242 exposes endpoints map[] (1.022205861s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 14:03:13.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3242" for this suite. 
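The services spec above watches the service's Endpoints object fill and drain as matching pods come and go. A minimal hand-run sketch of the same mechanism; pod, service, and label names are illustrative assumptions:

# Endpoints track the ready pods selected by the service.
kubectl run pod1 --image=docker.io/library/nginx:1.14-alpine --restart=Never --labels=app=endpoint-demo
kubectl expose pod pod1 --name=endpoint-demo --port=80   # selector derived from the pod's labels
kubectl get endpoints endpoint-demo                      # one address once pod1 is ready
kubectl delete pod pod1
kubectl get endpoints endpoint-demo                      # the address set drains to empty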
Mar 11 14:03:19.540: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 14:03:19.624: INFO: namespace services-3242 deletion completed in 6.127014984s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:13.389 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 14:03:19.625: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-7858, will wait for the garbage collector to delete the pods Mar 11 14:03:21.740: INFO: Deleting Job.batch foo took: 5.299289ms Mar 11 14:03:22.040: INFO: Terminating Job.batch foo pods took: 300.270569ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 14:03:55.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-7858" for this suite. 
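The job spec above deletes the Job and then waits for the garbage collector to remove its pods, which is what accounts for most of the elapsed time. The same flow by hand; the job name and image are illustrative assumptions:

# Deleting a Job cascades to its pods via the garbage collector.
kubectl create job gc-demo --image=docker.io/library/busybox:1.29 -- sleep 300
kubectl get pods -l job-name=gc-demo    # the job controller labels its pods with job-name
kubectl delete job gc-demo
kubectl get pods -l job-name=gc-demo    # after GC: No resources found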
Mar 11 14:04:01.186: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 14:04:01.266: INFO: namespace job-7858 deletion completed in 6.119137153s • [SLOW TEST:41.642 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 14:04:01.267: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container Mar 11 14:04:03.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-2300ed00-998c-4f88-b854-d99fd0b1b0b9 -c busybox-main-container --namespace=emptydir-3736 -- cat /usr/share/volumeshare/shareddata.txt' Mar 11 14:04:03.527: INFO: stderr: "I0311 14:04:03.475688 3102 log.go:172] (0xc0009b6370) (0xc0008aa780) Create stream\nI0311 14:04:03.475729 3102 log.go:172] (0xc0009b6370) (0xc0008aa780) Stream added, broadcasting: 1\nI0311 14:04:03.477573 3102 log.go:172] (0xc0009b6370) Reply frame received for 1\nI0311 14:04:03.477598 3102 log.go:172] (0xc0009b6370) (0xc000988000) Create stream\nI0311 14:04:03.477609 3102 log.go:172] (0xc0009b6370) (0xc000988000) Stream added, broadcasting: 3\nI0311 14:04:03.478270 3102 log.go:172] (0xc0009b6370) Reply frame received for 3\nI0311 14:04:03.478287 3102 log.go:172] (0xc0009b6370) (0xc0009880a0) Create stream\nI0311 14:04:03.478293 3102 log.go:172] (0xc0009b6370) (0xc0009880a0) Stream added, broadcasting: 5\nI0311 14:04:03.478950 3102 log.go:172] (0xc0009b6370) Reply frame received for 5\nI0311 14:04:03.523610 3102 log.go:172] (0xc0009b6370) Data frame received for 5\nI0311 14:04:03.523634 3102 log.go:172] (0xc0009880a0) (5) Data frame handling\nI0311 14:04:03.523651 3102 log.go:172] (0xc0009b6370) Data frame received for 3\nI0311 14:04:03.523657 3102 log.go:172] (0xc000988000) (3) Data frame handling\nI0311 14:04:03.523665 3102 log.go:172] (0xc000988000) (3) Data frame sent\nI0311 14:04:03.523671 3102 log.go:172] (0xc0009b6370) Data frame received for 3\nI0311 14:04:03.523676 3102 log.go:172] (0xc000988000) (3) Data frame handling\nI0311 14:04:03.524513 3102 log.go:172] (0xc0009b6370) Data frame received for 1\nI0311 14:04:03.524528 3102 log.go:172] (0xc0008aa780) (1) Data frame handling\nI0311 14:04:03.524536 3102 log.go:172] (0xc0008aa780) (1) Data frame sent\nI0311 14:04:03.524564 3102 log.go:172] (0xc0009b6370) (0xc0008aa780) Stream removed, broadcasting: 1\nI0311 14:04:03.524576 3102 log.go:172] (0xc0009b6370) Go away received\nI0311 14:04:03.524786 3102 log.go:172] (0xc0009b6370) (0xc0008aa780) Stream removed, 
broadcasting: 1\nI0311 14:04:03.524804 3102 log.go:172] (0xc0009b6370) (0xc000988000) Stream removed, broadcasting: 3\nI0311 14:04:03.524810 3102 log.go:172] (0xc0009b6370) (0xc0009880a0) Stream removed, broadcasting: 5\n" Mar 11 14:04:03.527: INFO: stdout: "Hello from the busy-box sub-container\n" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 14:04:03.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3736" for this suite. Mar 11 14:04:09.548: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 14:04:09.658: INFO: namespace emptydir-3736 deletion completed in 6.127972593s • [SLOW TEST:8.391 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 14:04:09.658: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with configMap that has name projected-configmap-test-upd-4dec3b0d-8a7e-45ac-bb24-12f3be576910 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-4dec3b0d-8a7e-45ac-bb24-12f3be576910 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 14:04:13.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8895" for this suite. 
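The projected-configMap spec above updates the ConfigMap in place and waits for the change to show up in the mounted volume without restarting the pod. A minimal sketch of that round trip; names, image, and key are illustrative assumptions (the boolean --dry-run form matches the v1.15 client in this run; newer clients spell it --dry-run=client):

# Mount a ConfigMap through a projected volume, then update it in place.
kubectl create configmap live-update-demo --from-literal=key=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo
spec:
  containers:
  - name: watcher
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "while true; do cat /etc/projected/key; echo; sleep 5; done"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: live-update-demo
EOF
kubectl create configmap live-update-demo --from-literal=key=value-2 -o yaml --dry-run | kubectl replace -f -
# within the kubelet's sync period the pod's output flips from value-1 to value-2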
Mar 11 14:04:35.833: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 14:04:35.912: INFO: namespace projected-8895 deletion completed in 22.112912428s • [SLOW TEST:26.254 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 14:04:35.913: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 11 14:04:36.000: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e4763e96-740f-4c22-8f10-23b364789025" in namespace "projected-9288" to be "success or failure" Mar 11 14:04:36.008: INFO: Pod "downwardapi-volume-e4763e96-740f-4c22-8f10-23b364789025": Phase="Pending", Reason="", readiness=false. Elapsed: 7.594987ms Mar 11 14:04:38.010: INFO: Pod "downwardapi-volume-e4763e96-740f-4c22-8f10-23b364789025": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010425412s Mar 11 14:04:40.017: INFO: Pod "downwardapi-volume-e4763e96-740f-4c22-8f10-23b364789025": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016661856s STEP: Saw pod success Mar 11 14:04:40.017: INFO: Pod "downwardapi-volume-e4763e96-740f-4c22-8f10-23b364789025" satisfied condition "success or failure" Mar 11 14:04:40.019: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-e4763e96-740f-4c22-8f10-23b364789025 container client-container: STEP: delete the pod Mar 11 14:04:40.051: INFO: Waiting for pod downwardapi-volume-e4763e96-740f-4c22-8f10-23b364789025 to disappear Mar 11 14:04:40.067: INFO: Pod downwardapi-volume-e4763e96-740f-4c22-8f10-23b364789025 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 14:04:40.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9288" for this suite. 
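The downwardAPI spec above surfaces the container's CPU limit as a file via resourceFieldRef. A minimal sketch of an equivalent manifest; the pod name, image, and 500m limit are illustrative assumptions:

# Expose limits.cpu through a downwardAPI volume file.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-cpu-demo
spec:
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "500m"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
          divisor: 1m            # prints 500 for a 500m limit
  restartPolicy: Never
EOF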
Mar 11 14:04:46.081: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 14:04:46.153: INFO: namespace projected-9288 deletion completed in 6.083828623s • [SLOW TEST:10.241 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 14:04:46.153: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 14:04:50.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2863" for this suite. 
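The kubelet spec above asserts that a container whose command always fails ends up with a populated terminated reason in its status. The same field can be read directly; pod name and image are illustrative assumptions:

# Inspect the terminated reason of a container that exits non-zero.
kubectl run fail-demo --image=docker.io/library/busybox:1.29 --restart=Never -- sh -c 'exit 1'
sleep 10    # give the kubelet a moment to run the container and record the failure
kubectl get pod fail-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}'   # prints: Error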
Mar 11 14:04:56.274: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 14:04:56.346: INFO: namespace kubelet-test-2863 deletion completed in 6.089652539s • [SLOW TEST:10.193 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 14:04:56.347: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Mar 11 14:04:56.440: INFO: Waiting up to 5m0s for pod "downward-api-9b5a6e71-02d8-44a4-84a9-beb44b73c985" in namespace "downward-api-5765" to be "success or failure" Mar 11 14:04:56.457: INFO: Pod "downward-api-9b5a6e71-02d8-44a4-84a9-beb44b73c985": Phase="Pending", Reason="", readiness=false. Elapsed: 17.797405ms Mar 11 14:04:58.460: INFO: Pod "downward-api-9b5a6e71-02d8-44a4-84a9-beb44b73c985": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.02061993s STEP: Saw pod success Mar 11 14:04:58.460: INFO: Pod "downward-api-9b5a6e71-02d8-44a4-84a9-beb44b73c985" satisfied condition "success or failure" Mar 11 14:04:58.462: INFO: Trying to get logs from node iruya-worker2 pod downward-api-9b5a6e71-02d8-44a4-84a9-beb44b73c985 container dapi-container: STEP: delete the pod Mar 11 14:04:58.492: INFO: Waiting for pod downward-api-9b5a6e71-02d8-44a4-84a9-beb44b73c985 to disappear Mar 11 14:04:58.498: INFO: Pod downward-api-9b5a6e71-02d8-44a4-84a9-beb44b73c985 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 14:04:58.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5765" for this suite. 
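The downward API spec above checks that when a container declares no limits, resourceFieldRef env vars fall back to the node's allocatable capacity. A minimal sketch; pod name, image, and variable names are illustrative assumptions:

# With no explicit limits, these default to node allocatable values.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-default-demo
spec:
  containers:
  - name: dapi-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "env | grep -E 'CPU_LIMIT|MEMORY_LIMIT'"]
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu        # no limit set -> node allocatable CPU
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory     # no limit set -> node allocatable memory
  restartPolicy: Never
EOF
kubectl logs downward-default-demo    # once completed, prints both values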
Mar 11 14:05:04.514: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 14:05:04.590: INFO: namespace downward-api-5765 deletion completed in 6.089599966s • [SLOW TEST:8.243 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 14:05:04.590: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap configmap-8831/configmap-test-92321b18-a4a5-4d1a-b21a-72282b3183f2 STEP: Creating a pod to test consume configMaps Mar 11 14:05:04.669: INFO: Waiting up to 5m0s for pod "pod-configmaps-a927054a-c934-4b94-825e-c477469e6b63" in namespace "configmap-8831" to be "success or failure" Mar 11 14:05:04.673: INFO: Pod "pod-configmaps-a927054a-c934-4b94-825e-c477469e6b63": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051021ms Mar 11 14:05:06.677: INFO: Pod "pod-configmaps-a927054a-c934-4b94-825e-c477469e6b63": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007859273s STEP: Saw pod success Mar 11 14:05:06.677: INFO: Pod "pod-configmaps-a927054a-c934-4b94-825e-c477469e6b63" satisfied condition "success or failure" Mar 11 14:05:06.679: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-a927054a-c934-4b94-825e-c477469e6b63 container env-test: STEP: delete the pod Mar 11 14:05:06.698: INFO: Waiting for pod pod-configmaps-a927054a-c934-4b94-825e-c477469e6b63 to disappear Mar 11 14:05:06.703: INFO: Pod pod-configmaps-a927054a-c934-4b94-825e-c477469e6b63 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 14:05:06.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8831" for this suite. 
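The ConfigMap spec above is the environment-variable flavor: a key from a ConfigMap is surfaced through configMapKeyRef rather than a volume mount. A hand-rolled equivalent; the ConfigMap name, key, and pod name are illustrative, env-test is the container name from the run:

kubectl create configmap configmap-demo --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: configmap-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "echo CONFIG_DATA_1=$CONFIG_DATA_1"]
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: configmap-demo
          key: data-1
EOF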
Mar 11 14:05:12.735: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 14:05:12.811: INFO: namespace configmap-8831 deletion completed in 6.105277904s • [SLOW TEST:8.221 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 14:05:12.813: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating pod Mar 11 14:05:14.898: INFO: Pod pod-hostip-f209679e-1039-475c-99af-3fddba182318 has hostIP: 172.17.0.6 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 14:05:14.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4398" for this suite. 
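The Pods spec above only verifies that status.hostIP gets populated once the pod is scheduled (172.17.0.6 here is a kind node address). The same check from the command line; the pod name is illustrative and the pause image is the one this run uses elsewhere:

kubectl run hostip-demo --image=k8s.gcr.io/pause:3.1 --restart=Never
# Once the pod is scheduled, hostIP is the address of its node:
kubectl get pod hostip-demo -o jsonpath='{.status.hostIP}'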
Mar 11 14:05:36.936: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 14:05:37.019: INFO: namespace pods-4398 deletion completed in 22.117989558s • [SLOW TEST:24.207 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 14:05:37.020: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-secret-5vmk STEP: Creating a pod to test atomic-volume-subpath Mar 11 14:05:37.143: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-5vmk" in namespace "subpath-3701" to be "success or failure" Mar 11 14:05:37.175: INFO: Pod "pod-subpath-test-secret-5vmk": Phase="Pending", Reason="", readiness=false. Elapsed: 31.938055ms Mar 11 14:05:39.179: INFO: Pod "pod-subpath-test-secret-5vmk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035398776s Mar 11 14:05:41.188: INFO: Pod "pod-subpath-test-secret-5vmk": Phase="Running", Reason="", readiness=true. Elapsed: 4.044892657s Mar 11 14:05:43.192: INFO: Pod "pod-subpath-test-secret-5vmk": Phase="Running", Reason="", readiness=true. Elapsed: 6.048303154s Mar 11 14:05:45.195: INFO: Pod "pod-subpath-test-secret-5vmk": Phase="Running", Reason="", readiness=true. Elapsed: 8.052137145s Mar 11 14:05:47.209: INFO: Pod "pod-subpath-test-secret-5vmk": Phase="Running", Reason="", readiness=true. Elapsed: 10.065677262s Mar 11 14:05:49.213: INFO: Pod "pod-subpath-test-secret-5vmk": Phase="Running", Reason="", readiness=true. Elapsed: 12.070069902s Mar 11 14:05:51.217: INFO: Pod "pod-subpath-test-secret-5vmk": Phase="Running", Reason="", readiness=true. Elapsed: 14.074091125s Mar 11 14:05:53.221: INFO: Pod "pod-subpath-test-secret-5vmk": Phase="Running", Reason="", readiness=true. Elapsed: 16.077756712s Mar 11 14:05:55.224: INFO: Pod "pod-subpath-test-secret-5vmk": Phase="Running", Reason="", readiness=true. Elapsed: 18.08121406s Mar 11 14:05:57.228: INFO: Pod "pod-subpath-test-secret-5vmk": Phase="Running", Reason="", readiness=true. Elapsed: 20.084911393s Mar 11 14:05:59.232: INFO: Pod "pod-subpath-test-secret-5vmk": Phase="Running", Reason="", readiness=true. Elapsed: 22.088847473s Mar 11 14:06:01.236: INFO: Pod "pod-subpath-test-secret-5vmk": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.092571773s STEP: Saw pod success Mar 11 14:06:01.236: INFO: Pod "pod-subpath-test-secret-5vmk" satisfied condition "success or failure" Mar 11 14:06:01.238: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-secret-5vmk container test-container-subpath-secret-5vmk: STEP: delete the pod Mar 11 14:06:01.276: INFO: Waiting for pod pod-subpath-test-secret-5vmk to disappear Mar 11 14:06:01.284: INFO: Pod pod-subpath-test-secret-5vmk no longer exists STEP: Deleting pod pod-subpath-test-secret-5vmk Mar 11 14:06:01.284: INFO: Deleting pod "pod-subpath-test-secret-5vmk" in namespace "subpath-3701" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 14:06:01.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3701" for this suite. Mar 11 14:06:07.300: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 14:06:07.368: INFO: namespace subpath-3701 deletion completed in 6.07858446s • [SLOW TEST:30.348 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 14:06:07.368: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 14:06:09.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9081" for this suite. 
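The Subpath spec above (pod-subpath-test-secret-5vmk) mounts a single secret key through subPath and keeps re-reading it for roughly 24 seconds while the atomic writer rewrites the volume underneath, which is what the long run of Running polls reflects. A reduced sketch of the mount shape; secret name, key, and paths are illustrative:

kubectl create secret generic subpath-demo-secret --from-literal=secret-key=secret-value
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: subpath-secret-demo
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /mnt/secret-key"]
    volumeMounts:
    - name: sec
      mountPath: /mnt/secret-key
      subPath: secret-key
  volumes:
  - name: sec
    secret:
      secretName: subpath-demo-secret
EOF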
Mar 11 14:06:47.510: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 14:06:47.595: INFO: namespace kubelet-test-9081 deletion completed in 38.101146176s • [SLOW TEST:40.227 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 14:06:47.596: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 11 14:06:47.656: INFO: (0) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/ pods/ (200; 5.273943ms)
Mar 11 14:06:47.659: INFO: (1) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.442066ms)
Mar 11 14:06:47.663: INFO: (2) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.04149ms)
Mar 11 14:06:47.666: INFO: (3) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.065023ms)
Mar 11 14:06:47.668: INFO: (4) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.732104ms)
Mar 11 14:06:47.671: INFO: (5) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.884945ms)
Mar 11 14:06:47.674: INFO: (6) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.935277ms)
Mar 11 14:06:47.677: INFO: (7) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.696828ms)
Mar 11 14:06:47.679: INFO: (8) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.297122ms)
Mar 11 14:06:47.682: INFO: (9) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.613379ms)
Mar 11 14:06:47.684: INFO: (10) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.238337ms)
Mar 11 14:06:47.690: INFO: (11) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 5.721717ms)
Mar 11 14:06:47.694: INFO: (12) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.993171ms)
Mar 11 14:06:47.697: INFO: (13) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.81769ms)
Mar 11 14:06:47.699: INFO: (14) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.637055ms)
Mar 11 14:06:47.725: INFO: (15) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 25.117053ms)
Mar 11 14:06:47.727: INFO: (16) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.773142ms)
Mar 11 14:06:47.730: INFO: (17) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.030086ms)
Mar 11 14:06:47.733: INFO: (18) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.696849ms)
Mar 11 14:06:47.736: INFO: (19) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/
(200; 2.652993ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 14:06:47.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-7052" for this suite. Mar 11 14:06:53.749: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 14:06:53.821: INFO: namespace proxy-7052 deletion completed in 6.081836207s • [SLOW TEST:6.225 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 14:06:53.821: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium Mar 11 14:06:53.879: INFO: Waiting up to 5m0s for pod "pod-2557377c-db76-48dd-9884-bfa6a73300f4" in namespace "emptydir-9857" to be "success or failure" Mar 11 14:06:53.883: INFO: Pod "pod-2557377c-db76-48dd-9884-bfa6a73300f4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.450365ms Mar 11 14:06:55.887: INFO: Pod "pod-2557377c-db76-48dd-9884-bfa6a73300f4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008343856s STEP: Saw pod success Mar 11 14:06:55.887: INFO: Pod "pod-2557377c-db76-48dd-9884-bfa6a73300f4" satisfied condition "success or failure" Mar 11 14:06:55.890: INFO: Trying to get logs from node iruya-worker2 pod pod-2557377c-db76-48dd-9884-bfa6a73300f4 container test-container: STEP: delete the pod Mar 11 14:06:55.920: INFO: Waiting for pod pod-2557377c-db76-48dd-9884-bfa6a73300f4 to disappear Mar 11 14:06:55.926: INFO: Pod pod-2557377c-db76-48dd-9884-bfa6a73300f4 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 14:06:55.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9857" for this suite. 
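The twenty numbered GETs in the Proxy spec above all travel through the apiserver's node proxy subresource, with the kubelet's port (10250) spelled out in the node path; the response each time is the kubelet's log directory listing (containers/, pods/). The same request can be issued ad hoc; the node name is the one from this cluster:

kubectl get --raw "/api/v1/nodes/iruya-worker:10250/proxy/logs/"
# returns the kubelet log directory listing, e.g. containers/ and pods/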
Mar 11 14:07:01.940: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 14:07:02.019: INFO: namespace emptydir-9857 deletion completed in 6.09008024s • [SLOW TEST:8.198 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 14:07:02.019: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Mar 11 14:07:02.072: INFO: PodSpec: initContainers in spec.initContainers Mar 11 14:07:42.393: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-bd192039-8fb0-4ac9-a1d8-eaa331eb9ceb", GenerateName:"", Namespace:"init-container-7328", SelfLink:"/api/v1/namespaces/init-container-7328/pods/pod-init-bd192039-8fb0-4ac9-a1d8-eaa331eb9ceb", UID:"3c89820c-b59e-4623-9a07-668374f451dc", ResourceVersion:"556857", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63719532422, loc:(*time.Location)(0x7ea78c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"72062108"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-m7hdw", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0022eb3c0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), 
Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-m7hdw", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-m7hdw", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-m7hdw", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", 
SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002778378), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002ab3200), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002778400)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002778420)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002778428), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00277842c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719532422, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719532422, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719532422, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719532422, loc:(*time.Location)(0x7ea78c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.7", PodIP:"10.244.2.74", StartTime:(*v1.Time)(0xc001f697e0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0003c7880)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0003c78f0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://c08eacf67b1c5a2618af0dcec83a0be2e5c19b5a2952e2d117607ff29e8dac1b"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001f69820), Running:(*v1.ContainerStateRunning)(nil), 
Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001f69800), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 14:07:42.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7328" for this suite. Mar 11 14:08:04.636: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 14:08:04.714: INFO: namespace init-container-7328 deletion completed in 22.13122697s • [SLOW TEST:62.695 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 14:08:04.714: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Mar 11 14:08:04.795: INFO: Pod name pod-release: Found 0 pods out of 1 Mar 11 14:08:09.799: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 14:08:10.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-1363" for this suite. 
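Two specs finish above, and both are worth unpacking. The InitContainer pod dump spells out the exact shape under test: init1 runs /bin/false, init2 runs /bin/true, the app container run1 is pause:3.1, and restartPolicy is Always, so init1 keeps restarting (RestartCount:3 in the dump) while init2 and run1 stay Waiting and the pod never leaves Pending. Reduced to a manifest, with only the pod name invented:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: init-fail-demo
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ["/bin/false"]
  - name: init2
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.1
EOF

The ReplicationController spec after it shows the release behavior: when a matched pod's label is overwritten, the controller orphans that pod and creates a replacement to restore the replica count. A sketch, with illustrative names:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-release-demo
spec:
  replicas: 1
  selector:
    name: pod-release-demo
  template:
    metadata:
      labels:
        name: pod-release-demo
    spec:
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1
EOF
POD=$(kubectl get pods -l name=pod-release-demo -o jsonpath='{.items[0].metadata.name}')
# Overwriting the matched label releases the pod from the controller,
# which then spawns a new pod to get back to replicas=1:
kubectl label pod "$POD" name=released --overwrite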
Mar 11 14:08:16.882: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 14:08:16.954: INFO: namespace replication-controller-1363 deletion completed in 6.088243967s • [SLOW TEST:12.240 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 14:08:16.955: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Mar 11 14:08:17.019: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-4345' Mar 11 14:08:17.115: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 11 14:08:17.115: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617 Mar 11 14:08:17.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-4345' Mar 11 14:08:17.229: INFO: stderr: "" Mar 11 14:08:17.229: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 14:08:17.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4345" for this suite. 
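kubectl itself flags the --generator=job/v1 form as deprecated in the stderr captured above. The non-deprecated route it points at is kubectl create; assuming a client new enough to carry the create job subcommand, the equivalent of what the test runs and then cleans up is:

kubectl create job e2e-test-nginx-job --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-4345
kubectl delete jobs e2e-test-nginx-job --namespace=kubectl-4345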
Mar 11 14:08:23.241: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 14:08:23.303: INFO: namespace kubectl-4345 deletion completed in 6.070992462s • [SLOW TEST:6.348 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 14:08:23.303: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs Mar 11 14:08:23.355: INFO: Waiting up to 5m0s for pod "pod-436643f6-a023-4b9e-8ea3-df948478382f" in namespace "emptydir-6030" to be "success or failure" Mar 11 14:08:23.365: INFO: Pod "pod-436643f6-a023-4b9e-8ea3-df948478382f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.044524ms Mar 11 14:08:25.369: INFO: Pod "pod-436643f6-a023-4b9e-8ea3-df948478382f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013573586s Mar 11 14:08:27.372: INFO: Pod "pod-436643f6-a023-4b9e-8ea3-df948478382f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017178376s STEP: Saw pod success Mar 11 14:08:27.372: INFO: Pod "pod-436643f6-a023-4b9e-8ea3-df948478382f" satisfied condition "success or failure" Mar 11 14:08:27.390: INFO: Trying to get logs from node iruya-worker pod pod-436643f6-a023-4b9e-8ea3-df948478382f container test-container: STEP: delete the pod Mar 11 14:08:27.426: INFO: Waiting for pod pod-436643f6-a023-4b9e-8ea3-df948478382f to disappear Mar 11 14:08:27.436: INFO: Pod pod-436643f6-a023-4b9e-8ea3-df948478382f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 14:08:27.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6030" for this suite. 
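In the EmptyDir naming scheme, (non-root,0777,tmpfs) decodes as: run the pod as a non-root UID, expect 0777 permissions on the mount, and back the volume with memory instead of node disk. The corresponding manifest shape; the UID and names are illustrative:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "ls -ld /test-volume && touch /test-volume/ok"]
    volumeMounts:
    - name: vol
      mountPath: /test-volume
  volumes:
  - name: vol
    emptyDir:
      medium: Memory
EOF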
Mar 11 14:08:33.451: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 14:08:33.525: INFO: namespace emptydir-6030 deletion completed in 6.085556945s • [SLOW TEST:10.222 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 14:08:33.525: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Mar 11 14:08:33.584: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 11 14:08:33.590: INFO: Waiting for terminating namespaces to be deleted... Mar 11 14:08:33.592: INFO: Logging pods the kubelet thinks is on node iruya-worker before test Mar 11 14:08:33.596: INFO: kindnet-9jdkr from kube-system started at 2020-03-08 14:39:47 +0000 UTC (1 container statuses recorded) Mar 11 14:08:33.596: INFO: Container kindnet-cni ready: true, restart count 0 Mar 11 14:08:33.596: INFO: kube-proxy-nf96r from kube-system started at 2020-03-08 14:39:47 +0000 UTC (1 container statuses recorded) Mar 11 14:08:33.596: INFO: Container kube-proxy ready: true, restart count 0 Mar 11 14:08:33.596: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test Mar 11 14:08:33.600: INFO: kindnet-d7zdc from kube-system started at 2020-03-08 14:39:47 +0000 UTC (1 container statuses recorded) Mar 11 14:08:33.600: INFO: Container kindnet-cni ready: true, restart count 0 Mar 11 14:08:33.600: INFO: kube-proxy-clpmt from kube-system started at 2020-03-08 14:39:47 +0000 UTC (1 container statuses recorded) Mar 11 14:08:33.600: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-ea101e7a-1854-4702-bae6-13a1240a6ef1 42 STEP: Trying to relaunch the pod, now with labels. 
STEP: removing the label kubernetes.io/e2e-ea101e7a-1854-4702-bae6-13a1240a6ef1 off the node iruya-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-ea101e7a-1854-4702-bae6-13a1240a6ef1 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 14:08:37.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2111" for this suite. Mar 11 14:08:45.763: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 14:08:45.830: INFO: namespace sched-pred-2111 deletion completed in 8.084360348s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:12.305 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 14:08:45.830: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Mar 11 14:08:48.484: INFO: Successfully updated pod "annotationupdate320f8c53-0f39-4edb-a445-907dc99d80bb" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 14:08:50.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3488" for this suite. 
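The SchedulerPredicates spec above first launches an unlabeled pod to find a schedulable node, applies a throwaway label to that node (iruya-worker2, value 42), relaunches the pod with a matching nodeSelector, and strips the label in teardown. The same sequence by hand; the label key, value, and pod name are illustrative:

kubectl label node iruya-worker2 example.com/e2e-demo=42
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: nodeselector-demo
spec:
  nodeSelector:
    example.com/e2e-demo: "42"
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
EOF
# A trailing dash removes the label again, mirroring the test's teardown:
kubectl label node iruya-worker2 example.com/e2e-demo-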
Mar 11 14:09:12.543: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 14:09:12.622: INFO: namespace downward-api-3488 deletion completed in 22.111087567s • [SLOW TEST:26.791 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 14:09:12.622: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-9e8ebc03-2fb0-415d-afeb-0991e0be8a8f STEP: Creating a pod to test consume secrets Mar 11 14:09:12.693: INFO: Waiting up to 5m0s for pod "pod-secrets-12c60c65-2068-4dfe-942e-7901237649b1" in namespace "secrets-206" to be "success or failure" Mar 11 14:09:12.696: INFO: Pod "pod-secrets-12c60c65-2068-4dfe-942e-7901237649b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.619051ms Mar 11 14:09:14.699: INFO: Pod "pod-secrets-12c60c65-2068-4dfe-942e-7901237649b1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006308596s STEP: Saw pod success Mar 11 14:09:14.699: INFO: Pod "pod-secrets-12c60c65-2068-4dfe-942e-7901237649b1" satisfied condition "success or failure" Mar 11 14:09:14.702: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-12c60c65-2068-4dfe-942e-7901237649b1 container secret-volume-test: STEP: delete the pod Mar 11 14:09:14.731: INFO: Waiting for pod pod-secrets-12c60c65-2068-4dfe-942e-7901237649b1 to disappear Mar 11 14:09:14.737: INFO: Pod pod-secrets-12c60c65-2068-4dfe-942e-7901237649b1 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 14:09:14.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-206" for this suite. 
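The Secrets spec above is the "mappings and Item Mode" flavor: an items list remaps a secret key to a new file path inside the volume and pins that file's mode. A sketch; the names are illustrative, and mode 0400 should surface as -r-------- in the container listing:

kubectl create secret generic secret-map-demo --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: secret-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-map-demo
      items:
      - key: data-1
        path: new-path-data-1
        mode: 0400
EOF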
Mar 11 14:09:20.751: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 14:09:20.819: INFO: namespace secrets-206 deletion completed in 6.07829795s • [SLOW TEST:8.197 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 14:09:20.819: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating replication controller my-hostname-basic-bb05b011-6eca-4428-8254-bb04903cfd8b Mar 11 14:09:20.929: INFO: Pod name my-hostname-basic-bb05b011-6eca-4428-8254-bb04903cfd8b: Found 0 pods out of 1 Mar 11 14:09:25.935: INFO: Pod name my-hostname-basic-bb05b011-6eca-4428-8254-bb04903cfd8b: Found 1 pods out of 1 Mar 11 14:09:25.935: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-bb05b011-6eca-4428-8254-bb04903cfd8b" are running Mar 11 14:09:25.937: INFO: Pod "my-hostname-basic-bb05b011-6eca-4428-8254-bb04903cfd8b-ztzs9" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-11 14:09:20 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-11 14:09:22 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-11 14:09:22 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-11 14:09:20 +0000 UTC Reason: Message:}]) Mar 11 14:09:25.937: INFO: Trying to dial the pod Mar 11 14:09:30.949: INFO: Controller my-hostname-basic-bb05b011-6eca-4428-8254-bb04903cfd8b: Got expected result from replica 1 [my-hostname-basic-bb05b011-6eca-4428-8254-bb04903cfd8b-ztzs9]: "my-hostname-basic-bb05b011-6eca-4428-8254-bb04903cfd8b-ztzs9", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 14:09:30.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-6576" for this suite. 
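The ReplicationController spec above stands up one replica that answers HTTP with its own pod name, waits for it to run, and dials it until it echoes the expected hostname. A sketch of that controller; the serve-hostname image tag and port 9376 are the upstream e2e defaults as best I can tell, so treat both as assumptions:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-hostname-demo
spec:
  replicas: 1
  selector:
    name: my-hostname-demo
  template:
    metadata:
      labels:
        name: my-hostname-demo
    spec:
      containers:
      - name: serve-hostname
        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1
        ports:
        - containerPort: 9376
EOF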
Mar 11 14:09:36.973: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 14:09:37.049: INFO: namespace replication-controller-6576 deletion completed in 6.096444601s • [SLOW TEST:16.230 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 14:09:37.049: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Mar 11 14:09:37.098: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-5884' Mar 11 14:09:37.198: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 11 14:09:37.198: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created STEP: confirm that you can get logs from an rc Mar 11 14:09:37.223: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-bh2wd] Mar 11 14:09:37.223: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-bh2wd" in namespace "kubectl-5884" to be "running and ready" Mar 11 14:09:37.225: INFO: Pod "e2e-test-nginx-rc-bh2wd": Phase="Pending", Reason="", readiness=false. Elapsed: 1.947131ms Mar 11 14:09:39.229: INFO: Pod "e2e-test-nginx-rc-bh2wd": Phase="Running", Reason="", readiness=true. Elapsed: 2.006138866s Mar 11 14:09:39.229: INFO: Pod "e2e-test-nginx-rc-bh2wd" satisfied condition "running and ready" Mar 11 14:09:39.229: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [e2e-test-nginx-rc-bh2wd] Mar 11 14:09:39.229: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-5884' Mar 11 14:09:39.357: INFO: stderr: "" Mar 11 14:09:39.357: INFO: stdout: "" [AfterEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461 Mar 11 14:09:39.357: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-5884' Mar 11 14:09:39.465: INFO: stderr: "" Mar 11 14:09:39.465: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 14:09:39.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5884" for this suite. Mar 11 14:09:45.478: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 14:09:45.563: INFO: namespace kubectl-5884 deletion completed in 6.095018435s • [SLOW TEST:8.514 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 14:09:45.563: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292 STEP: creating an rc Mar 11 14:09:45.629: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3204' Mar 11 14:09:45.924: INFO: stderr: "" Mar 11 14:09:45.924: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Waiting for Redis master to start. Mar 11 14:09:46.928: INFO: Selector matched 1 pods for map[app:redis] Mar 11 14:09:46.928: INFO: Found 0 / 1 Mar 11 14:09:47.929: INFO: Selector matched 1 pods for map[app:redis] Mar 11 14:09:47.929: INFO: Found 1 / 1 Mar 11 14:09:47.929: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Mar 11 14:09:47.932: INFO: Selector matched 1 pods for map[app:redis] Mar 11 14:09:47.932: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
STEP: checking for a matching strings Mar 11 14:09:47.932: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-t2bv7 redis-master --namespace=kubectl-3204' Mar 11 14:09:48.060: INFO: stderr: "" Mar 11 14:09:48.060: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 11 Mar 14:09:47.067 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 11 Mar 14:09:47.068 # Server started, Redis version 3.2.12\n1:M 11 Mar 14:09:47.068 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 11 Mar 14:09:47.068 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines Mar 11 14:09:48.060: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-t2bv7 redis-master --namespace=kubectl-3204 --tail=1' Mar 11 14:09:48.150: INFO: stderr: "" Mar 11 14:09:48.150: INFO: stdout: "1:M 11 Mar 14:09:47.068 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes Mar 11 14:09:48.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-t2bv7 redis-master --namespace=kubectl-3204 --limit-bytes=1' Mar 11 14:09:48.248: INFO: stderr: "" Mar 11 14:09:48.248: INFO: stdout: " " STEP: exposing timestamps Mar 11 14:09:48.248: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-t2bv7 redis-master --namespace=kubectl-3204 --tail=1 --timestamps' Mar 11 14:09:48.319: INFO: stderr: "" Mar 11 14:09:48.319: INFO: stdout: "2020-03-11T14:09:47.068390868Z 1:M 11 Mar 14:09:47.068 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range Mar 11 14:09:50.820: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-t2bv7 redis-master --namespace=kubectl-3204 --since=1s' Mar 11 14:09:50.945: INFO: stderr: "" Mar 11 14:09:50.945: INFO: stdout: "" Mar 11 14:09:50.945: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-t2bv7 redis-master --namespace=kubectl-3204 --since=24h' Mar 11 14:09:51.049: INFO: stderr: "" Mar 11 14:09:51.049: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 11 Mar 14:09:47.067 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 11 Mar 14:09:47.068 # Server started, Redis version 3.2.12\n1:M 11 Mar 14:09:47.068 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 11 Mar 14:09:47.068 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 STEP: using delete to clean up resources Mar 11 14:09:51.049: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3204' Mar 11 14:09:51.166: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 11 14:09:51.166: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" Mar 11 14:09:51.166: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-3204' Mar 11 14:09:51.246: INFO: stderr: "No resources found.\n" Mar 11 14:09:51.246: INFO: stdout: "" Mar 11 14:09:51.246: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-3204 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 11 14:09:51.314: INFO: stderr: "" Mar 11 14:09:51.314: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 14:09:51.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3204" for this suite. 
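The "Kubectl logs" spec above walks through four log-filtering flags in order: --tail, --limit-bytes, --timestamps, and --since. Reproduced in isolation they look like this (POD and CONTAINER are placeholders for the redis-master pod and container of this run):

    kubectl logs POD CONTAINER --tail=1               # last line only
    kubectl logs POD CONTAINER --limit-bytes=1        # first byte only
    kubectl logs POD CONTAINER --tail=1 --timestamps  # prefix lines with RFC3339 timestamps
    kubectl logs POD CONTAINER --since=1s             # empty when nothing was logged in the last second
    kubectl logs POD CONTAINER --since=24h            # the full day, here the whole startup banner again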
Mar 11 14:10:13.326: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 14:10:13.436: INFO: namespace kubectl-3204 deletion completed in 22.119653997s • [SLOW TEST:27.873 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 14:10:13.436: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Mar 11 14:10:13.484: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-2191' Mar 11 14:10:13.563: INFO: stderr: "" Mar 11 14:10:13.563: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created Mar 11 14:10:18.613: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-2191 -o json' Mar 11 14:10:18.727: INFO: stderr: "" Mar 11 14:10:18.727: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-03-11T14:10:13Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"kubectl-2191\",\n \"resourceVersion\": \"557468\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-2191/pods/e2e-test-nginx-pod\",\n \"uid\": \"e49c60d4-18f6-45af-b346-eb8350a88fe6\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-6nscv\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"iruya-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": 
{},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-6nscv\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-6nscv\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-11T14:10:13Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-11T14:10:15Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-11T14:10:15Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-11T14:10:13Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://aa7d1ab641ce7eeb35192a1f4b75090139a7e0ceba5bb776e133c204f35303e4\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-03-11T14:10:14Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.7\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.82\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-03-11T14:10:13Z\"\n }\n}\n" STEP: replace the image in the pod Mar 11 14:10:18.727: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-2191' Mar 11 14:10:18.996: INFO: stderr: "" Mar 11 14:10:18.996: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726 Mar 11 14:10:19.015: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-2191' Mar 11 14:10:20.653: INFO: stderr: "" Mar 11 14:10:20.653: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 14:10:20.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2191" for this suite. 
Mar 11 14:10:26.665: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 14:10:26.746: INFO: namespace kubectl-2191 deletion completed in 6.088912272s • [SLOW TEST:13.309 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 14:10:26.746: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Mar 11 14:10:26.776: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Registering the sample API server. Mar 11 14:10:27.416: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Mar 11 14:10:29.537: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719532627, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719532627, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719532627, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719532627, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 11 14:10:32.166: INFO: Waited 618.256941ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 14:10:32.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-9278" for this suite. 
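The Aggregator spec registers a sample extension apiserver by creating an APIService object that routes one group/version to an in-cluster Service. A heavily abbreviated sketch of such a registration follows; the group, version, and service coordinates are placeholders rather than the exact values this run used, and insecureSkipTLSVerify stands in for a real caBundle:

    kubectl apply -f - <<'EOF'
    apiVersion: apiregistration.k8s.io/v1
    kind: APIService
    metadata:
      name: v1alpha1.wardle.k8s.io      # must be <version>.<group>
    spec:
      group: wardle.k8s.io
      version: v1alpha1
      service:
        name: sample-api                # Service fronting the extension apiserver
        namespace: default
      insecureSkipTLSVerify: true       # sketch only; a real setup supplies caBundle
      groupPriorityMinimum: 2000
      versionPriority: 200
    EOF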
Mar 11 14:10:38.711: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 14:10:38.780: INFO: namespace aggregator-9278 deletion completed in 6.16954505s • [SLOW TEST:12.034 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 14:10:38.780: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210 STEP: creating the pod Mar 11 14:10:38.857: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2644' Mar 11 14:10:39.209: INFO: stderr: "" Mar 11 14:10:39.210: INFO: stdout: "pod/pause created\n" Mar 11 14:10:39.210: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Mar 11 14:10:39.210: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-2644" to be "running and ready" Mar 11 14:10:39.213: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 3.040017ms Mar 11 14:10:41.216: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 2.006578148s Mar 11 14:10:41.216: INFO: Pod "pause" satisfied condition "running and ready" Mar 11 14:10:41.216: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: adding the label testing-label with value testing-label-value to a pod Mar 11 14:10:41.216: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-2644' Mar 11 14:10:41.308: INFO: stderr: "" Mar 11 14:10:41.308: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Mar 11 14:10:41.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-2644' Mar 11 14:10:41.395: INFO: stderr: "" Mar 11 14:10:41.395: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 2s testing-label-value\n" STEP: removing the label testing-label of a pod Mar 11 14:10:41.395: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-2644' Mar 11 14:10:41.471: INFO: stderr: "" Mar 11 14:10:41.471: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Mar 11 14:10:41.471: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-2644' Mar 11 14:10:41.534: INFO: stderr: "" Mar 11 14:10:41.534: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 2s \n" [AfterEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217 STEP: using delete to clean up resources Mar 11 14:10:41.534: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2644' Mar 11 14:10:41.652: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 11 14:10:41.652: INFO: stdout: "pod \"pause\" force deleted\n" Mar 11 14:10:41.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-2644' Mar 11 14:10:41.767: INFO: stderr: "No resources found.\n" Mar 11 14:10:41.767: INFO: stdout: "" Mar 11 14:10:41.767: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-2644 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 11 14:10:41.849: INFO: stderr: "" Mar 11 14:10:41.849: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 14:10:41.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2644" for this suite. 
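The "Kubectl label" spec above is three invocations: add a label, read it back as a column, remove it with the trailing-dash form. Stand-alone, with the pod name from the spec:

    kubectl label pod pause testing-label=testing-label-value
    kubectl get pod pause -L testing-label    # -L shows the label value as an extra column
    kubectl label pod pause testing-label-    # a trailing dash deletes the label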
Mar 11 14:10:47.863: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 14:10:47.949: INFO: namespace kubectl-2644 deletion completed in 6.097244409s • [SLOW TEST:9.169 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 14:10:47.950: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-upd-24bcc6c8-969c-42e0-b05a-6b233beea0db STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 14:10:50.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1488" for this suite. 
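The ConfigMap spec above stores a UTF-8 key under data and a base64 key under binaryData in the same object, then mounts both as files. A minimal manifest of that shape (names and contents illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: bin-demo
    data:
      text: "hello"
    binaryData:
      blob: 3q2+7w==    # base64 of the raw bytes 0xDE 0xAD 0xBE 0xEF
    EOF

When the ConfigMap is mounted as a volume, both keys surface as files, which is what the "Waiting for pod with text data" / "Waiting for pod with binary data" steps assert.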
Mar 11 14:11:12.126: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 14:11:12.197: INFO: namespace configmap-1488 deletion completed in 22.120005731s • [SLOW TEST:24.247 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 14:11:12.197: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Mar 11 14:11:20.320: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6401 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 11 14:11:20.320: INFO: >>> kubeConfig: /root/.kube/config I0311 14:11:20.349372 6 log.go:172] (0xc0037a4dc0) (0xc0029dc8c0) Create stream I0311 14:11:20.349412 6 log.go:172] (0xc0037a4dc0) (0xc0029dc8c0) Stream added, broadcasting: 1 I0311 14:11:20.354577 6 log.go:172] (0xc0037a4dc0) Reply frame received for 1 I0311 14:11:20.354629 6 log.go:172] (0xc0037a4dc0) (0xc001da80a0) Create stream I0311 14:11:20.354651 6 log.go:172] (0xc0037a4dc0) (0xc001da80a0) Stream added, broadcasting: 3 I0311 14:11:20.357509 6 log.go:172] (0xc0037a4dc0) Reply frame received for 3 I0311 14:11:20.357539 6 log.go:172] (0xc0037a4dc0) (0xc00145e000) Create stream I0311 14:11:20.357550 6 log.go:172] (0xc0037a4dc0) (0xc00145e000) Stream added, broadcasting: 5 I0311 14:11:20.358989 6 log.go:172] (0xc0037a4dc0) Reply frame received for 5 I0311 14:11:20.412520 6 log.go:172] (0xc0037a4dc0) Data frame received for 5 I0311 14:11:20.412551 6 log.go:172] (0xc00145e000) (5) Data frame handling I0311 14:11:20.412571 6 log.go:172] (0xc0037a4dc0) Data frame received for 3 I0311 14:11:20.412580 6 log.go:172] (0xc001da80a0) (3) Data frame handling I0311 14:11:20.412589 6 log.go:172] (0xc001da80a0) (3) Data frame sent I0311 14:11:20.412601 6 log.go:172] (0xc0037a4dc0) Data frame received for 3 I0311 14:11:20.412612 6 log.go:172] (0xc001da80a0) (3) Data frame handling I0311 14:11:20.413807 6 log.go:172] (0xc0037a4dc0) Data frame received for 1 I0311 14:11:20.413825 6 log.go:172] (0xc0029dc8c0) (1) Data frame handling I0311 14:11:20.413846 6 log.go:172] (0xc0029dc8c0) (1) Data frame sent I0311 14:11:20.413975 6 log.go:172] (0xc0037a4dc0) (0xc0029dc8c0) Stream removed, broadcasting: 1 I0311 14:11:20.414071 6 log.go:172] (0xc0037a4dc0) 
(0xc0029dc8c0) Stream removed, broadcasting: 1 I0311 14:11:20.414086 6 log.go:172] (0xc0037a4dc0) (0xc001da80a0) Stream removed, broadcasting: 3 I0311 14:11:20.414101 6 log.go:172] (0xc0037a4dc0) (0xc00145e000) Stream removed, broadcasting: 5 Mar 11 14:11:20.414: INFO: Exec stderr: "" Mar 11 14:11:20.414: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6401 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 11 14:11:20.414: INFO: >>> kubeConfig: /root/.kube/config I0311 14:11:20.414292 6 log.go:172] (0xc0037a4dc0) Go away received I0311 14:11:20.439781 6 log.go:172] (0xc001ec4790) (0xc001480460) Create stream I0311 14:11:20.439806 6 log.go:172] (0xc001ec4790) (0xc001480460) Stream added, broadcasting: 1 I0311 14:11:20.442075 6 log.go:172] (0xc001ec4790) Reply frame received for 1 I0311 14:11:20.442108 6 log.go:172] (0xc001ec4790) (0xc002d92000) Create stream I0311 14:11:20.442163 6 log.go:172] (0xc001ec4790) (0xc002d92000) Stream added, broadcasting: 3 I0311 14:11:20.443004 6 log.go:172] (0xc001ec4790) Reply frame received for 3 I0311 14:11:20.443061 6 log.go:172] (0xc001ec4790) (0xc002d920a0) Create stream I0311 14:11:20.443091 6 log.go:172] (0xc001ec4790) (0xc002d920a0) Stream added, broadcasting: 5 I0311 14:11:20.443885 6 log.go:172] (0xc001ec4790) Reply frame received for 5 I0311 14:11:20.492225 6 log.go:172] (0xc001ec4790) Data frame received for 3 I0311 14:11:20.492263 6 log.go:172] (0xc002d92000) (3) Data frame handling I0311 14:11:20.492273 6 log.go:172] (0xc002d92000) (3) Data frame sent I0311 14:11:20.492279 6 log.go:172] (0xc001ec4790) Data frame received for 3 I0311 14:11:20.492287 6 log.go:172] (0xc002d92000) (3) Data frame handling I0311 14:11:20.492318 6 log.go:172] (0xc001ec4790) Data frame received for 5 I0311 14:11:20.492331 6 log.go:172] (0xc002d920a0) (5) Data frame handling I0311 14:11:20.493422 6 log.go:172] (0xc001ec4790) Data frame received for 1 I0311 14:11:20.493441 6 log.go:172] (0xc001480460) (1) Data frame handling I0311 14:11:20.493456 6 log.go:172] (0xc001480460) (1) Data frame sent I0311 14:11:20.493470 6 log.go:172] (0xc001ec4790) (0xc001480460) Stream removed, broadcasting: 1 I0311 14:11:20.493497 6 log.go:172] (0xc001ec4790) Go away received I0311 14:11:20.493598 6 log.go:172] (0xc001ec4790) (0xc001480460) Stream removed, broadcasting: 1 I0311 14:11:20.493614 6 log.go:172] (0xc001ec4790) (0xc002d92000) Stream removed, broadcasting: 3 I0311 14:11:20.493625 6 log.go:172] (0xc001ec4790) (0xc002d920a0) Stream removed, broadcasting: 5 Mar 11 14:11:20.493: INFO: Exec stderr: "" Mar 11 14:11:20.493: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6401 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 11 14:11:20.493: INFO: >>> kubeConfig: /root/.kube/config I0311 14:11:20.515714 6 log.go:172] (0xc001ec4fd0) (0xc001480aa0) Create stream I0311 14:11:20.515735 6 log.go:172] (0xc001ec4fd0) (0xc001480aa0) Stream added, broadcasting: 1 I0311 14:11:20.517794 6 log.go:172] (0xc001ec4fd0) Reply frame received for 1 I0311 14:11:20.517827 6 log.go:172] (0xc001ec4fd0) (0xc00145e0a0) Create stream I0311 14:11:20.517836 6 log.go:172] (0xc001ec4fd0) (0xc00145e0a0) Stream added, broadcasting: 3 I0311 14:11:20.518547 6 log.go:172] (0xc001ec4fd0) Reply frame received for 3 I0311 14:11:20.518571 6 log.go:172] (0xc001ec4fd0) (0xc002d92140) Create stream I0311 14:11:20.518578 6 
log.go:172] (0xc001ec4fd0) (0xc002d92140) Stream added, broadcasting: 5 I0311 14:11:20.519088 6 log.go:172] (0xc001ec4fd0) Reply frame received for 5 I0311 14:11:20.586821 6 log.go:172] (0xc001ec4fd0) Data frame received for 3 I0311 14:11:20.586887 6 log.go:172] (0xc00145e0a0) (3) Data frame handling I0311 14:11:20.586913 6 log.go:172] (0xc00145e0a0) (3) Data frame sent I0311 14:11:20.587022 6 log.go:172] (0xc001ec4fd0) Data frame received for 3 I0311 14:11:20.587056 6 log.go:172] (0xc00145e0a0) (3) Data frame handling I0311 14:11:20.587567 6 log.go:172] (0xc001ec4fd0) Data frame received for 5 I0311 14:11:20.587584 6 log.go:172] (0xc002d92140) (5) Data frame handling I0311 14:11:20.588931 6 log.go:172] (0xc001ec4fd0) Data frame received for 1 I0311 14:11:20.588952 6 log.go:172] (0xc001480aa0) (1) Data frame handling I0311 14:11:20.588968 6 log.go:172] (0xc001480aa0) (1) Data frame sent I0311 14:11:20.588982 6 log.go:172] (0xc001ec4fd0) (0xc001480aa0) Stream removed, broadcasting: 1 I0311 14:11:20.588995 6 log.go:172] (0xc001ec4fd0) Go away received I0311 14:11:20.589155 6 log.go:172] (0xc001ec4fd0) (0xc001480aa0) Stream removed, broadcasting: 1 I0311 14:11:20.589179 6 log.go:172] (0xc001ec4fd0) (0xc00145e0a0) Stream removed, broadcasting: 3 I0311 14:11:20.589194 6 log.go:172] (0xc001ec4fd0) (0xc002d92140) Stream removed, broadcasting: 5 Mar 11 14:11:20.589: INFO: Exec stderr: "" Mar 11 14:11:20.589: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6401 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 11 14:11:20.589: INFO: >>> kubeConfig: /root/.kube/config I0311 14:11:20.614925 6 log.go:172] (0xc000a2c9a0) (0xc002fe4460) Create stream I0311 14:11:20.614995 6 log.go:172] (0xc000a2c9a0) (0xc002fe4460) Stream added, broadcasting: 1 I0311 14:11:20.617220 6 log.go:172] (0xc000a2c9a0) Reply frame received for 1 I0311 14:11:20.617262 6 log.go:172] (0xc000a2c9a0) (0xc003650000) Create stream I0311 14:11:20.617274 6 log.go:172] (0xc000a2c9a0) (0xc003650000) Stream added, broadcasting: 3 I0311 14:11:20.618035 6 log.go:172] (0xc000a2c9a0) Reply frame received for 3 I0311 14:11:20.618063 6 log.go:172] (0xc000a2c9a0) (0xc0036500a0) Create stream I0311 14:11:20.618072 6 log.go:172] (0xc000a2c9a0) (0xc0036500a0) Stream added, broadcasting: 5 I0311 14:11:20.618863 6 log.go:172] (0xc000a2c9a0) Reply frame received for 5 I0311 14:11:20.672330 6 log.go:172] (0xc000a2c9a0) Data frame received for 5 I0311 14:11:20.672356 6 log.go:172] (0xc0036500a0) (5) Data frame handling I0311 14:11:20.672388 6 log.go:172] (0xc000a2c9a0) Data frame received for 3 I0311 14:11:20.672420 6 log.go:172] (0xc003650000) (3) Data frame handling I0311 14:11:20.672443 6 log.go:172] (0xc003650000) (3) Data frame sent I0311 14:11:20.672457 6 log.go:172] (0xc000a2c9a0) Data frame received for 3 I0311 14:11:20.672469 6 log.go:172] (0xc003650000) (3) Data frame handling I0311 14:11:20.673450 6 log.go:172] (0xc000a2c9a0) Data frame received for 1 I0311 14:11:20.673491 6 log.go:172] (0xc002fe4460) (1) Data frame handling I0311 14:11:20.673504 6 log.go:172] (0xc002fe4460) (1) Data frame sent I0311 14:11:20.673516 6 log.go:172] (0xc000a2c9a0) (0xc002fe4460) Stream removed, broadcasting: 1 I0311 14:11:20.673585 6 log.go:172] (0xc000a2c9a0) (0xc002fe4460) Stream removed, broadcasting: 1 I0311 14:11:20.673596 6 log.go:172] (0xc000a2c9a0) (0xc003650000) Stream removed, broadcasting: 3 I0311 14:11:20.673606 6 log.go:172] (0xc000a2c9a0) Go 
away received I0311 14:11:20.673642 6 log.go:172] (0xc000a2c9a0) (0xc0036500a0) Stream removed, broadcasting: 5 Mar 11 14:11:20.673: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Mar 11 14:11:20.673: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6401 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 11 14:11:20.673: INFO: >>> kubeConfig: /root/.kube/config I0311 14:11:20.695094 6 log.go:172] (0xc000a2d600) (0xc002fe4960) Create stream I0311 14:11:20.695126 6 log.go:172] (0xc000a2d600) (0xc002fe4960) Stream added, broadcasting: 1 I0311 14:11:20.696679 6 log.go:172] (0xc000a2d600) Reply frame received for 1 I0311 14:11:20.696705 6 log.go:172] (0xc000a2d600) (0xc003650140) Create stream I0311 14:11:20.696711 6 log.go:172] (0xc000a2d600) (0xc003650140) Stream added, broadcasting: 3 I0311 14:11:20.697365 6 log.go:172] (0xc000a2d600) Reply frame received for 3 I0311 14:11:20.697392 6 log.go:172] (0xc000a2d600) (0xc001480b40) Create stream I0311 14:11:20.697400 6 log.go:172] (0xc000a2d600) (0xc001480b40) Stream added, broadcasting: 5 I0311 14:11:20.698009 6 log.go:172] (0xc000a2d600) Reply frame received for 5 I0311 14:11:20.747918 6 log.go:172] (0xc000a2d600) Data frame received for 3 I0311 14:11:20.747955 6 log.go:172] (0xc003650140) (3) Data frame handling I0311 14:11:20.747968 6 log.go:172] (0xc003650140) (3) Data frame sent I0311 14:11:20.747977 6 log.go:172] (0xc000a2d600) Data frame received for 3 I0311 14:11:20.747990 6 log.go:172] (0xc003650140) (3) Data frame handling I0311 14:11:20.748013 6 log.go:172] (0xc000a2d600) Data frame received for 5 I0311 14:11:20.748022 6 log.go:172] (0xc001480b40) (5) Data frame handling I0311 14:11:20.748978 6 log.go:172] (0xc000a2d600) Data frame received for 1 I0311 14:11:20.748996 6 log.go:172] (0xc002fe4960) (1) Data frame handling I0311 14:11:20.749022 6 log.go:172] (0xc002fe4960) (1) Data frame sent I0311 14:11:20.749040 6 log.go:172] (0xc000a2d600) (0xc002fe4960) Stream removed, broadcasting: 1 I0311 14:11:20.749057 6 log.go:172] (0xc000a2d600) Go away received I0311 14:11:20.749205 6 log.go:172] (0xc000a2d600) (0xc002fe4960) Stream removed, broadcasting: 1 I0311 14:11:20.749226 6 log.go:172] (0xc000a2d600) (0xc003650140) Stream removed, broadcasting: 3 I0311 14:11:20.749250 6 log.go:172] (0xc000a2d600) (0xc001480b40) Stream removed, broadcasting: 5 Mar 11 14:11:20.749: INFO: Exec stderr: "" Mar 11 14:11:20.749: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6401 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 11 14:11:20.749: INFO: >>> kubeConfig: /root/.kube/config I0311 14:11:20.768106 6 log.go:172] (0xc0037a4b00) (0xc002d925a0) Create stream I0311 14:11:20.768126 6 log.go:172] (0xc0037a4b00) (0xc002d925a0) Stream added, broadcasting: 1 I0311 14:11:20.769532 6 log.go:172] (0xc0037a4b00) Reply frame received for 1 I0311 14:11:20.769552 6 log.go:172] (0xc0037a4b00) (0xc0036501e0) Create stream I0311 14:11:20.769560 6 log.go:172] (0xc0037a4b00) (0xc0036501e0) Stream added, broadcasting: 3 I0311 14:11:20.770230 6 log.go:172] (0xc0037a4b00) Reply frame received for 3 I0311 14:11:20.770256 6 log.go:172] (0xc0037a4b00) (0xc003650280) Create stream I0311 14:11:20.770264 6 log.go:172] (0xc0037a4b00) (0xc003650280) Stream added, broadcasting: 5 I0311 14:11:20.770893 6 
log.go:172] (0xc0037a4b00) Reply frame received for 5 I0311 14:11:20.832940 6 log.go:172] (0xc0037a4b00) Data frame received for 5 I0311 14:11:20.832991 6 log.go:172] (0xc003650280) (5) Data frame handling I0311 14:11:20.833012 6 log.go:172] (0xc0037a4b00) Data frame received for 3 I0311 14:11:20.833020 6 log.go:172] (0xc0036501e0) (3) Data frame handling I0311 14:11:20.833030 6 log.go:172] (0xc0036501e0) (3) Data frame sent I0311 14:11:20.833037 6 log.go:172] (0xc0037a4b00) Data frame received for 3 I0311 14:11:20.833041 6 log.go:172] (0xc0036501e0) (3) Data frame handling I0311 14:11:20.833825 6 log.go:172] (0xc0037a4b00) Data frame received for 1 I0311 14:11:20.833860 6 log.go:172] (0xc002d925a0) (1) Data frame handling I0311 14:11:20.833880 6 log.go:172] (0xc002d925a0) (1) Data frame sent I0311 14:11:20.833902 6 log.go:172] (0xc0037a4b00) (0xc002d925a0) Stream removed, broadcasting: 1 I0311 14:11:20.833927 6 log.go:172] (0xc0037a4b00) Go away received I0311 14:11:20.834038 6 log.go:172] (0xc0037a4b00) (0xc002d925a0) Stream removed, broadcasting: 1 I0311 14:11:20.834056 6 log.go:172] (0xc0037a4b00) (0xc0036501e0) Stream removed, broadcasting: 3 I0311 14:11:20.834064 6 log.go:172] (0xc0037a4b00) (0xc003650280) Stream removed, broadcasting: 5 Mar 11 14:11:20.834: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Mar 11 14:11:20.834: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6401 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 11 14:11:20.834: INFO: >>> kubeConfig: /root/.kube/config I0311 14:11:20.855700 6 log.go:172] (0xc000a2de40) (0xc002fe4d20) Create stream I0311 14:11:20.855721 6 log.go:172] (0xc000a2de40) (0xc002fe4d20) Stream added, broadcasting: 1 I0311 14:11:20.857467 6 log.go:172] (0xc000a2de40) Reply frame received for 1 I0311 14:11:20.857488 6 log.go:172] (0xc000a2de40) (0xc001480be0) Create stream I0311 14:11:20.857495 6 log.go:172] (0xc000a2de40) (0xc001480be0) Stream added, broadcasting: 3 I0311 14:11:20.858251 6 log.go:172] (0xc000a2de40) Reply frame received for 3 I0311 14:11:20.858291 6 log.go:172] (0xc000a2de40) (0xc001480e60) Create stream I0311 14:11:20.858305 6 log.go:172] (0xc000a2de40) (0xc001480e60) Stream added, broadcasting: 5 I0311 14:11:20.859498 6 log.go:172] (0xc000a2de40) Reply frame received for 5 I0311 14:11:20.913121 6 log.go:172] (0xc000a2de40) Data frame received for 5 I0311 14:11:20.913138 6 log.go:172] (0xc001480e60) (5) Data frame handling I0311 14:11:20.913179 6 log.go:172] (0xc000a2de40) Data frame received for 3 I0311 14:11:20.913209 6 log.go:172] (0xc001480be0) (3) Data frame handling I0311 14:11:20.913238 6 log.go:172] (0xc001480be0) (3) Data frame sent I0311 14:11:20.913253 6 log.go:172] (0xc000a2de40) Data frame received for 3 I0311 14:11:20.913263 6 log.go:172] (0xc001480be0) (3) Data frame handling I0311 14:11:20.914677 6 log.go:172] (0xc000a2de40) Data frame received for 1 I0311 14:11:20.914698 6 log.go:172] (0xc002fe4d20) (1) Data frame handling I0311 14:11:20.914714 6 log.go:172] (0xc002fe4d20) (1) Data frame sent I0311 14:11:20.914781 6 log.go:172] (0xc000a2de40) (0xc002fe4d20) Stream removed, broadcasting: 1 I0311 14:11:20.914864 6 log.go:172] (0xc000a2de40) (0xc002fe4d20) Stream removed, broadcasting: 1 I0311 14:11:20.914879 6 log.go:172] (0xc000a2de40) (0xc001480be0) Stream removed, broadcasting: 3 I0311 14:11:20.914891 6 log.go:172] 
(0xc000a2de40) (0xc001480e60) Stream removed, broadcasting: 5 Mar 11 14:11:20.914: INFO: Exec stderr: "" I0311 14:11:20.914930 6 log.go:172] (0xc000a2de40) Go away received Mar 11 14:11:20.914: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6401 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 11 14:11:20.914: INFO: >>> kubeConfig: /root/.kube/config I0311 14:11:20.935496 6 log.go:172] (0xc0037a5d90) (0xc002d928c0) Create stream I0311 14:11:20.935521 6 log.go:172] (0xc0037a5d90) (0xc002d928c0) Stream added, broadcasting: 1 I0311 14:11:20.937289 6 log.go:172] (0xc0037a5d90) Reply frame received for 1 I0311 14:11:20.937318 6 log.go:172] (0xc0037a5d90) (0xc002fe4dc0) Create stream I0311 14:11:20.937327 6 log.go:172] (0xc0037a5d90) (0xc002fe4dc0) Stream added, broadcasting: 3 I0311 14:11:20.937926 6 log.go:172] (0xc0037a5d90) Reply frame received for 3 I0311 14:11:20.937962 6 log.go:172] (0xc0037a5d90) (0xc002fe4e60) Create stream I0311 14:11:20.937976 6 log.go:172] (0xc0037a5d90) (0xc002fe4e60) Stream added, broadcasting: 5 I0311 14:11:20.938597 6 log.go:172] (0xc0037a5d90) Reply frame received for 5 I0311 14:11:20.996634 6 log.go:172] (0xc0037a5d90) Data frame received for 3 I0311 14:11:20.996661 6 log.go:172] (0xc002fe4dc0) (3) Data frame handling I0311 14:11:20.996675 6 log.go:172] (0xc002fe4dc0) (3) Data frame sent I0311 14:11:20.996684 6 log.go:172] (0xc0037a5d90) Data frame received for 3 I0311 14:11:20.996690 6 log.go:172] (0xc002fe4dc0) (3) Data frame handling I0311 14:11:20.996706 6 log.go:172] (0xc0037a5d90) Data frame received for 5 I0311 14:11:20.996717 6 log.go:172] (0xc002fe4e60) (5) Data frame handling I0311 14:11:20.997911 6 log.go:172] (0xc0037a5d90) Data frame received for 1 I0311 14:11:20.997927 6 log.go:172] (0xc002d928c0) (1) Data frame handling I0311 14:11:20.997947 6 log.go:172] (0xc002d928c0) (1) Data frame sent I0311 14:11:20.997962 6 log.go:172] (0xc0037a5d90) (0xc002d928c0) Stream removed, broadcasting: 1 I0311 14:11:20.997977 6 log.go:172] (0xc0037a5d90) Go away received I0311 14:11:20.998166 6 log.go:172] (0xc0037a5d90) (0xc002d928c0) Stream removed, broadcasting: 1 I0311 14:11:20.998195 6 log.go:172] (0xc0037a5d90) (0xc002fe4dc0) Stream removed, broadcasting: 3 I0311 14:11:20.998206 6 log.go:172] (0xc0037a5d90) (0xc002fe4e60) Stream removed, broadcasting: 5 Mar 11 14:11:20.998: INFO: Exec stderr: "" Mar 11 14:11:20.998: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6401 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 11 14:11:20.998: INFO: >>> kubeConfig: /root/.kube/config I0311 14:11:21.024596 6 log.go:172] (0xc001122c60) (0xc002fe5180) Create stream I0311 14:11:21.024613 6 log.go:172] (0xc001122c60) (0xc002fe5180) Stream added, broadcasting: 1 I0311 14:11:21.026614 6 log.go:172] (0xc001122c60) Reply frame received for 1 I0311 14:11:21.026642 6 log.go:172] (0xc001122c60) (0xc002d92960) Create stream I0311 14:11:21.026650 6 log.go:172] (0xc001122c60) (0xc002d92960) Stream added, broadcasting: 3 I0311 14:11:21.027275 6 log.go:172] (0xc001122c60) Reply frame received for 3 I0311 14:11:21.027304 6 log.go:172] (0xc001122c60) (0xc002d92a00) Create stream I0311 14:11:21.027316 6 log.go:172] (0xc001122c60) (0xc002d92a00) Stream added, broadcasting: 5 I0311 14:11:21.027989 6 log.go:172] (0xc001122c60) Reply frame received for 5 I0311 
14:11:21.084764 6 log.go:172] (0xc001122c60) Data frame received for 5 I0311 14:11:21.084792 6 log.go:172] (0xc002d92a00) (5) Data frame handling I0311 14:11:21.084809 6 log.go:172] (0xc001122c60) Data frame received for 3 I0311 14:11:21.084814 6 log.go:172] (0xc002d92960) (3) Data frame handling I0311 14:11:21.084824 6 log.go:172] (0xc002d92960) (3) Data frame sent I0311 14:11:21.084833 6 log.go:172] (0xc001122c60) Data frame received for 3 I0311 14:11:21.084837 6 log.go:172] (0xc002d92960) (3) Data frame handling I0311 14:11:21.086081 6 log.go:172] (0xc001122c60) Data frame received for 1 I0311 14:11:21.086139 6 log.go:172] (0xc002fe5180) (1) Data frame handling I0311 14:11:21.086157 6 log.go:172] (0xc002fe5180) (1) Data frame sent I0311 14:11:21.086170 6 log.go:172] (0xc001122c60) (0xc002fe5180) Stream removed, broadcasting: 1 I0311 14:11:21.086206 6 log.go:172] (0xc001122c60) Go away received I0311 14:11:21.086337 6 log.go:172] (0xc001122c60) (0xc002fe5180) Stream removed, broadcasting: 1 I0311 14:11:21.086358 6 log.go:172] (0xc001122c60) (0xc002d92960) Stream removed, broadcasting: 3 I0311 14:11:21.086370 6 log.go:172] (0xc001122c60) (0xc002d92a00) Stream removed, broadcasting: 5 Mar 11 14:11:21.086: INFO: Exec stderr: "" Mar 11 14:11:21.086: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6401 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 11 14:11:21.086: INFO: >>> kubeConfig: /root/.kube/config I0311 14:11:21.107948 6 log.go:172] (0xc0039b1550) (0xc00145e640) Create stream I0311 14:11:21.107971 6 log.go:172] (0xc0039b1550) (0xc00145e640) Stream added, broadcasting: 1 I0311 14:11:21.111518 6 log.go:172] (0xc0039b1550) Reply frame received for 1 I0311 14:11:21.111554 6 log.go:172] (0xc0039b1550) (0xc002fe5220) Create stream I0311 14:11:21.111589 6 log.go:172] (0xc0039b1550) (0xc002fe5220) Stream added, broadcasting: 3 I0311 14:11:21.112909 6 log.go:172] (0xc0039b1550) Reply frame received for 3 I0311 14:11:21.112948 6 log.go:172] (0xc0039b1550) (0xc002d92aa0) Create stream I0311 14:11:21.112962 6 log.go:172] (0xc0039b1550) (0xc002d92aa0) Stream added, broadcasting: 5 I0311 14:11:21.114640 6 log.go:172] (0xc0039b1550) Reply frame received for 5 I0311 14:11:21.168466 6 log.go:172] (0xc0039b1550) Data frame received for 5 I0311 14:11:21.168499 6 log.go:172] (0xc002d92aa0) (5) Data frame handling I0311 14:11:21.168520 6 log.go:172] (0xc0039b1550) Data frame received for 3 I0311 14:11:21.168526 6 log.go:172] (0xc002fe5220) (3) Data frame handling I0311 14:11:21.168539 6 log.go:172] (0xc002fe5220) (3) Data frame sent I0311 14:11:21.168545 6 log.go:172] (0xc0039b1550) Data frame received for 3 I0311 14:11:21.168552 6 log.go:172] (0xc002fe5220) (3) Data frame handling I0311 14:11:21.169412 6 log.go:172] (0xc0039b1550) Data frame received for 1 I0311 14:11:21.169434 6 log.go:172] (0xc00145e640) (1) Data frame handling I0311 14:11:21.169450 6 log.go:172] (0xc00145e640) (1) Data frame sent I0311 14:11:21.169464 6 log.go:172] (0xc0039b1550) (0xc00145e640) Stream removed, broadcasting: 1 I0311 14:11:21.169483 6 log.go:172] (0xc0039b1550) Go away received I0311 14:11:21.169581 6 log.go:172] (0xc0039b1550) (0xc00145e640) Stream removed, broadcasting: 1 I0311 14:11:21.169604 6 log.go:172] (0xc0039b1550) (0xc002fe5220) Stream removed, broadcasting: 3 I0311 14:11:21.169615 6 log.go:172] (0xc0039b1550) (0xc002d92aa0) Stream removed, broadcasting: 5 Mar 11 14:11:21.169: INFO: Exec 
stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 14:11:21.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-6401" for this suite. Mar 11 14:11:59.181: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 14:11:59.278: INFO: namespace e2e-kubelet-etc-hosts-6401 deletion completed in 38.10470144s • [SLOW TEST:47.081 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 14:11:59.278: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Mar 11 14:12:03.394: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 11 14:12:03.401: INFO: Pod pod-with-prestop-http-hook still exists Mar 11 14:12:05.401: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 11 14:12:05.405: INFO: Pod pod-with-prestop-http-hook still exists Mar 11 14:12:07.401: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 11 14:12:07.405: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 14:12:07.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-5215" for this suite. 
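The lifecycle-hook spec above gives a pod a preStop httpGet hook and checks that the kubelet fires the request at a separate handler pod during deletion, before the container receives SIGTERM. The relevant stanza looks roughly like this (image, path, port, and handler IP are placeholders):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-with-prestop-http-hook
    spec:
      containers:
      - name: main
        image: docker.io/library/nginx:1.14-alpine
        lifecycle:
          preStop:
            httpGet:
              path: /echo?msg=prestop
              port: 8080
              host: 10.244.2.1    # IP of the hook-handler pod in this sketch
    EOF

Deleting the pod then triggers the GET as part of termination.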
Mar 11 14:12:29.428: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 14:12:29.493: INFO: namespace container-lifecycle-hook-5215 deletion completed in 22.078083666s • [SLOW TEST:30.215 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 14:12:29.493: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name secret-emptykey-test-0e64594c-c255-4015-ac85-f90534f5b216 [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 14:12:29.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7071" for this suite. 
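The Secrets spec above only needs to attempt creation: Secret keys are validated like ConfigMap keys (alphanumerics plus '-', '_' and '.'), so an empty key is rejected by the apiserver and nothing is ever stored. A sketch that provokes the same validation error:

    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: Secret
    metadata:
      name: secret-emptykey-demo
    data:
      "": dGVzdA==
    EOF
    # expected: creation fails with a validation error on the empty key;
    # no Secret object is created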
Mar 11 14:12:35.551: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 14:12:35.635: INFO: namespace secrets-7071 deletion completed in 6.099385892s • [SLOW TEST:6.142 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 14:12:35.635: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-9ac746a7-ed5f-4490-aa34-57bed65b3e37 STEP: Creating a pod to test consume configMaps Mar 11 14:12:35.704: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-81e704de-a195-435c-a877-dab2f4a29a0f" in namespace "projected-5421" to be "success or failure" Mar 11 14:12:35.710: INFO: Pod "pod-projected-configmaps-81e704de-a195-435c-a877-dab2f4a29a0f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.083947ms Mar 11 14:12:37.714: INFO: Pod "pod-projected-configmaps-81e704de-a195-435c-a877-dab2f4a29a0f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010099767s STEP: Saw pod success Mar 11 14:12:37.714: INFO: Pod "pod-projected-configmaps-81e704de-a195-435c-a877-dab2f4a29a0f" satisfied condition "success or failure" Mar 11 14:12:37.717: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-81e704de-a195-435c-a877-dab2f4a29a0f container projected-configmap-volume-test: STEP: delete the pod Mar 11 14:12:37.736: INFO: Waiting for pod pod-projected-configmaps-81e704de-a195-435c-a877-dab2f4a29a0f to disappear Mar 11 14:12:37.752: INFO: Pod pod-projected-configmaps-81e704de-a195-435c-a877-dab2f4a29a0f no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 14:12:37.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5421" for this suite. 
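The projected-ConfigMap spec above sets defaultMode on the volume and asserts on the test container's output, which is why a short-lived "success or failure" pod plus its log is all the framework needs. A pod of the same shape (names illustrative; the referenced ConfigMap must already exist):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-mode-demo
    spec:
      restartPolicy: Never
      containers:
      - name: projected-configmap-volume-test
        image: docker.io/library/busybox:1.29
        command: ["sh", "-c", "ls -l /etc/projected"]   # file modes should show r-------- (0400)
        volumeMounts:
        - name: cfg
          mountPath: /etc/projected
      volumes:
      - name: cfg
        projected:
          defaultMode: 0400
          sources:
          - configMap:
              name: demo-config    # any existing ConfigMap
    EOF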
Mar 11 14:12:43.774: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 14:12:43.851: INFO: namespace projected-5421 deletion completed in 6.094426455s • [SLOW TEST:8.216 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 14:12:43.851: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC Mar 11 14:12:43.912: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-632' Mar 11 14:12:45.615: INFO: stderr: "" Mar 11 14:12:45.615: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. Mar 11 14:12:46.628: INFO: Selector matched 1 pods for map[app:redis] Mar 11 14:12:46.628: INFO: Found 0 / 1 Mar 11 14:12:47.620: INFO: Selector matched 1 pods for map[app:redis] Mar 11 14:12:47.620: INFO: Found 0 / 1 Mar 11 14:12:48.619: INFO: Selector matched 1 pods for map[app:redis] Mar 11 14:12:48.619: INFO: Found 1 / 1 Mar 11 14:12:48.619: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Mar 11 14:12:48.622: INFO: Selector matched 1 pods for map[app:redis] Mar 11 14:12:48.622: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Mar 11 14:12:48.622: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-76z2j --namespace=kubectl-632 -p {"metadata":{"annotations":{"x":"y"}}}' Mar 11 14:12:48.721: INFO: stderr: "" Mar 11 14:12:48.721: INFO: stdout: "pod/redis-master-76z2j patched\n" STEP: checking annotations Mar 11 14:12:48.731: INFO: Selector matched 1 pods for map[app:redis] Mar 11 14:12:48.731: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 14:12:48.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-632" for this suite. 
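The patch spec above applies a strategic-merge patch that touches only metadata.annotations, leaving the pod spec untouched, so the container keeps running. Stand-alone, with a placeholder pod name:

    kubectl patch pod redis-master-xxxxx -p '{"metadata":{"annotations":{"x":"y"}}}'
    # verify the annotation landed without a restart
    kubectl get pod redis-master-xxxxx -o jsonpath='{.metadata.annotations.x}'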
Mar 11 14:13:10.746: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 14:13:10.824: INFO: namespace kubectl-632 deletion completed in 22.090970535s • [SLOW TEST:26.973 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 14:13:10.824: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override all Mar 11 14:13:10.880: INFO: Waiting up to 5m0s for pod "client-containers-d7a441d2-755b-42f6-8716-870232dc7374" in namespace "containers-8951" to be "success or failure" Mar 11 14:13:10.884: INFO: Pod "client-containers-d7a441d2-755b-42f6-8716-870232dc7374": Phase="Pending", Reason="", readiness=false. Elapsed: 3.431873ms Mar 11 14:13:12.887: INFO: Pod "client-containers-d7a441d2-755b-42f6-8716-870232dc7374": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006544879s Mar 11 14:13:14.890: INFO: Pod "client-containers-d7a441d2-755b-42f6-8716-870232dc7374": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009327375s STEP: Saw pod success Mar 11 14:13:14.890: INFO: Pod "client-containers-d7a441d2-755b-42f6-8716-870232dc7374" satisfied condition "success or failure" Mar 11 14:13:14.892: INFO: Trying to get logs from node iruya-worker2 pod client-containers-d7a441d2-755b-42f6-8716-870232dc7374 container test-container: STEP: delete the pod Mar 11 14:13:14.909: INFO: Waiting for pod client-containers-d7a441d2-755b-42f6-8716-870232dc7374 to disappear Mar 11 14:13:14.920: INFO: Pod client-containers-d7a441d2-755b-42f6-8716-870232dc7374 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 14:13:14.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-8951" for this suite. 
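The Docker Containers spec above overrides both halves of the image's startup line: command replaces the image ENTRYPOINT and args replaces its CMD. A minimal pod that exercises the same override (names illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: override-demo
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: docker.io/library/busybox:1.29
        command: ["/bin/echo"]         # overrides the image ENTRYPOINT
        args: ["hello", "override"]    # overrides the image CMD
    EOF

The container prints "hello override" and exits 0, the same run-to-completion pattern the spec asserts on.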
Mar 11 14:13:20.936: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 14:13:21.019: INFO: namespace containers-8951 deletion completed in 6.097067829s • [SLOW TEST:10.194 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 14:13:21.019: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-2563 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-2563 STEP: Creating statefulset with conflicting port in namespace statefulset-2563 STEP: Waiting until pod test-pod starts running in namespace statefulset-2563 STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace statefulset-2563 Mar 11 14:13:23.118: INFO: Observed stateful pod in namespace: statefulset-2563, name: ss-0, uid: 9db7a4df-1d31-475d-87ef-113b50b86774, status phase: Pending. Waiting for statefulset controller to delete. Mar 11 14:13:24.284: INFO: Observed stateful pod in namespace: statefulset-2563, name: ss-0, uid: 9db7a4df-1d31-475d-87ef-113b50b86774, status phase: Failed. Waiting for statefulset controller to delete. Mar 11 14:13:24.324: INFO: Observed stateful pod in namespace: statefulset-2563, name: ss-0, uid: 9db7a4df-1d31-475d-87ef-113b50b86774, status phase: Failed. Waiting for statefulset controller to delete.
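The conflict being staged above is two pods asking for the same hostPort on the same node: while test-pod holds the port, ss-0 can never bind it, so it goes Failed and the statefulset controller deletes and recreates it, which is exactly the churn the observations here record. A sketch of the shape of the blocking pod (node name, image, and port are all assumptions, not taken from the log):

apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  nodeName: iruya-worker      # assumed: pinned to the node chosen in the step above
  containers:
  - name: webserver
    image: nginx              # assumed image
    ports:
    - containerPort: 80
      hostPort: 21017         # assumed port; the statefulset's pod template requests the same hostPort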
Mar 11 14:13:24.331: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-2563 STEP: Removing pod with conflicting port in namespace statefulset-2563 STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-2563 and is in the running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Mar 11 14:13:26.408: INFO: Deleting all statefulsets in ns statefulset-2563 Mar 11 14:13:26.410: INFO: Scaling statefulset ss to 0 Mar 11 14:13:36.426: INFO: Waiting for statefulset status.replicas updated to 0 Mar 11 14:13:36.429: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 14:13:36.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2563" for this suite. Mar 11 14:13:42.473: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 14:13:42.536: INFO: namespace statefulset-2563 deletion completed in 6.081421759s • [SLOW TEST:21.517 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 14:13:42.536: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-be34ff71-1bbc-40fb-9820-27f473c59516 STEP: Creating a pod to test consume configMaps Mar 11 14:13:42.592: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-859faf41-189d-4cac-9734-1e09464a27c3" in namespace "projected-9848" to be "success or failure" Mar 11 14:13:42.598: INFO: Pod "pod-projected-configmaps-859faf41-189d-4cac-9734-1e09464a27c3": Phase="Pending", Reason="", readiness=false. Elapsed: 5.370332ms Mar 11 14:13:44.601: INFO: Pod "pod-projected-configmaps-859faf41-189d-4cac-9734-1e09464a27c3": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 2.008770541s STEP: Saw pod success Mar 11 14:13:44.601: INFO: Pod "pod-projected-configmaps-859faf41-189d-4cac-9734-1e09464a27c3" satisfied condition "success or failure" Mar 11 14:13:44.603: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-859faf41-189d-4cac-9734-1e09464a27c3 container projected-configmap-volume-test: STEP: delete the pod Mar 11 14:13:44.629: INFO: Waiting for pod pod-projected-configmaps-859faf41-189d-4cac-9734-1e09464a27c3 to disappear Mar 11 14:13:44.633: INFO: Pod pod-projected-configmaps-859faf41-189d-4cac-9734-1e09464a27c3 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 14:13:44.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9848" for this suite. Mar 11 14:13:50.648: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 14:13:50.728: INFO: namespace projected-9848 deletion completed in 6.091798276s • [SLOW TEST:8.192 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 14:13:50.729: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0311 14:14:30.790012 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
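The "delete options" in this garbage-collector test's name are the DeleteOptions sent when the RC is deleted: with propagationPolicy Orphan, the garbage collector must leave the RC's pods in place, and the 30-second wait above checks that none of them disappear. Roughly (the RC name is an assumption; the log does not show it):

# API-level delete options that orphan the dependents:
kind: DeleteOptions
apiVersion: v1
propagationPolicy: Orphan

# kubectl spelling of the same request in this kubectl generation (RC name assumed):
#   kubectl delete rc simpletest-rc --cascade=false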
Mar 11 14:14:30.790: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 14:14:30.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3678" for this suite. Mar 11 14:14:38.808: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 14:14:38.883: INFO: namespace gc-3678 deletion completed in 8.090066574s • [SLOW TEST:48.153 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 14:14:38.883: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 11 14:14:38.965: INFO: Creating ReplicaSet my-hostname-basic-007c3218-c154-453a-859e-018a9abb7bc4 Mar 11 14:14:38.979: INFO: Pod name my-hostname-basic-007c3218-c154-453a-859e-018a9abb7bc4: Found 0 pods out of 1 Mar 11 14:14:43.983: INFO: Pod name my-hostname-basic-007c3218-c154-453a-859e-018a9abb7bc4: Found 1 pods out of 1 Mar 11 14:14:43.983: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-007c3218-c154-453a-859e-018a9abb7bc4" is running Mar 11 14:14:43.986: INFO: Pod "my-hostname-basic-007c3218-c154-453a-859e-018a9abb7bc4-fx97c" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-11 14:14:39 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-11 14:14:40 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-11 14:14:40 +0000 UTC Reason: Message:} 
{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-11 14:14:38 +0000 UTC Reason: Message:}]) Mar 11 14:14:43.986: INFO: Trying to dial the pod Mar 11 14:14:48.999: INFO: Controller my-hostname-basic-007c3218-c154-453a-859e-018a9abb7bc4: Got expected result from replica 1 [my-hostname-basic-007c3218-c154-453a-859e-018a9abb7bc4-fx97c]: "my-hostname-basic-007c3218-c154-453a-859e-018a9abb7bc4-fx97c", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 14:14:48.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-8761" for this suite. Mar 11 14:14:55.015: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 14:14:55.100: INFO: namespace replicaset-8761 deletion completed in 6.097224472s • [SLOW TEST:16.217 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 14:14:55.100: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-lsk7 STEP: Creating a pod to test atomic-volume-subpath Mar 11 14:14:55.164: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-lsk7" in namespace "subpath-127" to be "success or failure" Mar 11 14:14:55.168: INFO: Pod "pod-subpath-test-configmap-lsk7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.697926ms Mar 11 14:14:57.171: INFO: Pod "pod-subpath-test-configmap-lsk7": Phase="Running", Reason="", readiness=true. Elapsed: 2.007467691s Mar 11 14:14:59.175: INFO: Pod "pod-subpath-test-configmap-lsk7": Phase="Running", Reason="", readiness=true. Elapsed: 4.011504644s Mar 11 14:15:01.179: INFO: Pod "pod-subpath-test-configmap-lsk7": Phase="Running", Reason="", readiness=true. Elapsed: 6.015398354s Mar 11 14:15:03.183: INFO: Pod "pod-subpath-test-configmap-lsk7": Phase="Running", Reason="", readiness=true. Elapsed: 8.019215685s Mar 11 14:15:05.187: INFO: Pod "pod-subpath-test-configmap-lsk7": Phase="Running", Reason="", readiness=true. Elapsed: 10.023206325s Mar 11 14:15:07.191: INFO: Pod "pod-subpath-test-configmap-lsk7": Phase="Running", Reason="", readiness=true. 
Elapsed: 12.027291084s Mar 11 14:15:09.196: INFO: Pod "pod-subpath-test-configmap-lsk7": Phase="Running", Reason="", readiness=true. Elapsed: 14.031911622s Mar 11 14:15:11.200: INFO: Pod "pod-subpath-test-configmap-lsk7": Phase="Running", Reason="", readiness=true. Elapsed: 16.035852069s Mar 11 14:15:13.203: INFO: Pod "pod-subpath-test-configmap-lsk7": Phase="Running", Reason="", readiness=true. Elapsed: 18.039593848s Mar 11 14:15:15.207: INFO: Pod "pod-subpath-test-configmap-lsk7": Phase="Running", Reason="", readiness=true. Elapsed: 20.043586306s Mar 11 14:15:17.211: INFO: Pod "pod-subpath-test-configmap-lsk7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.047326345s STEP: Saw pod success Mar 11 14:15:17.211: INFO: Pod "pod-subpath-test-configmap-lsk7" satisfied condition "success or failure" Mar 11 14:15:17.214: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-configmap-lsk7 container test-container-subpath-configmap-lsk7: STEP: delete the pod Mar 11 14:15:17.233: INFO: Waiting for pod pod-subpath-test-configmap-lsk7 to disappear Mar 11 14:15:17.243: INFO: Pod pod-subpath-test-configmap-lsk7 no longer exists STEP: Deleting pod pod-subpath-test-configmap-lsk7 Mar 11 14:15:17.243: INFO: Deleting pod "pod-subpath-test-configmap-lsk7" in namespace "subpath-127" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 14:15:17.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-127" for this suite. Mar 11 14:15:23.282: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 14:15:23.344: INFO: namespace subpath-127 deletion completed in 6.094127348s • [SLOW TEST:28.243 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 14:15:23.344: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Mar 11 14:15:47.417: INFO: Container started at 2020-03-11 14:15:24 +0000 UTC, pod became ready at 2020-03-11 14:15:46 +0000 UTC [AfterEach] [k8s.io] Probing container 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 14:15:47.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9364" for this suite. Mar 11 14:16:09.436: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 14:16:09.512: INFO: namespace container-probe-9364 deletion completed in 22.090857948s • [SLOW TEST:46.168 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 14:16:09.513: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 14:16:11.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-8543" for this suite. 
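For the hostAliases test above: entries under spec.hostAliases are written by the kubelet into the pod's /etc/hosts, which the test then reads back. A minimal sketch (name, IP, hostnames, and image are illustrative, not taken from the log):

apiVersion: v1
kind: Pod
metadata:
  name: busybox-host-aliases  # illustrative name
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "123.45.67.89"        # illustrative entry; becomes a line in /etc/hosts
    hostnames:
    - "foo.local"
    - "bar.local"
  containers:
  - name: busybox-host-aliases
    image: busybox            # assumed image
    command: ["sh", "-c", "cat /etc/hosts"]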
Mar 11 14:16:55.664: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 14:16:55.740: INFO: namespace kubelet-test-8543 deletion completed in 44.10745009s • [SLOW TEST:46.227 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 14:16:55.740: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 11 14:16:55.794: INFO: Waiting up to 5m0s for pod "downwardapi-volume-471d3983-a970-437e-84da-ef5aba5532eb" in namespace "downward-api-499" to be "success or failure" Mar 11 14:16:55.841: INFO: Pod "downwardapi-volume-471d3983-a970-437e-84da-ef5aba5532eb": Phase="Pending", Reason="", readiness=false. Elapsed: 46.902686ms Mar 11 14:16:57.845: INFO: Pod "downwardapi-volume-471d3983-a970-437e-84da-ef5aba5532eb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.050505172s STEP: Saw pod success Mar 11 14:16:57.845: INFO: Pod "downwardapi-volume-471d3983-a970-437e-84da-ef5aba5532eb" satisfied condition "success or failure" Mar 11 14:16:57.848: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-471d3983-a970-437e-84da-ef5aba5532eb container client-container: STEP: delete the pod Mar 11 14:16:57.865: INFO: Waiting for pod downwardapi-volume-471d3983-a970-437e-84da-ef5aba5532eb to disappear Mar 11 14:16:57.870: INFO: Pod downwardapi-volume-471d3983-a970-437e-84da-ef5aba5532eb no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 14:16:57.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-499" for this suite. 
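The downward API volume plugin exercised above projects a container's own resource request into a mounted file, which the client-container then prints. A minimal sketch of such a pod (name, file path, request value, and image are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example    # illustrative name
spec:
  containers:
  - name: client-container
    image: busybox                    # assumed image
    command: ["sh", "-c", "cat /etc/podinfo/mem_request"]
    resources:
      requests:
        memory: 32Mi                  # illustrative request; this is the value the file exposes
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: mem_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.memory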
Mar 11 14:17:03.885: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 14:17:03.964: INFO: namespace downward-api-499 deletion completed in 6.090304582s • [SLOW TEST:8.224 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 14:17:03.965: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Mar 11 14:17:06.613: INFO: Successfully updated pod "annotationupdate41475881-eacf-4675-a3a6-77a74ba7ef10" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 14:17:10.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9474" for this suite. 
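The "Successfully updated pod" step above works because a downwardAPI projection of metadata.annotations is kept in sync by the kubelet: after the pod's annotations are patched, the mounted file is eventually rewritten, and the test watches the container's output until the new value appears. A sketch of the volume involved (names, annotation, image, and command are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-example  # illustrative name
  annotations:
    builder: foo                  # illustrative initial annotation, later updated by the test
spec:
  containers:
  - name: client-container
    image: busybox                # assumed image
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: annotations
            fieldRef:
              fieldPath: metadata.annotations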
Mar 11 14:17:32.667: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 14:17:32.744: INFO: namespace projected-9474 deletion completed in 22.092458917s • [SLOW TEST:28.779 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 14:17:32.744: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test use defaults Mar 11 14:17:32.775: INFO: Waiting up to 5m0s for pod "client-containers-1b803ae9-2a34-4602-a466-de4dc7db8fe5" in namespace "containers-6086" to be "success or failure" Mar 11 14:17:32.792: INFO: Pod "client-containers-1b803ae9-2a34-4602-a466-de4dc7db8fe5": Phase="Pending", Reason="", readiness=false. Elapsed: 17.568105ms Mar 11 14:17:34.795: INFO: Pod "client-containers-1b803ae9-2a34-4602-a466-de4dc7db8fe5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.020470144s STEP: Saw pod success Mar 11 14:17:34.795: INFO: Pod "client-containers-1b803ae9-2a34-4602-a466-de4dc7db8fe5" satisfied condition "success or failure" Mar 11 14:17:34.797: INFO: Trying to get logs from node iruya-worker pod client-containers-1b803ae9-2a34-4602-a466-de4dc7db8fe5 container test-container: STEP: delete the pod Mar 11 14:17:34.811: INFO: Waiting for pod client-containers-1b803ae9-2a34-4602-a466-de4dc7db8fe5 to disappear Mar 11 14:17:34.847: INFO: Pod client-containers-1b803ae9-2a34-4602-a466-de4dc7db8fe5 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 14:17:34.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-6086" for this suite. 
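This test is the converse of the override test earlier: command and args are both left blank, so the container must run the image's own ENTRYPOINT and CMD unchanged. The spec reduces to (name and image assumed):

apiVersion: v1
kind: Pod
metadata:
  name: client-containers-defaults  # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                  # assumed image
    # no command and no args: the image defaults apply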
Mar 11 14:17:40.862: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 14:17:40.969: INFO: namespace containers-6086 deletion completed in 6.11956728s • [SLOW TEST:8.225 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 14:17:40.970: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Mar 11 14:17:41.021: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 14:17:45.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-3082" for this suite. 
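The "PodSpec: initContainers in spec.initContainers" line above refers to init containers, which run one at a time to completion before the regular containers start; on a RestartNever pod each must succeed exactly once. A minimal sketch (names, images, and commands are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: pod-init-example   # illustrative name
spec:
  restartPolicy: Never
  initContainers:          # run sequentially, each to completion, before "containers"
  - name: init1
    image: busybox         # assumed image
    command: ["true"]
  - name: init2
    image: busybox
    command: ["true"]
  containers:
  - name: run1
    image: busybox
    command: ["true"]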
Mar 11 14:17:51.371: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 14:17:51.473: INFO: namespace init-container-3082 deletion completed in 6.116960347s • [SLOW TEST:10.503 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] Probing container should *not* be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 14:17:51.473: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-cf2db0e6-21b0-475e-b4bc-0d6e5680c8e5 in namespace container-probe-6101 Mar 11 14:17:53.521: INFO: Started pod busybox-cf2db0e6-21b0-475e-b4bc-0d6e5680c8e5 in namespace container-probe-6101 STEP: checking the pod's current state and verifying that restartCount is present Mar 11 14:17:53.524: INFO: Initial restart count of pod busybox-cf2db0e6-21b0-475e-b4bc-0d6e5680c8e5 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 14:21:54.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6101" for this suite.
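The probe in this test's name execs cat /tmp/health inside the container; as long as the file exists the probe passes, so restartCount must stay at 0 for the roughly four minutes the test watches. A sketch of such a pod (name, image, and timings are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: busybox-liveness    # illustrative name
spec:
  containers:
  - name: busybox
    image: busybox          # assumed image
    command: ["sh", "-c", "touch /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 15   # illustrative timings
      periodSeconds: 5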
Mar 11 14:22:00.284: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 14:22:00.352: INFO: namespace container-probe-6101 deletion completed in 6.104124253s • [SLOW TEST:248.879 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 14:22:00.353: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-fae7e821-a05d-468f-876d-271d19b7ca04 STEP: Creating a pod to test consume configMaps Mar 11 14:22:00.473: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d08d4a3f-1f85-4a97-bf95-47ed74323de8" in namespace "projected-7389" to be "success or failure" Mar 11 14:22:00.479: INFO: Pod "pod-projected-configmaps-d08d4a3f-1f85-4a97-bf95-47ed74323de8": Phase="Pending", Reason="", readiness=false. Elapsed: 5.728834ms Mar 11 14:22:02.482: INFO: Pod "pod-projected-configmaps-d08d4a3f-1f85-4a97-bf95-47ed74323de8": Phase="Running", Reason="", readiness=true. Elapsed: 2.009528512s Mar 11 14:22:04.486: INFO: Pod "pod-projected-configmaps-d08d4a3f-1f85-4a97-bf95-47ed74323de8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013265976s STEP: Saw pod success Mar 11 14:22:04.486: INFO: Pod "pod-projected-configmaps-d08d4a3f-1f85-4a97-bf95-47ed74323de8" satisfied condition "success or failure" Mar 11 14:22:04.489: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-d08d4a3f-1f85-4a97-bf95-47ed74323de8 container projected-configmap-volume-test: STEP: delete the pod Mar 11 14:22:04.522: INFO: Waiting for pod pod-projected-configmaps-d08d4a3f-1f85-4a97-bf95-47ed74323de8 to disappear Mar 11 14:22:04.540: INFO: Pod pod-projected-configmaps-d08d4a3f-1f85-4a97-bf95-47ed74323de8 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 14:22:04.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7389" for this suite.
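"Mappings" in this test's name means the volume's items list remaps a ConfigMap key to a chosen file path, and "Item mode" sets per-file permissions on that item. A sketch using the ConfigMap name from the log (the pod name, key, path, mode, and image are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example  # illustrative name
spec:
  containers:
  - name: projected-configmap-volume-test
    image: busybox                        # assumed image
    command: ["sh", "-c", "cat /etc/projected-configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-map-fae7e821-a05d-468f-876d-271d19b7ca04
          items:
          - key: data-2                   # assumed key: the "mapping" from key to path
            path: path/to/data-2
            mode: 0400                    # the per-item "Item mode"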
Mar 11 14:22:10.560: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 14:22:10.632: INFO: namespace projected-7389 deletion completed in 6.088509084s • [SLOW TEST:10.280 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 14:22:10.633: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-bdc075af-d7a3-47be-92da-81f114bb1cc2 STEP: Creating a pod to test consume configMaps Mar 11 14:22:10.724: INFO: Waiting up to 5m0s for pod "pod-configmaps-504cc1ac-446f-4566-b441-ec0ac9954285" in namespace "configmap-1268" to be "success or failure" Mar 11 14:22:10.743: INFO: Pod "pod-configmaps-504cc1ac-446f-4566-b441-ec0ac9954285": Phase="Pending", Reason="", readiness=false. Elapsed: 18.348726ms Mar 11 14:22:12.746: INFO: Pod "pod-configmaps-504cc1ac-446f-4566-b441-ec0ac9954285": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.021457814s STEP: Saw pod success Mar 11 14:22:12.746: INFO: Pod "pod-configmaps-504cc1ac-446f-4566-b441-ec0ac9954285" satisfied condition "success or failure" Mar 11 14:22:12.748: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-504cc1ac-446f-4566-b441-ec0ac9954285 container configmap-volume-test: STEP: delete the pod Mar 11 14:22:12.775: INFO: Waiting for pod pod-configmaps-504cc1ac-446f-4566-b441-ec0ac9954285 to disappear Mar 11 14:22:12.784: INFO: Pod pod-configmaps-504cc1ac-446f-4566-b441-ec0ac9954285 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 14:22:12.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1268" for this suite. 
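The "as non-root" variant runs the consuming container under a non-zero UID, so it also checks that the projected files' default mode leaves them readable to that user. A sketch using the ConfigMap name from the log (the pod name, UID, key, and image are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-nonroot  # illustrative name
spec:
  securityContext:
    runAsUser: 1000             # assumed non-root UID
  containers:
  - name: configmap-volume-test
    image: busybox              # assumed image
    command: ["sh", "-c", "cat /etc/configmap-volume/data-1"]  # assumed key
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-bdc075af-d7a3-47be-92da-81f114bb1cc2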
Mar 11 14:22:18.794: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 14:22:18.870: INFO: namespace configmap-1268 deletion completed in 6.082984241s • [SLOW TEST:8.237 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 14:22:18.871: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-9997d6e3-00dd-4e1b-969e-71c2893fc8c6 STEP: Creating a pod to test consume configMaps Mar 11 14:22:18.930: INFO: Waiting up to 5m0s for pod "pod-configmaps-586854b2-5249-4718-981f-06cfe835cc30" in namespace "configmap-4762" to be "success or failure" Mar 11 14:22:18.947: INFO: Pod "pod-configmaps-586854b2-5249-4718-981f-06cfe835cc30": Phase="Pending", Reason="", readiness=false. Elapsed: 17.529061ms Mar 11 14:22:20.951: INFO: Pod "pod-configmaps-586854b2-5249-4718-981f-06cfe835cc30": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.021455345s STEP: Saw pod success Mar 11 14:22:20.951: INFO: Pod "pod-configmaps-586854b2-5249-4718-981f-06cfe835cc30" satisfied condition "success or failure" Mar 11 14:22:20.954: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-586854b2-5249-4718-981f-06cfe835cc30 container configmap-volume-test: STEP: delete the pod Mar 11 14:22:20.979: INFO: Waiting for pod pod-configmaps-586854b2-5249-4718-981f-06cfe835cc30 to disappear Mar 11 14:22:20.988: INFO: Pod pod-configmaps-586854b2-5249-4718-981f-06cfe835cc30 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 14:22:20.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4762" for this suite. 
Mar 11 14:22:27.015: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 14:22:27.095: INFO: namespace configmap-4762 deletion completed in 6.102969341s • [SLOW TEST:8.224 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 14:22:27.095: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating replication controller svc-latency-rc in namespace svc-latency-4087 I0311 14:22:27.144494 6 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-4087, replica count: 1 I0311 14:22:28.194850 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0311 14:22:29.195098 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 11 14:22:29.322: INFO: Created: latency-svc-4n2zt Mar 11 14:22:29.330: INFO: Got endpoints: latency-svc-4n2zt [35.086929ms] Mar 11 14:22:29.364: INFO: Created: latency-svc-qqq2l Mar 11 14:22:29.366: INFO: Got endpoints: latency-svc-qqq2l [36.245136ms] Mar 11 14:22:29.415: INFO: Created: latency-svc-nkwv8 Mar 11 14:22:29.451: INFO: Created: latency-svc-qw9pf Mar 11 14:22:29.451: INFO: Got endpoints: latency-svc-nkwv8 [121.142554ms] Mar 11 14:22:29.463: INFO: Got endpoints: latency-svc-qw9pf [131.947466ms] Mar 11 14:22:29.495: INFO: Created: latency-svc-mqv7d Mar 11 14:22:29.499: INFO: Got endpoints: latency-svc-mqv7d [169.292463ms] Mar 11 14:22:29.552: INFO: Created: latency-svc-nsz2s Mar 11 14:22:29.559: INFO: Got endpoints: latency-svc-nsz2s [228.588244ms] Mar 11 14:22:29.588: INFO: Created: latency-svc-f9bhk Mar 11 14:22:29.589: INFO: Got endpoints: latency-svc-f9bhk [258.127774ms] Mar 11 14:22:29.613: INFO: Created: latency-svc-6qqdq Mar 11 14:22:29.630: INFO: Got endpoints: latency-svc-6qqdq [299.691888ms] Mar 11 14:22:29.649: INFO: Created: latency-svc-q68lh Mar 11 14:22:29.684: INFO: Got endpoints: latency-svc-q68lh [353.77922ms] Mar 11 14:22:29.686: INFO: Created: latency-svc-7dn8l Mar 11 14:22:29.691: INFO: Got endpoints: latency-svc-7dn8l [360.8033ms] Mar 11 14:22:29.717: INFO: Created: latency-svc-h86dc Mar 11 14:22:29.723: INFO: Got endpoints: latency-svc-h86dc [393.042781ms] Mar 11 14:22:29.741: INFO: Created: latency-svc-jf8vp Mar 11 14:22:29.747: INFO: Got endpoints: latency-svc-jf8vp [416.162863ms] Mar 11 14:22:29.765: INFO: Created: latency-svc-tds26 Mar 11 14:22:29.771: INFO: Got endpoints: 
latency-svc-tds26 [440.406661ms] Mar 11 14:22:29.815: INFO: Created: latency-svc-2ftq4 Mar 11 14:22:29.818: INFO: Got endpoints: latency-svc-2ftq4 [487.159749ms] Mar 11 14:22:29.847: INFO: Created: latency-svc-2fnwq Mar 11 14:22:29.864: INFO: Got endpoints: latency-svc-2fnwq [534.065688ms] Mar 11 14:22:29.889: INFO: Created: latency-svc-rr5gw Mar 11 14:22:29.909: INFO: Created: latency-svc-nqs7l Mar 11 14:22:29.909: INFO: Got endpoints: latency-svc-rr5gw [578.795236ms] Mar 11 14:22:29.965: INFO: Got endpoints: latency-svc-nqs7l [598.768392ms] Mar 11 14:22:29.966: INFO: Created: latency-svc-ljfh7 Mar 11 14:22:29.971: INFO: Got endpoints: latency-svc-ljfh7 [519.809897ms] Mar 11 14:22:30.001: INFO: Created: latency-svc-sbvvk Mar 11 14:22:30.006: INFO: Got endpoints: latency-svc-sbvvk [543.888503ms] Mar 11 14:22:30.027: INFO: Created: latency-svc-lxj4s Mar 11 14:22:30.032: INFO: Got endpoints: latency-svc-lxj4s [532.415525ms] Mar 11 14:22:30.051: INFO: Created: latency-svc-4x9h7 Mar 11 14:22:30.055: INFO: Got endpoints: latency-svc-4x9h7 [496.016581ms] Mar 11 14:22:30.105: INFO: Created: latency-svc-m7r2z Mar 11 14:22:30.111: INFO: Got endpoints: latency-svc-m7r2z [522.151914ms] Mar 11 14:22:30.155: INFO: Created: latency-svc-z9h7p Mar 11 14:22:30.164: INFO: Got endpoints: latency-svc-z9h7p [533.397084ms] Mar 11 14:22:30.189: INFO: Created: latency-svc-n84m9 Mar 11 14:22:30.194: INFO: Got endpoints: latency-svc-n84m9 [510.251801ms] Mar 11 14:22:30.249: INFO: Created: latency-svc-c4p79 Mar 11 14:22:30.266: INFO: Got endpoints: latency-svc-c4p79 [574.773441ms] Mar 11 14:22:30.300: INFO: Created: latency-svc-q7xwj Mar 11 14:22:30.308: INFO: Got endpoints: latency-svc-q7xwj [584.699574ms] Mar 11 14:22:30.330: INFO: Created: latency-svc-64xp5 Mar 11 14:22:30.391: INFO: Got endpoints: latency-svc-64xp5 [643.834686ms] Mar 11 14:22:30.417: INFO: Created: latency-svc-7r4cc Mar 11 14:22:30.443: INFO: Got endpoints: latency-svc-7r4cc [672.033466ms] Mar 11 14:22:30.474: INFO: Created: latency-svc-9qwmn Mar 11 14:22:30.477: INFO: Got endpoints: latency-svc-9qwmn [659.448584ms] Mar 11 14:22:30.552: INFO: Created: latency-svc-p4qp8 Mar 11 14:22:30.555: INFO: Got endpoints: latency-svc-p4qp8 [690.687423ms] Mar 11 14:22:30.579: INFO: Created: latency-svc-2hjkd Mar 11 14:22:30.582: INFO: Got endpoints: latency-svc-2hjkd [673.267266ms] Mar 11 14:22:30.604: INFO: Created: latency-svc-x8fxc Mar 11 14:22:30.606: INFO: Got endpoints: latency-svc-x8fxc [641.146785ms] Mar 11 14:22:30.641: INFO: Created: latency-svc-fqqx2 Mar 11 14:22:30.647: INFO: Got endpoints: latency-svc-fqqx2 [675.641144ms] Mar 11 14:22:30.720: INFO: Created: latency-svc-tmjfs Mar 11 14:22:30.747: INFO: Got endpoints: latency-svc-tmjfs [740.594735ms] Mar 11 14:22:30.747: INFO: Created: latency-svc-zkvnh Mar 11 14:22:30.755: INFO: Got endpoints: latency-svc-zkvnh [723.216783ms] Mar 11 14:22:30.778: INFO: Created: latency-svc-jkdhg Mar 11 14:22:30.785: INFO: Got endpoints: latency-svc-jkdhg [730.124746ms] Mar 11 14:22:30.803: INFO: Created: latency-svc-cb45t Mar 11 14:22:30.810: INFO: Got endpoints: latency-svc-cb45t [698.775986ms] Mar 11 14:22:30.875: INFO: Created: latency-svc-k8v25 Mar 11 14:22:30.878: INFO: Got endpoints: latency-svc-k8v25 [714.580968ms] Mar 11 14:22:30.900: INFO: Created: latency-svc-lfrg5 Mar 11 14:22:30.907: INFO: Got endpoints: latency-svc-lfrg5 [712.211445ms] Mar 11 14:22:30.928: INFO: Created: latency-svc-4zpgk Mar 11 14:22:30.937: INFO: Got endpoints: latency-svc-4zpgk [670.744125ms] Mar 11 14:22:30.958: INFO: Created: 
latency-svc-chvsk Mar 11 14:22:30.961: INFO: Got endpoints: latency-svc-chvsk [652.348777ms] Mar 11 14:22:31.038: INFO: Created: latency-svc-mspph Mar 11 14:22:31.061: INFO: Created: latency-svc-4w2gk Mar 11 14:22:31.061: INFO: Got endpoints: latency-svc-mspph [670.934039ms] Mar 11 14:22:31.069: INFO: Got endpoints: latency-svc-4w2gk [626.020006ms] Mar 11 14:22:31.103: INFO: Created: latency-svc-vtf6b Mar 11 14:22:31.112: INFO: Got endpoints: latency-svc-vtf6b [634.309761ms] Mar 11 14:22:31.131: INFO: Created: latency-svc-9tmh4 Mar 11 14:22:31.193: INFO: Got endpoints: latency-svc-9tmh4 [638.097261ms] Mar 11 14:22:31.194: INFO: Created: latency-svc-2v4f7 Mar 11 14:22:31.202: INFO: Got endpoints: latency-svc-2v4f7 [619.25171ms] Mar 11 14:22:31.248: INFO: Created: latency-svc-wzrzg Mar 11 14:22:31.256: INFO: Got endpoints: latency-svc-wzrzg [649.637618ms] Mar 11 14:22:31.290: INFO: Created: latency-svc-74cgg Mar 11 14:22:31.349: INFO: Got endpoints: latency-svc-74cgg [702.035219ms] Mar 11 14:22:31.350: INFO: Created: latency-svc-d66q4 Mar 11 14:22:31.360: INFO: Got endpoints: latency-svc-d66q4 [103.323118ms] Mar 11 14:22:31.380: INFO: Created: latency-svc-rndcs Mar 11 14:22:31.390: INFO: Got endpoints: latency-svc-rndcs [642.692589ms] Mar 11 14:22:31.416: INFO: Created: latency-svc-zhtz5 Mar 11 14:22:31.439: INFO: Got endpoints: latency-svc-zhtz5 [684.24468ms] Mar 11 14:22:31.504: INFO: Created: latency-svc-hnmfr Mar 11 14:22:31.506: INFO: Got endpoints: latency-svc-hnmfr [721.108333ms] Mar 11 14:22:31.548: INFO: Created: latency-svc-mf667 Mar 11 14:22:31.552: INFO: Got endpoints: latency-svc-mf667 [741.984598ms] Mar 11 14:22:31.578: INFO: Created: latency-svc-l2vvv Mar 11 14:22:31.582: INFO: Got endpoints: latency-svc-l2vvv [703.813017ms] Mar 11 14:22:31.602: INFO: Created: latency-svc-vqlgz Mar 11 14:22:31.654: INFO: Got endpoints: latency-svc-vqlgz [747.008723ms] Mar 11 14:22:31.655: INFO: Created: latency-svc-8w5pd Mar 11 14:22:31.661: INFO: Got endpoints: latency-svc-8w5pd [724.225352ms] Mar 11 14:22:31.683: INFO: Created: latency-svc-8wpvr Mar 11 14:22:31.710: INFO: Created: latency-svc-cvcln Mar 11 14:22:31.710: INFO: Got endpoints: latency-svc-8wpvr [749.51224ms] Mar 11 14:22:31.728: INFO: Got endpoints: latency-svc-cvcln [666.037349ms] Mar 11 14:22:31.746: INFO: Created: latency-svc-nrjf4 Mar 11 14:22:31.754: INFO: Got endpoints: latency-svc-nrjf4 [684.86052ms] Mar 11 14:22:31.823: INFO: Created: latency-svc-h2qrs Mar 11 14:22:31.830: INFO: Got endpoints: latency-svc-h2qrs [718.514516ms] Mar 11 14:22:31.857: INFO: Created: latency-svc-59lzc Mar 11 14:22:31.860: INFO: Got endpoints: latency-svc-59lzc [666.47501ms] Mar 11 14:22:31.890: INFO: Created: latency-svc-8zp4p Mar 11 14:22:31.896: INFO: Got endpoints: latency-svc-8zp4p [694.519084ms] Mar 11 14:22:31.979: INFO: Created: latency-svc-wk8r9 Mar 11 14:22:31.980: INFO: Got endpoints: latency-svc-wk8r9 [631.425318ms] Mar 11 14:22:32.009: INFO: Created: latency-svc-zhwtd Mar 11 14:22:32.035: INFO: Got endpoints: latency-svc-zhwtd [675.044223ms] Mar 11 14:22:32.035: INFO: Created: latency-svc-nvscx Mar 11 14:22:32.052: INFO: Got endpoints: latency-svc-nvscx [662.061779ms] Mar 11 14:22:32.076: INFO: Created: latency-svc-kbqj7 Mar 11 14:22:32.127: INFO: Got endpoints: latency-svc-kbqj7 [687.311688ms] Mar 11 14:22:32.139: INFO: Created: latency-svc-pdg2j Mar 11 14:22:32.143: INFO: Got endpoints: latency-svc-pdg2j [636.967422ms] Mar 11 14:22:32.169: INFO: Created: latency-svc-4czwc Mar 11 14:22:32.174: INFO: Got endpoints: 
latency-svc-4czwc [622.145812ms] Mar 11 14:22:32.196: INFO: Created: latency-svc-94l2w Mar 11 14:22:32.204: INFO: Got endpoints: latency-svc-94l2w [621.876189ms] Mar 11 14:22:32.226: INFO: Created: latency-svc-mzqlr Mar 11 14:22:32.283: INFO: Got endpoints: latency-svc-mzqlr [628.759104ms] Mar 11 14:22:32.284: INFO: Created: latency-svc-g5crc Mar 11 14:22:32.289: INFO: Got endpoints: latency-svc-g5crc [628.134525ms] Mar 11 14:22:32.314: INFO: Created: latency-svc-bxgnd Mar 11 14:22:32.332: INFO: Got endpoints: latency-svc-bxgnd [621.471384ms] Mar 11 14:22:32.364: INFO: Created: latency-svc-bhdqk Mar 11 14:22:32.369: INFO: Got endpoints: latency-svc-bhdqk [641.237149ms] Mar 11 14:22:32.433: INFO: Created: latency-svc-tkd74 Mar 11 14:22:32.440: INFO: Got endpoints: latency-svc-tkd74 [685.824617ms] Mar 11 14:22:32.464: INFO: Created: latency-svc-xr44b Mar 11 14:22:32.470: INFO: Got endpoints: latency-svc-xr44b [640.112599ms] Mar 11 14:22:32.494: INFO: Created: latency-svc-pmq6m Mar 11 14:22:32.496: INFO: Got endpoints: latency-svc-pmq6m [635.934292ms] Mar 11 14:22:32.518: INFO: Created: latency-svc-d6chd Mar 11 14:22:32.520: INFO: Got endpoints: latency-svc-d6chd [623.345092ms] Mar 11 14:22:32.576: INFO: Created: latency-svc-hpsjt Mar 11 14:22:32.598: INFO: Got endpoints: latency-svc-hpsjt [617.879672ms] Mar 11 14:22:32.598: INFO: Created: latency-svc-wdct7 Mar 11 14:22:32.603: INFO: Got endpoints: latency-svc-wdct7 [568.531101ms] Mar 11 14:22:32.622: INFO: Created: latency-svc-56rrw Mar 11 14:22:32.627: INFO: Got endpoints: latency-svc-56rrw [575.248685ms] Mar 11 14:22:32.650: INFO: Created: latency-svc-phnrj Mar 11 14:22:32.658: INFO: Got endpoints: latency-svc-phnrj [531.057089ms] Mar 11 14:22:32.708: INFO: Created: latency-svc-gkxqj Mar 11 14:22:32.730: INFO: Got endpoints: latency-svc-gkxqj [586.572701ms] Mar 11 14:22:32.731: INFO: Created: latency-svc-cbl7p Mar 11 14:22:32.742: INFO: Got endpoints: latency-svc-cbl7p [567.988009ms] Mar 11 14:22:32.766: INFO: Created: latency-svc-kspjg Mar 11 14:22:32.774: INFO: Got endpoints: latency-svc-kspjg [569.893185ms] Mar 11 14:22:32.790: INFO: Created: latency-svc-55ts4 Mar 11 14:22:32.797: INFO: Got endpoints: latency-svc-55ts4 [514.143414ms] Mar 11 14:22:32.851: INFO: Created: latency-svc-7jkxv Mar 11 14:22:32.879: INFO: Got endpoints: latency-svc-7jkxv [589.256313ms] Mar 11 14:22:32.880: INFO: Created: latency-svc-bzdfm Mar 11 14:22:32.881: INFO: Got endpoints: latency-svc-bzdfm [549.60504ms] Mar 11 14:22:32.910: INFO: Created: latency-svc-8pcn2 Mar 11 14:22:32.918: INFO: Got endpoints: latency-svc-8pcn2 [549.138644ms] Mar 11 14:22:32.940: INFO: Created: latency-svc-tf9wg Mar 11 14:22:32.948: INFO: Got endpoints: latency-svc-tf9wg [508.177765ms] Mar 11 14:22:33.007: INFO: Created: latency-svc-9lt62 Mar 11 14:22:33.028: INFO: Got endpoints: latency-svc-9lt62 [557.28164ms] Mar 11 14:22:33.046: INFO: Created: latency-svc-dj899 Mar 11 14:22:33.051: INFO: Got endpoints: latency-svc-dj899 [554.911617ms] Mar 11 14:22:33.069: INFO: Created: latency-svc-2s2qc Mar 11 14:22:33.075: INFO: Got endpoints: latency-svc-2s2qc [555.085696ms] Mar 11 14:22:33.151: INFO: Created: latency-svc-zpqzz Mar 11 14:22:33.181: INFO: Created: latency-svc-jwhkh Mar 11 14:22:33.183: INFO: Got endpoints: latency-svc-zpqzz [584.572081ms] Mar 11 14:22:33.189: INFO: Got endpoints: latency-svc-jwhkh [586.154482ms] Mar 11 14:22:33.208: INFO: Created: latency-svc-sr2lg Mar 11 14:22:33.214: INFO: Got endpoints: latency-svc-sr2lg [586.225508ms] Mar 11 14:22:33.232: INFO: Created: 
latency-svc-4v9xm Mar 11 14:22:33.250: INFO: Got endpoints: latency-svc-4v9xm [592.260002ms] Mar 11 14:22:33.251: INFO: Created: latency-svc-95rlk Mar 11 14:22:33.300: INFO: Got endpoints: latency-svc-95rlk [570.442673ms] Mar 11 14:22:33.302: INFO: Created: latency-svc-tr4xl Mar 11 14:22:33.305: INFO: Got endpoints: latency-svc-tr4xl [562.4648ms] Mar 11 14:22:33.325: INFO: Created: latency-svc-qbr64 Mar 11 14:22:33.329: INFO: Got endpoints: latency-svc-qbr64 [554.806188ms] Mar 11 14:22:33.347: INFO: Created: latency-svc-2zh84 Mar 11 14:22:33.353: INFO: Got endpoints: latency-svc-2zh84 [556.467815ms] Mar 11 14:22:33.377: INFO: Created: latency-svc-cbfsq Mar 11 14:22:33.383: INFO: Got endpoints: latency-svc-cbfsq [504.777811ms] Mar 11 14:22:33.452: INFO: Created: latency-svc-8fmqk Mar 11 14:22:33.475: INFO: Created: latency-svc-r4sc7 Mar 11 14:22:33.475: INFO: Got endpoints: latency-svc-8fmqk [593.420185ms] Mar 11 14:22:33.481: INFO: Got endpoints: latency-svc-r4sc7 [562.529353ms] Mar 11 14:22:33.510: INFO: Created: latency-svc-nv677 Mar 11 14:22:33.516: INFO: Got endpoints: latency-svc-nv677 [568.289887ms] Mar 11 14:22:33.552: INFO: Created: latency-svc-4sbtz Mar 11 14:22:33.606: INFO: Got endpoints: latency-svc-4sbtz [578.335769ms] Mar 11 14:22:33.607: INFO: Created: latency-svc-nklqn Mar 11 14:22:33.614: INFO: Got endpoints: latency-svc-nklqn [562.960355ms] Mar 11 14:22:33.636: INFO: Created: latency-svc-9r8zq Mar 11 14:22:33.660: INFO: Got endpoints: latency-svc-9r8zq [585.235661ms] Mar 11 14:22:33.678: INFO: Created: latency-svc-4f4nx Mar 11 14:22:33.700: INFO: Got endpoints: latency-svc-4f4nx [517.146574ms] Mar 11 14:22:33.744: INFO: Created: latency-svc-4hz2p Mar 11 14:22:33.760: INFO: Got endpoints: latency-svc-4hz2p [570.717864ms] Mar 11 14:22:33.778: INFO: Created: latency-svc-krcmg Mar 11 14:22:33.798: INFO: Got endpoints: latency-svc-krcmg [584.336392ms] Mar 11 14:22:33.822: INFO: Created: latency-svc-lhxhs Mar 11 14:22:33.893: INFO: Got endpoints: latency-svc-lhxhs [643.191763ms] Mar 11 14:22:33.896: INFO: Created: latency-svc-jc6hf Mar 11 14:22:33.904: INFO: Got endpoints: latency-svc-jc6hf [603.344866ms] Mar 11 14:22:33.922: INFO: Created: latency-svc-hvf5d Mar 11 14:22:33.928: INFO: Got endpoints: latency-svc-hvf5d [623.613638ms] Mar 11 14:22:33.972: INFO: Created: latency-svc-7znwh Mar 11 14:22:33.976: INFO: Got endpoints: latency-svc-7znwh [647.300136ms] Mar 11 14:22:34.044: INFO: Created: latency-svc-4zfv4 Mar 11 14:22:34.046: INFO: Got endpoints: latency-svc-4zfv4 [692.809191ms] Mar 11 14:22:34.072: INFO: Created: latency-svc-jmj6g Mar 11 14:22:34.079: INFO: Got endpoints: latency-svc-jmj6g [695.532958ms] Mar 11 14:22:34.097: INFO: Created: latency-svc-hcfv4 Mar 11 14:22:34.103: INFO: Got endpoints: latency-svc-hcfv4 [628.307254ms] Mar 11 14:22:34.120: INFO: Created: latency-svc-glf6l Mar 11 14:22:34.127: INFO: Got endpoints: latency-svc-glf6l [646.796711ms] Mar 11 14:22:34.187: INFO: Created: latency-svc-fgtvd Mar 11 14:22:34.213: INFO: Got endpoints: latency-svc-fgtvd [696.791363ms] Mar 11 14:22:34.214: INFO: Created: latency-svc-jkqlk Mar 11 14:22:34.218: INFO: Got endpoints: latency-svc-jkqlk [612.368604ms] Mar 11 14:22:34.240: INFO: Created: latency-svc-ffthj Mar 11 14:22:34.248: INFO: Got endpoints: latency-svc-ffthj [634.369441ms] Mar 11 14:22:34.282: INFO: Created: latency-svc-tqrjv Mar 11 14:22:34.349: INFO: Got endpoints: latency-svc-tqrjv [689.090633ms] Mar 11 14:22:34.351: INFO: Created: latency-svc-d47lj Mar 11 14:22:34.363: INFO: Got endpoints: 
latency-svc-d47lj [662.900054ms] Mar 11 14:22:34.393: INFO: Created: latency-svc-vvtvz Mar 11 14:22:34.394: INFO: Got endpoints: latency-svc-vvtvz [634.092853ms] Mar 11 14:22:34.432: INFO: Created: latency-svc-vrgv4 Mar 11 14:22:34.442: INFO: Got endpoints: latency-svc-vrgv4 [643.757488ms] Mar 11 14:22:34.511: INFO: Created: latency-svc-h2j62 Mar 11 14:22:34.530: INFO: Got endpoints: latency-svc-h2j62 [636.728721ms] Mar 11 14:22:34.535: INFO: Created: latency-svc-hlr89 Mar 11 14:22:34.558: INFO: Created: latency-svc-clw7s Mar 11 14:22:34.559: INFO: Got endpoints: latency-svc-hlr89 [655.03957ms] Mar 11 14:22:34.563: INFO: Got endpoints: latency-svc-clw7s [634.170959ms] Mar 11 14:22:34.582: INFO: Created: latency-svc-gbk9l Mar 11 14:22:34.587: INFO: Got endpoints: latency-svc-gbk9l [610.436818ms] Mar 11 14:22:34.607: INFO: Created: latency-svc-xvqrq Mar 11 14:22:34.642: INFO: Got endpoints: latency-svc-xvqrq [595.570499ms] Mar 11 14:22:34.657: INFO: Created: latency-svc-h6k9p Mar 11 14:22:34.666: INFO: Got endpoints: latency-svc-h6k9p [586.555335ms] Mar 11 14:22:34.700: INFO: Created: latency-svc-xttg4 Mar 11 14:22:34.732: INFO: Got endpoints: latency-svc-xttg4 [629.303987ms] Mar 11 14:22:34.734: INFO: Created: latency-svc-f9rgl Mar 11 14:22:34.779: INFO: Got endpoints: latency-svc-f9rgl [651.814279ms] Mar 11 14:22:34.801: INFO: Created: latency-svc-4fddn Mar 11 14:22:34.804: INFO: Got endpoints: latency-svc-4fddn [590.722056ms] Mar 11 14:22:34.825: INFO: Created: latency-svc-p5b9t Mar 11 14:22:34.848: INFO: Got endpoints: latency-svc-p5b9t [629.928213ms] Mar 11 14:22:34.877: INFO: Created: latency-svc-7thp6 Mar 11 14:22:34.929: INFO: Got endpoints: latency-svc-7thp6 [680.968436ms] Mar 11 14:22:34.931: INFO: Created: latency-svc-tr4f7 Mar 11 14:22:34.937: INFO: Got endpoints: latency-svc-tr4f7 [587.777446ms] Mar 11 14:22:34.957: INFO: Created: latency-svc-8lv9n Mar 11 14:22:34.962: INFO: Got endpoints: latency-svc-8lv9n [598.668226ms] Mar 11 14:22:34.981: INFO: Created: latency-svc-gvr8f Mar 11 14:22:34.986: INFO: Got endpoints: latency-svc-gvr8f [591.344977ms] Mar 11 14:22:35.006: INFO: Created: latency-svc-cvqwp Mar 11 14:22:35.010: INFO: Got endpoints: latency-svc-cvqwp [568.137451ms] Mar 11 14:22:35.075: INFO: Created: latency-svc-rgd4f Mar 11 14:22:35.077: INFO: Got endpoints: latency-svc-rgd4f [547.107947ms] Mar 11 14:22:35.105: INFO: Created: latency-svc-c4zq4 Mar 11 14:22:35.113: INFO: Got endpoints: latency-svc-c4zq4 [553.640292ms] Mar 11 14:22:35.135: INFO: Created: latency-svc-xf4rv Mar 11 14:22:35.143: INFO: Got endpoints: latency-svc-xf4rv [580.262473ms] Mar 11 14:22:35.161: INFO: Created: latency-svc-lc4qq Mar 11 14:22:35.167: INFO: Got endpoints: latency-svc-lc4qq [580.300842ms] Mar 11 14:22:35.230: INFO: Created: latency-svc-dxjnx Mar 11 14:22:35.254: INFO: Created: latency-svc-5lp2h Mar 11 14:22:35.255: INFO: Got endpoints: latency-svc-dxjnx [613.154904ms] Mar 11 14:22:35.258: INFO: Got endpoints: latency-svc-5lp2h [592.107551ms] Mar 11 14:22:35.278: INFO: Created: latency-svc-drw9b Mar 11 14:22:35.282: INFO: Got endpoints: latency-svc-drw9b [549.616508ms] Mar 11 14:22:35.303: INFO: Created: latency-svc-b47nw Mar 11 14:22:35.306: INFO: Got endpoints: latency-svc-b47nw [527.035243ms] Mar 11 14:22:35.330: INFO: Created: latency-svc-n9h2x Mar 11 14:22:35.390: INFO: Got endpoints: latency-svc-n9h2x [586.122273ms] Mar 11 14:22:35.392: INFO: Created: latency-svc-p5v87 Mar 11 14:22:35.398: INFO: Got endpoints: latency-svc-p5v87 [549.431757ms] Mar 11 14:22:35.416: INFO: Created: 
latency-svc-8v5cc Mar 11 14:22:35.453: INFO: Created: latency-svc-gjvkr Mar 11 14:22:35.453: INFO: Got endpoints: latency-svc-8v5cc [523.771717ms] Mar 11 14:22:35.473: INFO: Got endpoints: latency-svc-gjvkr [535.894706ms] Mar 11 14:22:35.546: INFO: Created: latency-svc-7ntz4 Mar 11 14:22:35.548: INFO: Got endpoints: latency-svc-7ntz4 [586.177836ms] Mar 11 14:22:35.591: INFO: Created: latency-svc-8ppr7 Mar 11 14:22:35.690: INFO: Got endpoints: latency-svc-8ppr7 [703.838369ms] Mar 11 14:22:35.707: INFO: Created: latency-svc-km2d9 Mar 11 14:22:35.753: INFO: Got endpoints: latency-svc-km2d9 [742.506562ms] Mar 11 14:22:35.773: INFO: Created: latency-svc-dqf79 Mar 11 14:22:35.785: INFO: Got endpoints: latency-svc-dqf79 [707.212605ms] Mar 11 14:22:35.827: INFO: Created: latency-svc-bblrj Mar 11 14:22:35.836: INFO: Got endpoints: latency-svc-bblrj [723.666506ms] Mar 11 14:22:35.858: INFO: Created: latency-svc-vbrh4 Mar 11 14:22:35.861: INFO: Got endpoints: latency-svc-vbrh4 [718.222378ms] Mar 11 14:22:35.888: INFO: Created: latency-svc-8clx2 Mar 11 14:22:35.892: INFO: Got endpoints: latency-svc-8clx2 [724.679835ms] Mar 11 14:22:35.911: INFO: Created: latency-svc-knbzx Mar 11 14:22:35.916: INFO: Got endpoints: latency-svc-knbzx [661.04486ms] Mar 11 14:22:35.971: INFO: Created: latency-svc-tvkp4 Mar 11 14:22:35.976: INFO: Got endpoints: latency-svc-tvkp4 [718.094357ms] Mar 11 14:22:36.002: INFO: Created: latency-svc-fgmbf Mar 11 14:22:36.007: INFO: Got endpoints: latency-svc-fgmbf [724.649589ms] Mar 11 14:22:36.037: INFO: Created: latency-svc-cvk99 Mar 11 14:22:36.044: INFO: Got endpoints: latency-svc-cvk99 [737.366616ms] Mar 11 14:22:36.065: INFO: Created: latency-svc-wp8t7 Mar 11 14:22:36.109: INFO: Got endpoints: latency-svc-wp8t7 [718.408177ms] Mar 11 14:22:36.110: INFO: Created: latency-svc-ft522 Mar 11 14:22:36.115: INFO: Got endpoints: latency-svc-ft522 [717.569366ms] Mar 11 14:22:36.137: INFO: Created: latency-svc-76gmc Mar 11 14:22:36.146: INFO: Got endpoints: latency-svc-76gmc [692.822998ms] Mar 11 14:22:36.164: INFO: Created: latency-svc-fmcpt Mar 11 14:22:36.187: INFO: Created: latency-svc-zc5hj Mar 11 14:22:36.187: INFO: Got endpoints: latency-svc-fmcpt [713.984735ms] Mar 11 14:22:36.241: INFO: Got endpoints: latency-svc-zc5hj [693.072798ms] Mar 11 14:22:36.241: INFO: Created: latency-svc-f6hr2 Mar 11 14:22:36.263: INFO: Created: latency-svc-rr4cq Mar 11 14:22:36.299: INFO: Created: latency-svc-pvpcc Mar 11 14:22:36.300: INFO: Got endpoints: latency-svc-f6hr2 [609.86563ms] Mar 11 14:22:36.372: INFO: Created: latency-svc-vrp2m Mar 11 14:22:36.373: INFO: Got endpoints: latency-svc-rr4cq [620.516136ms] Mar 11 14:22:36.403: INFO: Got endpoints: latency-svc-pvpcc [618.726518ms] Mar 11 14:22:36.461: INFO: Created: latency-svc-kr52j Mar 11 14:22:36.461: INFO: Got endpoints: latency-svc-vrp2m [624.71511ms] Mar 11 14:22:36.517: INFO: Created: latency-svc-6j92k Mar 11 14:22:36.517: INFO: Got endpoints: latency-svc-kr52j [656.016931ms] Mar 11 14:22:36.542: INFO: Got endpoints: latency-svc-6j92k [650.228812ms] Mar 11 14:22:36.542: INFO: Created: latency-svc-q5vpw Mar 11 14:22:36.593: INFO: Got endpoints: latency-svc-q5vpw [676.790787ms] Mar 11 14:22:36.593: INFO: Created: latency-svc-wjrdz Mar 11 14:22:36.660: INFO: Got endpoints: latency-svc-wjrdz [683.762524ms] Mar 11 14:22:36.660: INFO: Created: latency-svc-xcjlp Mar 11 14:22:36.686: INFO: Got endpoints: latency-svc-xcjlp [678.824799ms] Mar 11 14:22:36.686: INFO: Created: latency-svc-5k96j Mar 11 14:22:36.716: INFO: Created: latency-svc-22gkk 
Mar 11 14:22:36.739: INFO: Created: latency-svc-c5bhd Mar 11 14:22:36.740: INFO: Got endpoints: latency-svc-5k96j [695.836947ms] Mar 11 14:22:36.798: INFO: Got endpoints: latency-svc-22gkk [688.868981ms] Mar 11 14:22:36.804: INFO: Created: latency-svc-zkw7p Mar 11 14:22:36.841: INFO: Got endpoints: latency-svc-c5bhd [725.699394ms] Mar 11 14:22:36.842: INFO: Created: latency-svc-dqcdt Mar 11 14:22:36.887: INFO: Got endpoints: latency-svc-zkw7p [741.13267ms] Mar 11 14:22:36.888: INFO: Created: latency-svc-bp7gl Mar 11 14:22:36.941: INFO: Created: latency-svc-qf9b4 Mar 11 14:22:36.943: INFO: Got endpoints: latency-svc-dqcdt [755.736821ms] Mar 11 14:22:36.979: INFO: Got endpoints: latency-svc-bp7gl [738.424702ms] Mar 11 14:22:36.980: INFO: Created: latency-svc-t9f4t Mar 11 14:22:37.004: INFO: Created: latency-svc-6dbzz Mar 11 14:22:37.032: INFO: Got endpoints: latency-svc-qf9b4 [732.156696ms] Mar 11 14:22:37.032: INFO: Created: latency-svc-qhd2q Mar 11 14:22:37.085: INFO: Got endpoints: latency-svc-t9f4t [711.543755ms] Mar 11 14:22:37.085: INFO: Created: latency-svc-vxvgj Mar 11 14:22:37.109: INFO: Created: latency-svc-vbf7n Mar 11 14:22:37.125: INFO: Got endpoints: latency-svc-6dbzz [721.324891ms] Mar 11 14:22:37.148: INFO: Created: latency-svc-ng6nh Mar 11 14:22:37.178: INFO: Created: latency-svc-t8cbd Mar 11 14:22:37.178: INFO: Got endpoints: latency-svc-qhd2q [716.719985ms] Mar 11 14:22:37.241: INFO: Got endpoints: latency-svc-vxvgj [723.640674ms] Mar 11 14:22:37.259: INFO: Created: latency-svc-kfsh8 Mar 11 14:22:37.286: INFO: Got endpoints: latency-svc-vbf7n [743.367507ms] Mar 11 14:22:37.286: INFO: Created: latency-svc-stqlv Mar 11 14:22:37.322: INFO: Created: latency-svc-fddgw Mar 11 14:22:37.372: INFO: Got endpoints: latency-svc-ng6nh [779.362785ms] Mar 11 14:22:37.372: INFO: Created: latency-svc-d2fck Mar 11 14:22:37.374: INFO: Got endpoints: latency-svc-t8cbd [714.531399ms] Mar 11 14:22:37.410: INFO: Created: latency-svc-2pqhc Mar 11 14:22:37.434: INFO: Created: latency-svc-ncldx Mar 11 14:22:37.434: INFO: Got endpoints: latency-svc-kfsh8 [748.124559ms] Mar 11 14:22:37.516: INFO: Created: latency-svc-d528x Mar 11 14:22:37.516: INFO: Got endpoints: latency-svc-stqlv [776.685741ms] Mar 11 14:22:37.541: INFO: Got endpoints: latency-svc-fddgw [743.47393ms] Mar 11 14:22:37.542: INFO: Created: latency-svc-4gfp5 Mar 11 14:22:37.565: INFO: Created: latency-svc-xcb7n Mar 11 14:22:37.590: INFO: Created: latency-svc-nm527 Mar 11 14:22:37.590: INFO: Got endpoints: latency-svc-d2fck [748.792319ms] Mar 11 14:22:37.648: INFO: Got endpoints: latency-svc-2pqhc [760.533349ms] Mar 11 14:22:37.674: INFO: Got endpoints: latency-svc-ncldx [731.495333ms] Mar 11 14:22:37.724: INFO: Got endpoints: latency-svc-d528x [745.018211ms] Mar 11 14:22:37.779: INFO: Got endpoints: latency-svc-4gfp5 [747.594875ms] Mar 11 14:22:37.824: INFO: Got endpoints: latency-svc-xcb7n [739.177472ms] Mar 11 14:22:37.874: INFO: Got endpoints: latency-svc-nm527 [749.143325ms] Mar 11 14:22:37.874: INFO: Latencies: [36.245136ms 103.323118ms 121.142554ms 131.947466ms 169.292463ms 228.588244ms 258.127774ms 299.691888ms 353.77922ms 360.8033ms 393.042781ms 416.162863ms 440.406661ms 487.159749ms 496.016581ms 504.777811ms 508.177765ms 510.251801ms 514.143414ms 517.146574ms 519.809897ms 522.151914ms 523.771717ms 527.035243ms 531.057089ms 532.415525ms 533.397084ms 534.065688ms 535.894706ms 543.888503ms 547.107947ms 549.138644ms 549.431757ms 549.60504ms 549.616508ms 553.640292ms 554.806188ms 554.911617ms 555.085696ms 556.467815ms 557.28164ms 
562.4648ms 562.529353ms 562.960355ms 567.988009ms 568.137451ms 568.289887ms 568.531101ms 569.893185ms 570.442673ms 570.717864ms 574.773441ms 575.248685ms 578.335769ms 578.795236ms 580.262473ms 580.300842ms 584.336392ms 584.572081ms 584.699574ms 585.235661ms 586.122273ms 586.154482ms 586.177836ms 586.225508ms 586.555335ms 586.572701ms 587.777446ms 589.256313ms 590.722056ms 591.344977ms 592.107551ms 592.260002ms 593.420185ms 595.570499ms 598.668226ms 598.768392ms 603.344866ms 609.86563ms 610.436818ms 612.368604ms 613.154904ms 617.879672ms 618.726518ms 619.25171ms 620.516136ms 621.471384ms 621.876189ms 622.145812ms 623.345092ms 623.613638ms 624.71511ms 626.020006ms 628.134525ms 628.307254ms 628.759104ms 629.303987ms 629.928213ms 631.425318ms 634.092853ms 634.170959ms 634.309761ms 634.369441ms 635.934292ms 636.728721ms 636.967422ms 638.097261ms 640.112599ms 641.146785ms 641.237149ms 642.692589ms 643.191763ms 643.757488ms 643.834686ms 646.796711ms 647.300136ms 649.637618ms 650.228812ms 651.814279ms 652.348777ms 655.03957ms 656.016931ms 659.448584ms 661.04486ms 662.061779ms 662.900054ms 666.037349ms 666.47501ms 670.744125ms 670.934039ms 672.033466ms 673.267266ms 675.044223ms 675.641144ms 676.790787ms 678.824799ms 680.968436ms 683.762524ms 684.24468ms 684.86052ms 685.824617ms 687.311688ms 688.868981ms 689.090633ms 690.687423ms 692.809191ms 692.822998ms 693.072798ms 694.519084ms 695.532958ms 695.836947ms 696.791363ms 698.775986ms 702.035219ms 703.813017ms 703.838369ms 707.212605ms 711.543755ms 712.211445ms 713.984735ms 714.531399ms 714.580968ms 716.719985ms 717.569366ms 718.094357ms 718.222378ms 718.408177ms 718.514516ms 721.108333ms 721.324891ms 723.216783ms 723.640674ms 723.666506ms 724.225352ms 724.649589ms 724.679835ms 725.699394ms 730.124746ms 731.495333ms 732.156696ms 737.366616ms 738.424702ms 739.177472ms 740.594735ms 741.13267ms 741.984598ms 742.506562ms 743.367507ms 743.47393ms 745.018211ms 747.008723ms 747.594875ms 748.124559ms 748.792319ms 749.143325ms 749.51224ms 755.736821ms 760.533349ms 776.685741ms 779.362785ms] Mar 11 14:22:37.874: INFO: 50 %ile: 634.170959ms Mar 11 14:22:37.874: INFO: 90 %ile: 737.366616ms Mar 11 14:22:37.874: INFO: 99 %ile: 776.685741ms Mar 11 14:22:37.874: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 14:22:37.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-4087" for this suite. 
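For reference, the 50/90/99 %ile figures printed above come from a straight index into the sorted sample list. A minimal Go sketch, assuming the test picks sorted[n*q/100] (an indexing convention consistent with the 99 %ile value above, not necessarily the framework's verbatim source); the sample values below are a subset of the 200 shown:

package main

import (
	"fmt"
	"sort"
	"time"
)

// quantile picks the q-th percentile from an already-sorted slice of
// latencies using the assumed sorted[n*q/100] convention.
func quantile(sorted []time.Duration, q int) time.Duration {
	i := len(sorted) * q / 100
	if i >= len(sorted) {
		i = len(sorted) - 1
	}
	return sorted[i]
}

func main() {
	// Raw nanosecond values; 622145812 is the 622.145812ms sample above.
	samples := []time.Duration{
		622145812, 621876189, 628759104, 641237149, 685824617,
		504777811, 755736821, 760533349, 776685741, 779362785,
	}
	sort.Slice(samples, func(a, b int) bool { return samples[a] < samples[b] })
	for _, q := range []int{50, 90, 99} {
		fmt.Printf("%d %%ile: %v\n", q, quantile(samples, q))
	}
	fmt.Printf("Total sample count: %d\n", len(samples))
}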
Mar 11 14:22:55.905: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 14:22:55.961: INFO: namespace svc-latency-4087 deletion completed in 18.083239086s • [SLOW TEST:28.866 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 14:22:55.962: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod test-webserver-affe7d68-a943-47c8-8469-8e70894720ed in namespace container-probe-5065 Mar 11 14:22:58.056: INFO: Started pod test-webserver-affe7d68-a943-47c8-8469-8e70894720ed in namespace container-probe-5065 STEP: checking the pod's current state and verifying that restartCount is present Mar 11 14:22:58.058: INFO: Initial restart count of pod test-webserver-affe7d68-a943-47c8-8469-8e70894720ed is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 14:26:59.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5065" for this suite. 
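The probe spec this test exercises can be sketched with k8s.io/api/core/v1 types: a web-server container whose /healthz endpoint keeps returning 200, so restartCount stays 0 for the whole observation window (about four minutes in the run above). Image name and probe timings below are illustrative stand-ins, not the e2e framework's exact values:

package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-webserver"},
		Spec: v1.PodSpec{
			Containers: []v1.Container{{
				Name:  "test-webserver",
				Image: "gcr.io/kubernetes-e2e-test-images/test-webserver:1.0", // illustrative
				LivenessProbe: &v1.Probe{
					// v1.15-era field name; later API versions call this ProbeHandler.
					Handler: v1.Handler{
						HTTPGet: &v1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(80)},
					},
					InitialDelaySeconds: 15, // illustrative timings
					FailureThreshold:    3,
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}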
Mar 11 14:27:05.395: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 14:27:05.478: INFO: namespace container-probe-5065 deletion completed in 6.097131241s • [SLOW TEST:249.517 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 14:27:05.478: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 11 14:27:05.531: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8daaa17d-2834-4cc0-abf3-fd85f02719ff" in namespace "projected-8322" to be "success or failure" Mar 11 14:27:05.535: INFO: Pod "downwardapi-volume-8daaa17d-2834-4cc0-abf3-fd85f02719ff": Phase="Pending", Reason="", readiness=false. Elapsed: 3.475839ms Mar 11 14:27:07.538: INFO: Pod "downwardapi-volume-8daaa17d-2834-4cc0-abf3-fd85f02719ff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007134888s STEP: Saw pod success Mar 11 14:27:07.538: INFO: Pod "downwardapi-volume-8daaa17d-2834-4cc0-abf3-fd85f02719ff" satisfied condition "success or failure" Mar 11 14:27:07.541: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-8daaa17d-2834-4cc0-abf3-fd85f02719ff container client-container: STEP: delete the pod Mar 11 14:27:07.560: INFO: Waiting for pod downwardapi-volume-8daaa17d-2834-4cc0-abf3-fd85f02719ff to disappear Mar 11 14:27:07.571: INFO: Pod downwardapi-volume-8daaa17d-2834-4cc0-abf3-fd85f02719ff no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 14:27:07.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8322" for this suite. 
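What "should set DefaultMode on files" exercises is the DefaultMode field on a projected volume: every file the downward API source writes inherits that mode unless an item overrides it. A hedged sketch of the volume under test (the path, field, and 0400 mode are illustrative choices, not lifted from the test source):

package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	mode := int32(0400) // applied to every projected file lacking its own Mode
	vol := v1.Volume{
		Name: "podinfo",
		VolumeSource: v1.VolumeSource{
			Projected: &v1.ProjectedVolumeSource{
				DefaultMode: &mode,
				Sources: []v1.VolumeProjection{{
					DownwardAPI: &v1.DownwardAPIProjection{
						Items: []v1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &v1.ObjectFieldSelector{APIVersion: "v1", FieldPath: "metadata.name"},
						}},
					},
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}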
Mar 11 14:27:13.585: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 14:27:13.650: INFO: namespace projected-8322 deletion completed in 6.07645565s • [SLOW TEST:8.172 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 14:27:13.651: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-23fc0e2d-6a3c-4c65-ab6f-bce53a7f921b STEP: Creating a pod to test consume configMaps Mar 11 14:27:13.710: INFO: Waiting up to 5m0s for pod "pod-configmaps-0d1648c9-8430-42a4-a003-814bfb26a90d" in namespace "configmap-5287" to be "success or failure" Mar 11 14:27:13.714: INFO: Pod "pod-configmaps-0d1648c9-8430-42a4-a003-814bfb26a90d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.918487ms Mar 11 14:27:15.718: INFO: Pod "pod-configmaps-0d1648c9-8430-42a4-a003-814bfb26a90d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007804877s STEP: Saw pod success Mar 11 14:27:15.718: INFO: Pod "pod-configmaps-0d1648c9-8430-42a4-a003-814bfb26a90d" satisfied condition "success or failure" Mar 11 14:27:15.721: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-0d1648c9-8430-42a4-a003-814bfb26a90d container configmap-volume-test: STEP: delete the pod Mar 11 14:27:15.753: INFO: Waiting for pod pod-configmaps-0d1648c9-8430-42a4-a003-814bfb26a90d to disappear Mar 11 14:27:15.763: INFO: Pod pod-configmaps-0d1648c9-8430-42a4-a003-814bfb26a90d no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 14:27:15.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5287" for this suite. 
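The ConfigMap variant adds two twists over a plain configMap volume: Items remaps a key to a new path, and a per-item Mode overrides the volume's DefaultMode for just that file. A sketch using the generated ConfigMap name from the run above; the key, path, and mode are illustrative, since the log does not show the ConfigMap's contents:

package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	itemMode := int32(0400) // per-item mode; wins over DefaultMode for this file
	vol := v1.Volume{
		Name: "configmap-volume",
		VolumeSource: v1.VolumeSource{
			ConfigMap: &v1.ConfigMapVolumeSource{
				LocalObjectReference: v1.LocalObjectReference{Name: "configmap-test-volume-map-23fc0e2d-6a3c-4c65-ab6f-bce53a7f921b"},
				Items: []v1.KeyToPath{{
					Key:  "data-1",         // illustrative key
					Path: "path/to/data-2", // mapped target path inside the mount
					Mode: &itemMode,
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}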
Mar 11 14:27:21.777: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 14:27:21.860: INFO: namespace configmap-5287 deletion completed in 6.093825274s • [SLOW TEST:8.209 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 14:27:21.860: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium Mar 11 14:27:21.914: INFO: Waiting up to 5m0s for pod "pod-143aa760-ecdb-4136-a65a-2843e55ede19" in namespace "emptydir-4926" to be "success or failure" Mar 11 14:27:21.933: INFO: Pod "pod-143aa760-ecdb-4136-a65a-2843e55ede19": Phase="Pending", Reason="", readiness=false. Elapsed: 19.335913ms Mar 11 14:27:23.937: INFO: Pod "pod-143aa760-ecdb-4136-a65a-2843e55ede19": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023204304s Mar 11 14:27:25.947: INFO: Pod "pod-143aa760-ecdb-4136-a65a-2843e55ede19": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032835329s STEP: Saw pod success Mar 11 14:27:25.947: INFO: Pod "pod-143aa760-ecdb-4136-a65a-2843e55ede19" satisfied condition "success or failure" Mar 11 14:27:25.949: INFO: Trying to get logs from node iruya-worker2 pod pod-143aa760-ecdb-4136-a65a-2843e55ede19 container test-container: STEP: delete the pod Mar 11 14:27:25.983: INFO: Waiting for pod pod-143aa760-ecdb-4136-a65a-2843e55ede19 to disappear Mar 11 14:27:25.991: INFO: Pod pod-143aa760-ecdb-4136-a65a-2843e55ede19 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 14:27:25.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4926" for this suite. 
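The (non-root,0644,default) triple in the spec name encodes: run the container as a non-root UID, create a file with mode 0644, and use the default emptyDir medium (node disk rather than tmpfs). A minimal sketch of such a pod, with busybox standing in for the e2e mounttest image and the UID and command as assumptions:

package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid := int64(1001) // any non-root UID
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-0644"},
		Spec: v1.PodSpec{
			Containers: []v1.Container{{
				Name:            "test-container",
				Image:           "busybox", // stand-in image
				Command:         []string{"sh", "-c", "touch /test-volume/f && chmod 0644 /test-volume/f && stat -c '%a' /test-volume/f"},
				VolumeMounts:    []v1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
				SecurityContext: &v1.SecurityContext{RunAsUser: &uid},
			}},
			Volumes: []v1.Volume{{
				Name: "test-volume",
				// Leaving Medium unset selects the "default" medium (node disk).
				VolumeSource: v1.VolumeSource{EmptyDir: &v1.EmptyDirVolumeSource{}},
			}},
			RestartPolicy: v1.RestartPolicyNever,
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}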
Mar 11 14:27:32.030: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 14:27:32.115: INFO: namespace emptydir-4926 deletion completed in 6.121070325s • [SLOW TEST:10.255 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 14:27:32.116: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 11 14:27:34.202: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 14:27:34.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1594" for this suite. 
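The "Expected: &{} to match ..." line above is the interesting assertion: with TerminationMessagePolicy set to FallbackToLogsOnError, container logs are used as the termination message only when the container fails, so a succeeding container that writes nothing to the message file reports an empty message. A sketch of such a container spec (image and command are stand-ins):

package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	ctr := v1.Container{
		Name:                     "termination-message-container",
		Image:                    "busybox",                      // stand-in image
		Command:                  []string{"sh", "-c", "exit 0"}, // succeeds without writing a message
		TerminationMessagePath:   "/dev/termination-log",
		TerminationMessagePolicy: v1.TerminationMessageFallbackToLogsOnError,
	}
	out, _ := json.MarshalIndent(ctr, "", "  ")
	fmt.Println(string(out))
}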
Mar 11 14:27:40.271: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 14:27:40.348: INFO: namespace container-runtime-1594 deletion completed in 6.088884487s • [SLOW TEST:8.232 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Mar 11 14:27:40.348: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Mar 11 14:27:40.396: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8897fa4b-406b-443f-a426-55012e09b338" in namespace "projected-2275" to be "success or failure" Mar 11 14:27:40.398: INFO: Pod "downwardapi-volume-8897fa4b-406b-443f-a426-55012e09b338": Phase="Pending", Reason="", readiness=false. Elapsed: 1.713953ms Mar 11 14:27:42.401: INFO: Pod "downwardapi-volume-8897fa4b-406b-443f-a426-55012e09b338": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004834447s STEP: Saw pod success Mar 11 14:27:42.401: INFO: Pod "downwardapi-volume-8897fa4b-406b-443f-a426-55012e09b338" satisfied condition "success or failure" Mar 11 14:27:42.403: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-8897fa4b-406b-443f-a426-55012e09b338 container client-container: STEP: delete the pod Mar 11 14:27:42.417: INFO: Waiting for pod downwardapi-volume-8897fa4b-406b-443f-a426-55012e09b338 to disappear Mar 11 14:27:42.428: INFO: Pod downwardapi-volume-8897fa4b-406b-443f-a426-55012e09b338 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Mar 11 14:27:42.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2275" for this suite. 
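Here the downward API file asks for limits.memory on a container that sets no memory limit; the kubelet then falls back to the node's allocatable memory, which is what the test reads back from the projected file. A sketch of the volume item involved (the container name client-container comes from the log above; the file path is illustrative):

package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	vol := v1.Volume{
		Name: "podinfo",
		VolumeSource: v1.VolumeSource{
			Projected: &v1.ProjectedVolumeSource{
				Sources: []v1.VolumeProjection{{
					DownwardAPI: &v1.DownwardAPIProjection{
						Items: []v1.DownwardAPIVolumeFile{{
							Path: "memory_limit",
							// With no limit set on client-container, this resolves
							// to node allocatable memory rather than a pod value.
							ResourceFieldRef: &v1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.memory",
							},
						}},
					},
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}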
Mar 11 14:27:48.443: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 11 14:27:48.515: INFO: namespace projected-2275 deletion completed in 6.084494919s • [SLOW TEST:8.166 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS
Mar 11 14:27:48.515: INFO: Running AfterSuite actions on all nodes Mar 11 14:27:48.515: INFO: Running AfterSuite actions on node 1 Mar 11 14:27:48.515: INFO: Skipping dumping logs from cluster
Ran 215 of 4412 Specs in 5544.540 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4197 Skipped
PASS