I0401 12:55:44.024295       6 e2e.go:243] Starting e2e run "976f6ba7-2add-4c0c-886c-816693bc9320" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1585745743 - Will randomize all specs
Will run 215 of 4412 specs

Apr  1 12:55:44.205: INFO: >>> kubeConfig: /root/.kube/config
Apr  1 12:55:44.208: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Apr  1 12:55:44.234: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Apr  1 12:55:44.263: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Apr  1 12:55:44.263: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Apr  1 12:55:44.263: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Apr  1 12:55:44.279: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Apr  1 12:55:44.279: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Apr  1 12:55:44.279: INFO: e2e test version: v1.15.10
Apr  1 12:55:44.280: INFO: kube-apiserver version: v1.15.7
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc
  should create an rc from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr  1 12:55:44.281: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
Apr  1 12:55:44.352: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456
[It] should create an rc from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Apr  1 12:55:44.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-8340'
Apr  1 12:55:46.595: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Apr  1 12:55:46.595: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Apr  1 12:55:46.624: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-trxhh]
Apr  1 12:55:46.624: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-trxhh" in namespace "kubectl-8340" to be "running and ready"
Apr  1 12:55:46.627: INFO: Pod "e2e-test-nginx-rc-trxhh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.37183ms
Apr  1 12:55:48.656: INFO: Pod "e2e-test-nginx-rc-trxhh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031652358s
Apr  1 12:55:50.660: INFO: Pod "e2e-test-nginx-rc-trxhh": Phase="Running", Reason="", readiness=true. Elapsed: 4.035815432s
Apr  1 12:55:50.660: INFO: Pod "e2e-test-nginx-rc-trxhh" satisfied condition "running and ready"
Apr  1 12:55:50.660: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-trxhh]
Apr  1 12:55:50.660: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-8340'
Apr  1 12:55:50.776: INFO: stderr: ""
Apr  1 12:55:50.776: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461
Apr  1 12:55:50.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-8340'
Apr  1 12:55:50.880: INFO: stderr: ""
Apr  1 12:55:50.880: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr  1 12:55:50.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8340" for this suite.
Apr  1 12:56:12.893: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr  1 12:56:12.965: INFO: namespace kubectl-8340 deletion completed in 22.082180294s

• [SLOW TEST:28.685 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc from an image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment
  should create a deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr  1 12:56:12.966: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557
[It] should create a deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Apr  1 12:56:13.023: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-4050'
Apr  1 12:56:13.121: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Apr  1 12:56:13.121: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562
Apr  1 12:56:15.190: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-4050'
Apr  1 12:56:15.310: INFO: stderr: ""
Apr  1 12:56:15.310: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr  1 12:56:15.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4050" for this suite.
Apr  1 12:58:17.335: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr  1 12:58:17.423: INFO: namespace kubectl-4050 deletion completed in 2m2.109300654s

• [SLOW TEST:124.457 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a deployment from an image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr  1 12:58:17.423: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-5281
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-5281
STEP: Creating statefulset with conflicting port in namespace statefulset-5281
STEP: Waiting until pod test-pod will start running in namespace statefulset-5281
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-5281
Apr  1 12:58:21.566: INFO: Observed stateful pod in namespace: statefulset-5281, name: ss-0, uid: a9d5b152-d9bb-4fae-aabf-9d3b79c20b4f, status phase: Pending. Waiting for statefulset controller to delete.
Apr  1 12:58:22.099: INFO: Observed stateful pod in namespace: statefulset-5281, name: ss-0, uid: a9d5b152-d9bb-4fae-aabf-9d3b79c20b4f, status phase: Failed. Waiting for statefulset controller to delete.
Apr  1 12:58:22.107: INFO: Observed stateful pod in namespace: statefulset-5281, name: ss-0, uid: a9d5b152-d9bb-4fae-aabf-9d3b79c20b4f, status phase: Failed. Waiting for statefulset controller to delete.
Apr  1 12:58:22.127: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-5281
STEP: Removing pod with conflicting port in namespace statefulset-5281
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-5281 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Apr  1 12:58:28.171: INFO: Deleting all statefulset in ns statefulset-5281
Apr  1 12:58:28.174: INFO: Scaling statefulset ss to 0
Apr  1 12:58:38.189: INFO: Waiting for statefulset status.replicas updated to 0
Apr  1 12:58:38.192: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr  1 12:58:38.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-5281" for this suite.
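Editor's note: the "Waiting until ... will be recreated" steps in the StatefulSet test above follow a plain poll-until-condition pattern. A minimal generic sketch of that pattern (this is an illustrative helper, not the e2e framework's actual implementation):

```python
import time


def wait_until(check, timeout=5.0, interval=0.1):
    """Poll `check` until it returns a truthy value or the timeout elapses.

    Mirrors the polling style seen in the log: call the condition,
    sleep for a fixed interval, repeat until the deadline.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = check()
        if result:
            return result
        time.sleep(interval)
    raise TimeoutError("condition not met within timeout")
```

For example, a caller could pass a closure that queries the API server for the pod's phase and returns True once it reports Running.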
Apr  1 12:58:44.217: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr  1 12:58:44.300: INFO: namespace statefulset-5281 deletion completed in 6.092543467s

• [SLOW TEST:26.877 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr  1 12:58:44.302: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0401 12:59:14.903611       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr  1 12:59:14.903: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr  1 12:59:14.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4898" for this suite.
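Editor's note: the orphaning behavior exercised by the test above is driven by the delete request's DeleteOptions. A sketch of the request body involved (a minimal fragment; the exact body the test sends is not shown in the log):

```json
{
  "kind": "DeleteOptions",
  "apiVersion": "v1",
  "propagationPolicy": "Orphan"
}
```

With `propagationPolicy: Orphan`, the garbage collector strips the owner references from the dependents (here, the ReplicaSet created by the Deployment) instead of deleting them, which is exactly what the 30-second wait above verifies.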
Apr  1 12:59:20.919: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr  1 12:59:20.989: INFO: namespace gc-4898 deletion completed in 6.083071633s

• [SLOW TEST:36.688 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr  1 12:59:20.990: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr  1 12:59:21.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6702" for this suite.
Apr  1 12:59:27.225: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr  1 12:59:27.310: INFO: namespace kubelet-test-6702 deletion completed in 6.093986656s

• [SLOW TEST:6.320 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-api-machinery] Garbage collector
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr  1 12:59:27.310: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0401 12:59:38.558772       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr  1 12:59:38.558: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr  1 12:59:38.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9878" for this suite.
Apr  1 12:59:46.578: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr  1 12:59:46.652: INFO: namespace gc-9878 deletion completed in 8.090230704s

• [SLOW TEST:19.343 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial]
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr  1 12:59:46.653: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr  1 12:59:46.761: INFO: Create a RollingUpdate DaemonSet
Apr  1 12:59:46.765: INFO: Check that daemon pods launch on every node of the cluster
Apr  1 12:59:46.768: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr  1 12:59:46.773: INFO: Number of nodes with available pods: 0
Apr  1 12:59:46.773: INFO: Node iruya-worker is running more than one daemon pod
Apr  1 12:59:47.778: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr  1 12:59:47.782: INFO: Number of nodes with available pods: 0
Apr  1 12:59:47.782: INFO: Node iruya-worker is running more than one daemon pod
Apr  1 12:59:48.778: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr  1 12:59:48.782: INFO: Number of nodes with available pods: 0
Apr  1 12:59:48.782: INFO: Node iruya-worker is running more than one daemon pod
Apr  1 12:59:49.778: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr  1 12:59:49.782: INFO: Number of nodes with available pods: 1
Apr  1 12:59:49.782: INFO: Node iruya-worker is running more than one daemon pod
Apr  1 12:59:50.778: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr  1 12:59:50.782: INFO: Number of nodes with available pods: 2
Apr  1 12:59:50.782: INFO: Number of running nodes: 2, number of available pods: 2
Apr  1 12:59:50.782: INFO: Update the DaemonSet to trigger a rollout
Apr  1 12:59:50.790: INFO: Updating DaemonSet daemon-set
Apr  1 13:00:02.810: INFO: Roll back the DaemonSet before rollout is complete
Apr  1 13:00:02.815: INFO: Updating DaemonSet daemon-set
Apr  1 13:00:02.816: INFO: Make sure DaemonSet rollback is complete
Apr  1 13:00:02.894: INFO: Wrong image for pod: daemon-set-4f7mw. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Apr  1 13:00:02.894: INFO: Pod daemon-set-4f7mw is not available
Apr  1 13:00:02.923: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr  1 13:00:03.927: INFO: Wrong image for pod: daemon-set-4f7mw. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Apr  1 13:00:03.928: INFO: Pod daemon-set-4f7mw is not available
Apr  1 13:00:03.931: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr  1 13:00:04.928: INFO: Pod daemon-set-j5w7m is not available
Apr  1 13:00:04.932: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5419, will wait for the garbage collector to delete the pods
Apr  1 13:00:04.999: INFO: Deleting DaemonSet.extensions daemon-set took: 6.214924ms
Apr  1 13:00:05.299: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.284754ms
Apr  1 13:00:12.202: INFO: Number of nodes with available pods: 0
Apr  1 13:00:12.202: INFO: Number of running nodes: 0, number of available pods: 0
Apr  1 13:00:12.207: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5419/daemonsets","resourceVersion":"3030597"},"items":null}
Apr  1 13:00:12.222: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5419/pods","resourceVersion":"3030597"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr  1 13:00:12.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5419" for this suite.
Apr  1 13:00:18.250: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr  1 13:00:18.327: INFO: namespace daemonsets-5419 deletion completed in 6.090928293s

• [SLOW TEST:31.674 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr  1 13:00:18.327: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Apr  1 13:00:18.387: INFO: Waiting up to 5m0s for pod "downward-api-d207d299-0ac4-4afc-a18b-408c71458bf9" in namespace "downward-api-6190" to be "success or failure"
Apr  1 13:00:18.390: INFO: Pod "downward-api-d207d299-0ac4-4afc-a18b-408c71458bf9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.516329ms
Apr  1 13:00:20.394: INFO: Pod "downward-api-d207d299-0ac4-4afc-a18b-408c71458bf9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006956036s
Apr  1 13:00:22.423: INFO: Pod "downward-api-d207d299-0ac4-4afc-a18b-408c71458bf9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035577981s
STEP: Saw pod success
Apr  1 13:00:22.423: INFO: Pod "downward-api-d207d299-0ac4-4afc-a18b-408c71458bf9" satisfied condition "success or failure"
Apr  1 13:00:22.425: INFO: Trying to get logs from node iruya-worker pod downward-api-d207d299-0ac4-4afc-a18b-408c71458bf9 container dapi-container: <nil>
STEP: delete the pod
Apr  1 13:00:22.447: INFO: Waiting for pod downward-api-d207d299-0ac4-4afc-a18b-408c71458bf9 to disappear
Apr  1 13:00:22.457: INFO: Pod downward-api-d207d299-0ac4-4afc-a18b-408c71458bf9 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr  1 13:00:22.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6190" for this suite.
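Editor's note: the limits/requests env vars verified above come from the downward API's `resourceFieldRef`. A hedged sketch of the container spec involved (the env var names are placeholders; the container name `dapi-container` is taken from the log):

```yaml
containers:
  - name: dapi-container
    image: docker.io/library/busybox
    env:
      - name: CPU_LIMIT
        valueFrom:
          resourceFieldRef:
            containerName: dapi-container
            resource: limits.cpu
      - name: MEMORY_REQUEST
        valueFrom:
          resourceFieldRef:
            containerName: dapi-container
            resource: requests.memory
```

The kubelet resolves each `resourceFieldRef` to the container's effective limit or request and injects it as an environment variable, which is what the test reads back from the pod's output.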
Apr  1 13:00:28.472: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr  1 13:00:28.585: INFO: namespace downward-api-6190 deletion completed in 6.125069695s

• [SLOW TEST:10.258 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr  1 13:00:28.586: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-77ed2bc2-bf05-4734-8d47-d8a30dd67d3f
STEP: Creating a pod to test consume configMaps
Apr  1 13:00:28.672: INFO: Waiting up to 5m0s for pod "pod-configmaps-e9dd3109-36a3-4d24-b0fe-6e80dc55c357" in namespace "configmap-2962" to be "success or failure"
Apr  1 13:00:28.679: INFO: Pod "pod-configmaps-e9dd3109-36a3-4d24-b0fe-6e80dc55c357": Phase="Pending", Reason="", readiness=false. Elapsed: 7.072854ms
Apr  1 13:00:30.720: INFO: Pod "pod-configmaps-e9dd3109-36a3-4d24-b0fe-6e80dc55c357": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048283868s
Apr  1 13:00:32.724: INFO: Pod "pod-configmaps-e9dd3109-36a3-4d24-b0fe-6e80dc55c357": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.05231568s
STEP: Saw pod success
Apr  1 13:00:32.724: INFO: Pod "pod-configmaps-e9dd3109-36a3-4d24-b0fe-6e80dc55c357" satisfied condition "success or failure"
Apr  1 13:00:32.727: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-e9dd3109-36a3-4d24-b0fe-6e80dc55c357 container configmap-volume-test: <nil>
STEP: delete the pod
Apr  1 13:00:32.835: INFO: Waiting for pod pod-configmaps-e9dd3109-36a3-4d24-b0fe-6e80dc55c357 to disappear
Apr  1 13:00:32.840: INFO: Pod pod-configmaps-e9dd3109-36a3-4d24-b0fe-6e80dc55c357 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr  1 13:00:32.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2962" for this suite.
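Editor's note: "with mappings as non-root" in the test above refers to a ConfigMap volume whose keys are remapped to explicit file paths via `items`, consumed by a container running with a non-root UID. A hedged sketch of the pod spec shape involved (key, path, and UID values are placeholders, not the test's actual manifest):

```yaml
spec:
  securityContext:
    runAsUser: 1000        # non-root, per the [LinuxOnly] non-root variant
  containers:
    - name: configmap-volume-test
      volumeMounts:
        - name: configmap-volume
          mountPath: /etc/configmap-volume
  volumes:
    - name: configmap-volume
      configMap:
        name: configmap-test-volume-map-77ed2bc2-bf05-4734-8d47-d8a30dd67d3f
        items:
          - key: data-1            # placeholder key
            path: path/to/data-1   # the "mapping": key projected to a chosen path
```

Without `items`, every key in the ConfigMap is projected as a file named after the key; the mapping lets the test verify key-to-path projection specifically.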
Apr  1 13:00:38.856: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr  1 13:00:38.939: INFO: namespace configmap-2962 deletion completed in 6.094930785s

• [SLOW TEST:10.353 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label
  should update the label on a resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr  1 13:00:38.940: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210
STEP: creating the pod
Apr  1 13:00:38.972: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1371'
Apr  1 13:00:39.331: INFO: stderr: ""
Apr  1 13:00:39.331: INFO: stdout: "pod/pause created\n"
Apr  1 13:00:39.331: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Apr  1 13:00:39.331: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-1371" to be "running and ready"
Apr  1 13:00:39.354: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 23.498702ms
Apr  1 13:00:41.359: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02770683s
Apr  1 13:00:43.362: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.031301425s
Apr  1 13:00:43.362: INFO: Pod "pause" satisfied condition "running and ready"
Apr  1 13:00:43.362: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: adding the label testing-label with value testing-label-value to a pod
Apr  1 13:00:43.362: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-1371'
Apr  1 13:00:43.480: INFO: stderr: ""
Apr  1 13:00:43.480: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Apr  1 13:00:43.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-1371'
Apr  1 13:00:43.566: INFO: stderr: ""
Apr  1 13:00:43.566: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          4s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Apr  1 13:00:43.566: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-1371'
Apr  1 13:00:43.657: INFO: stderr: ""
Apr  1 13:00:43.657: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Apr  1 13:00:43.657: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-1371'
Apr  1 13:00:43.738: INFO: stderr: ""
Apr  1 13:00:43.739: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          4s    \n"
[AfterEach] [k8s.io] Kubectl label
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217 STEP: using delete to clean up resources Apr 1 13:00:43.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1371' Apr 1 13:00:43.867: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 1 13:00:43.867: INFO: stdout: "pod \"pause\" force deleted\n" Apr 1 13:00:43.867: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-1371' Apr 1 13:00:43.970: INFO: stderr: "No resources found.\n" Apr 1 13:00:43.970: INFO: stdout: "" Apr 1 13:00:43.971: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-1371 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 1 13:00:44.055: INFO: stderr: "" Apr 1 13:00:44.055: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 13:00:44.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1371" for this suite. 
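The go-template passed to `kubectl get pods` above prints only pods that do not yet carry a `deletionTimestamp`, which is how the test confirms the force-deleted pod is gone. A minimal Python sketch of that same filter (the pod data below is hypothetical, shaped like `kubectl get pods -o json` output):

```python
# Equivalent of the go-template
# '{{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}':
# emit the name of every pod not already marked for deletion.
def undeleted_pod_names(pod_list):
    return [
        item["metadata"]["name"]
        for item in pod_list.get("items", [])
        if not item["metadata"].get("deletionTimestamp")
    ]

# Hypothetical example data; a real list would come from the API server.
pods = {
    "items": [
        {"metadata": {"name": "pause"}},
        {"metadata": {"name": "old",
                      "deletionTimestamp": "2020-04-01T13:00:43Z"}},
    ]
}
print(undeleted_pod_names(pods))  # ['pause']
```

An empty result, as in the log above (`stdout: ""`), means every matching pod is either deleted or terminating.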
Apr 1 13:00:50.192: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 13:00:50.284: INFO: namespace kubectl-1371 deletion completed in 6.226165083s • [SLOW TEST:11.345 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 13:00:50.285: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Apr 1 13:00:54.366: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-09e46644-fae2-46ec-a891-4a5115fcad94,GenerateName:,Namespace:events-7697,SelfLink:/api/v1/namespaces/events-7697/pods/send-events-09e46644-fae2-46ec-a891-4a5115fcad94,UID:f3bed308-578e-4c4b-9689-2cd7f66aae71,ResourceVersion:3030793,Generation:0,CreationTimestamp:2020-04-01 13:00:50 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 339556560,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vlm9q {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vlm9q,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-vlm9q true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002e42be0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002e42c00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:00:50 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:00:53 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:00:53 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:00:50 +0000 UTC 
}],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.138,StartTime:2020-04-01 13:00:50 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-04-01 13:00:52 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://351be85802ed1c05312afd8e6f77cb90f407ffd5539d87bcd354cac9c57009ca}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod Apr 1 13:00:56.371: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Apr 1 13:00:58.376: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 13:00:58.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-7697" for this suite. 
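The Events test above passes once it has seen one event reported by the scheduler and one reported by the kubelet for the pod. The check amounts to filtering the pod's event list by `source.component`; a rough Python sketch (field names follow the v1 Event schema, the sample data is illustrative):

```python
def saw_event_from(events, component):
    """Return True if any event in the list was reported by the
    given source component (e.g. 'default-scheduler' or 'kubelet')."""
    return any(
        e.get("source", {}).get("component") == component
        for e in events
    )

# Illustrative events such as a pod accumulates while being scheduled and started.
events = [
    {"reason": "Scheduled", "source": {"component": "default-scheduler"}},
    {"reason": "Pulled",    "source": {"component": "kubelet"}},
    {"reason": "Started",   "source": {"component": "kubelet"}},
]
print(saw_event_from(events, "default-scheduler"))  # True
print(saw_event_from(events, "kubelet"))            # True
```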
Apr 1 13:01:36.401: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 13:01:36.485: INFO: namespace events-7697 deletion completed in 38.097504275s • [SLOW TEST:46.201 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 13:01:36.486: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-eac8486c-a94c-4575-bcb5-3150cbc57615 STEP: Creating a pod to test consume secrets Apr 1 13:01:36.555: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-71f35723-e46f-41e3-b125-6b9101bd2076" in namespace "projected-9989" to be "success or failure" Apr 1 13:01:36.559: INFO: Pod "pod-projected-secrets-71f35723-e46f-41e3-b125-6b9101bd2076": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.74639ms Apr 1 13:01:38.563: INFO: Pod "pod-projected-secrets-71f35723-e46f-41e3-b125-6b9101bd2076": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007678344s Apr 1 13:01:40.567: INFO: Pod "pod-projected-secrets-71f35723-e46f-41e3-b125-6b9101bd2076": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012026434s STEP: Saw pod success Apr 1 13:01:40.567: INFO: Pod "pod-projected-secrets-71f35723-e46f-41e3-b125-6b9101bd2076" satisfied condition "success or failure" Apr 1 13:01:40.570: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-71f35723-e46f-41e3-b125-6b9101bd2076 container projected-secret-volume-test: STEP: delete the pod Apr 1 13:01:40.590: INFO: Waiting for pod pod-projected-secrets-71f35723-e46f-41e3-b125-6b9101bd2076 to disappear Apr 1 13:01:40.594: INFO: Pod pod-projected-secrets-71f35723-e46f-41e3-b125-6b9101bd2076 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 13:01:40.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9989" for this suite. 
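The repeated `Waiting up to 5m0s for pod ... to be "success or failure"` lines come from the framework polling the pod's phase roughly every two seconds until it reaches a terminal state or the timeout expires. A minimal sketch of that wait loop (names are illustrative, not the framework's actual API; `get_phase` stands in for an API-server lookup):

```python
import time

def wait_for_pod(get_phase, timeout=300.0, interval=2.0,
                 now=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until the pod reaches a terminal phase
    (Succeeded or Failed) or the timeout elapses."""
    start = now()
    while now() - start < timeout:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        sleep(interval)
    raise TimeoutError("pod did not reach a terminal phase")

# Simulate a pod that is Pending for two polls, then Succeeded --
# the same Pending/Pending/Succeeded progression the log shows.
phases = iter(["Pending", "Pending", "Succeeded"])
print(wait_for_pod(lambda: next(phases), sleep=lambda _: None))  # Succeeded
```

Injecting `now` and `sleep` keeps the sketch testable without real delays; the elapsed times in the log (~0ms, ~2s, ~4s) match an interval of about two seconds.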
Apr 1 13:01:46.611: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 13:01:46.692: INFO: namespace projected-9989 deletion completed in 6.094909289s • [SLOW TEST:10.206 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 13:01:46.692: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium Apr 1 13:01:46.770: INFO: Waiting up to 5m0s for pod "pod-c872ca13-9887-4047-963d-5669038aad4b" in namespace "emptydir-7305" to be "success or failure" Apr 1 13:01:46.775: INFO: Pod "pod-c872ca13-9887-4047-963d-5669038aad4b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068403ms Apr 1 13:01:48.779: INFO: Pod "pod-c872ca13-9887-4047-963d-5669038aad4b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.008423639s Apr 1 13:01:50.784: INFO: Pod "pod-c872ca13-9887-4047-963d-5669038aad4b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013417206s STEP: Saw pod success Apr 1 13:01:50.784: INFO: Pod "pod-c872ca13-9887-4047-963d-5669038aad4b" satisfied condition "success or failure" Apr 1 13:01:50.787: INFO: Trying to get logs from node iruya-worker2 pod pod-c872ca13-9887-4047-963d-5669038aad4b container test-container: STEP: delete the pod Apr 1 13:01:50.807: INFO: Waiting for pod pod-c872ca13-9887-4047-963d-5669038aad4b to disappear Apr 1 13:01:50.810: INFO: Pod pod-c872ca13-9887-4047-963d-5669038aad4b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 13:01:50.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7305" for this suite. Apr 1 13:01:56.868: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 13:01:56.961: INFO: namespace emptydir-7305 deletion completed in 6.14793178s • [SLOW TEST:10.269 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 13:01:56.961: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a 
namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 1 13:01:57.048: INFO: Waiting up to 5m0s for pod "downwardapi-volume-33f92db9-2630-44bc-afde-becfb7e59bf5" in namespace "downward-api-8463" to be "success or failure" Apr 1 13:01:57.051: INFO: Pod "downwardapi-volume-33f92db9-2630-44bc-afde-becfb7e59bf5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.225637ms Apr 1 13:01:59.056: INFO: Pod "downwardapi-volume-33f92db9-2630-44bc-afde-becfb7e59bf5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007649138s Apr 1 13:02:01.060: INFO: Pod "downwardapi-volume-33f92db9-2630-44bc-afde-becfb7e59bf5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011996924s STEP: Saw pod success Apr 1 13:02:01.060: INFO: Pod "downwardapi-volume-33f92db9-2630-44bc-afde-becfb7e59bf5" satisfied condition "success or failure" Apr 1 13:02:01.063: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-33f92db9-2630-44bc-afde-becfb7e59bf5 container client-container: STEP: delete the pod Apr 1 13:02:01.097: INFO: Waiting for pod downwardapi-volume-33f92db9-2630-44bc-afde-becfb7e59bf5 to disappear Apr 1 13:02:01.112: INFO: Pod downwardapi-volume-33f92db9-2630-44bc-afde-becfb7e59bf5 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 13:02:01.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8463" for this suite. 
Apr 1 13:02:07.127: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 13:02:07.210: INFO: namespace downward-api-8463 deletion completed in 6.095218331s • [SLOW TEST:10.249 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 13:02:07.211: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 1 13:02:07.294: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fa530297-ee30-4fda-9fde-ccc6b3a4c6b7" in namespace "projected-6843" to be "success or failure" Apr 1 13:02:07.351: INFO: Pod "downwardapi-volume-fa530297-ee30-4fda-9fde-ccc6b3a4c6b7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 57.387947ms Apr 1 13:02:09.380: INFO: Pod "downwardapi-volume-fa530297-ee30-4fda-9fde-ccc6b3a4c6b7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086317645s Apr 1 13:02:11.385: INFO: Pod "downwardapi-volume-fa530297-ee30-4fda-9fde-ccc6b3a4c6b7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.091291554s STEP: Saw pod success Apr 1 13:02:11.385: INFO: Pod "downwardapi-volume-fa530297-ee30-4fda-9fde-ccc6b3a4c6b7" satisfied condition "success or failure" Apr 1 13:02:11.388: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-fa530297-ee30-4fda-9fde-ccc6b3a4c6b7 container client-container: STEP: delete the pod Apr 1 13:02:11.508: INFO: Waiting for pod downwardapi-volume-fa530297-ee30-4fda-9fde-ccc6b3a4c6b7 to disappear Apr 1 13:02:11.512: INFO: Pod downwardapi-volume-fa530297-ee30-4fda-9fde-ccc6b3a4c6b7 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 13:02:11.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6843" for this suite. 
Apr 1 13:02:17.527: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 13:02:17.600: INFO: namespace projected-6843 deletion completed in 6.082685243s • [SLOW TEST:10.389 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 13:02:17.601: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Apr 1 13:02:17.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-7337' Apr 1 13:02:17.780: INFO: stderr: "kubectl run 
--generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Apr 1 13:02:17.780: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617 Apr 1 13:02:17.787: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-7337' Apr 1 13:02:17.891: INFO: stderr: "" Apr 1 13:02:17.891: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 13:02:17.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7337" for this suite. Apr 1 13:02:23.903: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 13:02:23.999: INFO: namespace kubectl-7337 deletion completed in 6.104180933s • [SLOW TEST:6.398 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container 
Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 13:02:23.999: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 1 13:02:27.206: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 13:02:27.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6477" for this suite. 
Apr 1 13:02:33.234: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 13:02:33.320: INFO: namespace container-runtime-6477 deletion completed in 6.093988666s • [SLOW TEST:9.321 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 13:02:33.321: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-4926c29a-6933-44da-8a72-449a34b82afd STEP: Creating a pod to test consume secrets Apr 1 13:02:33.437: INFO: Waiting up to 5m0s for pod "pod-secrets-cf18721d-60f5-42f8-9fbb-ed5a0a5d613b" in namespace "secrets-1777" 
to be "success or failure" Apr 1 13:02:33.442: INFO: Pod "pod-secrets-cf18721d-60f5-42f8-9fbb-ed5a0a5d613b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.50153ms Apr 1 13:02:35.446: INFO: Pod "pod-secrets-cf18721d-60f5-42f8-9fbb-ed5a0a5d613b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00908624s Apr 1 13:02:37.450: INFO: Pod "pod-secrets-cf18721d-60f5-42f8-9fbb-ed5a0a5d613b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013167493s STEP: Saw pod success Apr 1 13:02:37.450: INFO: Pod "pod-secrets-cf18721d-60f5-42f8-9fbb-ed5a0a5d613b" satisfied condition "success or failure" Apr 1 13:02:37.452: INFO: Trying to get logs from node iruya-worker pod pod-secrets-cf18721d-60f5-42f8-9fbb-ed5a0a5d613b container secret-volume-test: STEP: delete the pod Apr 1 13:02:37.483: INFO: Waiting for pod pod-secrets-cf18721d-60f5-42f8-9fbb-ed5a0a5d613b to disappear Apr 1 13:02:37.496: INFO: Pod pod-secrets-cf18721d-60f5-42f8-9fbb-ed5a0a5d613b no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 13:02:37.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1777" for this suite. 
Apr 1 13:02:43.540: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 13:02:43.616: INFO: namespace secrets-1777 deletion completed in 6.11628672s
• [SLOW TEST:10.295 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application
  should create and stop a working application [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 13:02:43.616: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create and stop a working application [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating all guestbook components
Apr 1 13:02:43.671: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Apr 1 13:02:43.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1733'
Apr 1 13:02:43.986: INFO: stderr: ""
Apr 1 13:02:43.986: INFO: stdout: "service/redis-slave created\n"
Apr 1 13:02:43.986: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Apr 1 13:02:43.986: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1733'
Apr 1 13:02:44.290: INFO: stderr: ""
Apr 1 13:02:44.290: INFO: stdout: "service/redis-master created\n"
Apr 1 13:02:44.291: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Apr 1 13:02:44.291: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1733'
Apr 1 13:02:44.572: INFO: stderr: ""
Apr 1 13:02:44.572: INFO: stdout: "service/frontend created\n"
Apr 1 13:02:44.572: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Apr 1 13:02:44.572: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1733'
Apr 1 13:02:44.823: INFO: stderr: ""
Apr 1 13:02:44.823: INFO: stdout: "deployment.apps/frontend created\n"
Apr 1 13:02:44.823: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Apr 1 13:02:44.823: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1733'
Apr 1 13:02:45.119: INFO: stderr: ""
Apr 1 13:02:45.119: INFO: stdout: "deployment.apps/redis-master created\n"
Apr 1 13:02:45.119: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Apr 1 13:02:45.119: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1733'
Apr 1 13:02:45.377: INFO: stderr: ""
Apr 1 13:02:45.377: INFO: stdout: "deployment.apps/redis-slave created\n"
STEP: validating guestbook app
Apr 1 13:02:45.377: INFO: Waiting for all frontend pods to be Running.
Apr 1 13:02:55.428: INFO: Waiting for frontend to serve content.
Apr 1 13:02:55.445: INFO: Trying to add a new entry to the guestbook.
Apr 1 13:02:55.462: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Apr 1 13:02:55.474: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1733'
Apr 1 13:02:55.648: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated.
The resource may continue to run on the cluster indefinitely.\n" Apr 1 13:02:55.648: INFO: stdout: "service \"redis-slave\" force deleted\n" STEP: using delete to clean up resources Apr 1 13:02:55.649: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1733' Apr 1 13:02:55.808: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 1 13:02:55.808: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources Apr 1 13:02:55.808: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1733' Apr 1 13:02:55.923: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 1 13:02:55.923: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Apr 1 13:02:55.923: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1733' Apr 1 13:02:56.032: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 1 13:02:56.032: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Apr 1 13:02:56.032: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1733' Apr 1 13:02:56.128: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Apr 1 13:02:56.128: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n" STEP: using delete to clean up resources Apr 1 13:02:56.128: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1733' Apr 1 13:02:56.235: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 1 13:02:56.235: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 13:02:56.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1733" for this suite. Apr 1 13:03:34.308: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 13:03:34.385: INFO: namespace kubectl-1733 deletion completed in 38.119484262s • [SLOW TEST:50.769 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 
STEP: Creating a kubernetes client Apr 1 13:03:34.385: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Apr 1 13:03:34.448: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-8483' Apr 1 13:03:34.575: INFO: stderr: "" Apr 1 13:03:34.575: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod was created [AfterEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690 Apr 1 13:03:34.583: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-8483' Apr 1 13:03:41.881: INFO: stderr: "" Apr 1 13:03:41.881: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 13:03:41.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8483" for this suite. 
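For reference, the `kubectl run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1` invocation exercised above is roughly equivalent to applying a bare Pod manifest. A minimal sketch follows; the `run:` label and the container name match the generator's usual conventions but are assumptions here, not taken from the test output:

```yaml
# Sketch of the Pod that `kubectl run --restart=Never --generator=run-pod/v1`
# produces; only the image, name, and restart policy are confirmed by the log.
apiVersion: v1
kind: Pod
metadata:
  name: e2e-test-nginx-pod
  namespace: kubectl-8483
  labels:
    run: e2e-test-nginx-pod   # assumed: label the generator normally applies
spec:
  restartPolicy: Never        # --restart=Never
  containers:
  - name: e2e-test-nginx-pod
    image: docker.io/library/nginx:1.14-alpine
```

Because `restartPolicy` is `Never`, the kubelet will not restart the container after it exits; the test only verifies that the pod object is created and can be deleted.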
Apr 1 13:03:47.901: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 13:03:47.975: INFO: namespace kubectl-8483 deletion completed in 6.088472719s • [SLOW TEST:13.590 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 13:03:47.975: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 1 13:03:48.052: INFO: Waiting up to 5m0s for pod "downwardapi-volume-645c72f4-9cba-49b6-b549-375df4705a79" in namespace "projected-6241" to be "success or failure" Apr 1 13:03:48.069: INFO: Pod "downwardapi-volume-645c72f4-9cba-49b6-b549-375df4705a79": Phase="Pending", Reason="", readiness=false. 
Elapsed: 17.584531ms Apr 1 13:03:50.124: INFO: Pod "downwardapi-volume-645c72f4-9cba-49b6-b549-375df4705a79": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072423991s Apr 1 13:03:52.128: INFO: Pod "downwardapi-volume-645c72f4-9cba-49b6-b549-375df4705a79": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.076431991s STEP: Saw pod success Apr 1 13:03:52.128: INFO: Pod "downwardapi-volume-645c72f4-9cba-49b6-b549-375df4705a79" satisfied condition "success or failure" Apr 1 13:03:52.132: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-645c72f4-9cba-49b6-b549-375df4705a79 container client-container: STEP: delete the pod Apr 1 13:03:52.168: INFO: Waiting for pod downwardapi-volume-645c72f4-9cba-49b6-b549-375df4705a79 to disappear Apr 1 13:03:52.182: INFO: Pod downwardapi-volume-645c72f4-9cba-49b6-b549-375df4705a79 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 13:03:52.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6241" for this suite. 
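The pod created by this downward API volume test exposes the container's CPU limit as a file through a projected volume. A hedged sketch of that kind of pod follows; the file path, image, and limit value are illustrative assumptions, not the test's exact spec:

```yaml
# Illustrative pod using a projected downwardAPI volume to surface the
# container's CPU limit as a file; names and values here are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "500m"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
              divisor: 1m   # report the limit in millicores
```

The companion memory-request test that follows works the same way, with `resource: requests.memory` in the `resourceFieldRef`.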
Apr 1 13:03:58.203: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 13:03:58.281: INFO: namespace projected-6241 deletion completed in 6.095656401s • [SLOW TEST:10.305 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 13:03:58.281: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 1 13:03:58.340: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b8f2111a-f9ae-4b09-b565-106b80583980" in namespace "projected-4185" to be "success or failure" Apr 1 13:03:58.344: INFO: Pod "downwardapi-volume-b8f2111a-f9ae-4b09-b565-106b80583980": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.271095ms Apr 1 13:04:00.348: INFO: Pod "downwardapi-volume-b8f2111a-f9ae-4b09-b565-106b80583980": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007390437s Apr 1 13:04:02.352: INFO: Pod "downwardapi-volume-b8f2111a-f9ae-4b09-b565-106b80583980": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012007569s STEP: Saw pod success Apr 1 13:04:02.352: INFO: Pod "downwardapi-volume-b8f2111a-f9ae-4b09-b565-106b80583980" satisfied condition "success or failure" Apr 1 13:04:02.355: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-b8f2111a-f9ae-4b09-b565-106b80583980 container client-container: STEP: delete the pod Apr 1 13:04:02.386: INFO: Waiting for pod downwardapi-volume-b8f2111a-f9ae-4b09-b565-106b80583980 to disappear Apr 1 13:04:02.398: INFO: Pod downwardapi-volume-b8f2111a-f9ae-4b09-b565-106b80583980 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 13:04:02.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4185" for this suite. 
Apr 1 13:04:08.414: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 13:04:08.491: INFO: namespace projected-4185 deletion completed in 6.088539909s • [SLOW TEST:10.210 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 13:04:08.492: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting the proxy server Apr 1 13:04:08.544: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 13:04:08.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4247" for this suite. 
Apr 1 13:04:14.647: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 13:04:14.729: INFO: namespace kubectl-4247 deletion completed in 6.095408446s • [SLOW TEST:6.237 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 13:04:14.729: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-841b7da4-26d7-4591-9588-c04a7f65f200 in namespace container-probe-6177 Apr 1 13:04:18.818: INFO: Started pod busybox-841b7da4-26d7-4591-9588-c04a7f65f200 in namespace container-probe-6177 STEP: checking the pod's current state and verifying that restartCount is present Apr 1 13:04:18.822: INFO: Initial restart count of pod 
busybox-841b7da4-26d7-4591-9588-c04a7f65f200 is 0 Apr 1 13:05:12.996: INFO: Restart count of pod container-probe-6177/busybox-841b7da4-26d7-4591-9588-c04a7f65f200 is now 1 (54.173665234s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 13:05:13.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6177" for this suite. Apr 1 13:05:19.055: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 13:05:19.129: INFO: namespace container-probe-6177 deletion completed in 6.111584966s • [SLOW TEST:64.400 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 13:05:19.129: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name 
configmap-test-volume-2d6362c6-7931-4932-8d82-897ad9ed5b54 STEP: Creating a pod to test consume configMaps Apr 1 13:05:19.212: INFO: Waiting up to 5m0s for pod "pod-configmaps-6d5467e0-a5e3-40b7-b3ff-8a915f7fd21c" in namespace "configmap-8194" to be "success or failure" Apr 1 13:05:19.216: INFO: Pod "pod-configmaps-6d5467e0-a5e3-40b7-b3ff-8a915f7fd21c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.716937ms Apr 1 13:05:21.220: INFO: Pod "pod-configmaps-6d5467e0-a5e3-40b7-b3ff-8a915f7fd21c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007424302s Apr 1 13:05:23.224: INFO: Pod "pod-configmaps-6d5467e0-a5e3-40b7-b3ff-8a915f7fd21c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011342413s STEP: Saw pod success Apr 1 13:05:23.224: INFO: Pod "pod-configmaps-6d5467e0-a5e3-40b7-b3ff-8a915f7fd21c" satisfied condition "success or failure" Apr 1 13:05:23.226: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-6d5467e0-a5e3-40b7-b3ff-8a915f7fd21c container configmap-volume-test: STEP: delete the pod Apr 1 13:05:23.266: INFO: Waiting for pod pod-configmaps-6d5467e0-a5e3-40b7-b3ff-8a915f7fd21c to disappear Apr 1 13:05:23.342: INFO: Pod pod-configmaps-6d5467e0-a5e3-40b7-b3ff-8a915f7fd21c no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 13:05:23.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8194" for this suite. 
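The ConfigMap-as-volume test above mounts a ConfigMap into a pod running as a non-root user and reads a key back as a file. A minimal sketch of that shape, assuming illustrative names, data, and UID (the test's actual values are generated):

```yaml
# Illustrative ConfigMap consumed as a volume by a non-root pod; the key,
# value, UID, and mount path are assumptions mirroring the test's pattern.
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example
spec:
  securityContext:
    runAsUser: 1000          # non-root, per the [LinuxOnly] non-root variant
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume
```

Each key in the ConfigMap appears as a file under the mount path, so the container's `cat` exits 0 and the pod reaches `Succeeded`, which is the "success or failure" condition the test waits on.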
Apr 1 13:05:29.369: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 13:05:29.469: INFO: namespace configmap-8194 deletion completed in 6.122536776s • [SLOW TEST:10.340 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 13:05:29.470: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Apr 1 13:05:29.516: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 1 13:05:29.551: INFO: Waiting for terminating namespaces to be deleted... 
Apr 1 13:05:29.554: INFO: Logging pods the kubelet thinks is on node iruya-worker before test Apr 1 13:05:29.561: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Apr 1 13:05:29.561: INFO: Container kube-proxy ready: true, restart count 0 Apr 1 13:05:29.561: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Apr 1 13:05:29.561: INFO: Container kindnet-cni ready: true, restart count 0 Apr 1 13:05:29.561: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test Apr 1 13:05:29.568: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded) Apr 1 13:05:29.568: INFO: Container kube-proxy ready: true, restart count 0 Apr 1 13:05:29.568: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded) Apr 1 13:05:29.568: INFO: Container kindnet-cni ready: true, restart count 0 Apr 1 13:05:29.568: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded) Apr 1 13:05:29.568: INFO: Container coredns ready: true, restart count 0 Apr 1 13:05:29.568: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded) Apr 1 13:05:29.568: INFO: Container coredns ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: verifying the node has the label node iruya-worker STEP: verifying the node has the label node iruya-worker2 Apr 1 13:05:29.629: INFO: Pod coredns-5d4dd4b4db-6jcgz requesting resource cpu=100m on Node iruya-worker2 Apr 1 13:05:29.629: INFO: Pod coredns-5d4dd4b4db-gm7vr requesting resource cpu=100m on Node iruya-worker2 Apr 1 13:05:29.629: INFO: Pod kindnet-gwz5g 
requesting resource cpu=100m on Node iruya-worker Apr 1 13:05:29.629: INFO: Pod kindnet-mgd8b requesting resource cpu=100m on Node iruya-worker2 Apr 1 13:05:29.629: INFO: Pod kube-proxy-pmz4p requesting resource cpu=0m on Node iruya-worker Apr 1 13:05:29.629: INFO: Pod kube-proxy-vwbcj requesting resource cpu=0m on Node iruya-worker2 STEP: Starting Pods to consume most of the cluster CPU. STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-1e81e2e9-653f-4c6d-be02-70532f556270.1601b3e29a3a1034], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2817/filler-pod-1e81e2e9-653f-4c6d-be02-70532f556270 to iruya-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-1e81e2e9-653f-4c6d-be02-70532f556270.1601b3e31de3b97f], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-1e81e2e9-653f-4c6d-be02-70532f556270.1601b3e36384b0aa], Reason = [Created], Message = [Created container filler-pod-1e81e2e9-653f-4c6d-be02-70532f556270] STEP: Considering event: Type = [Normal], Name = [filler-pod-1e81e2e9-653f-4c6d-be02-70532f556270.1601b3e371372095], Reason = [Started], Message = [Started container filler-pod-1e81e2e9-653f-4c6d-be02-70532f556270] STEP: Considering event: Type = [Normal], Name = [filler-pod-e51d58fd-d96f-4a47-8edf-8f5be465bc72.1601b3e29a948d88], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2817/filler-pod-e51d58fd-d96f-4a47-8edf-8f5be465bc72 to iruya-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-e51d58fd-d96f-4a47-8edf-8f5be465bc72.1601b3e2e89f2476], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-e51d58fd-d96f-4a47-8edf-8f5be465bc72.1601b3e33be20c44], Reason = [Created], Message = [Created container 
filler-pod-e51d58fd-d96f-4a47-8edf-8f5be465bc72] STEP: Considering event: Type = [Normal], Name = [filler-pod-e51d58fd-d96f-4a47-8edf-8f5be465bc72.1601b3e34adb2811], Reason = [Started], Message = [Started container filler-pod-e51d58fd-d96f-4a47-8edf-8f5be465bc72] STEP: Considering event: Type = [Warning], Name = [additional-pod.1601b3e4013059e8], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node iruya-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node iruya-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 13:05:36.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2817" for this suite. Apr 1 13:05:42.762: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 13:05:42.827: INFO: namespace sched-pred-2817 deletion completed in 6.076258277s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:13.357 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 13:05:42.827: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-1747 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 1 13:05:42.881: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Apr 1 13:06:08.982: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.170:8080/dial?request=hostName&protocol=http&host=10.244.1.149&port=8080&tries=1'] Namespace:pod-network-test-1747 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 1 13:06:08.982: INFO: >>> kubeConfig: /root/.kube/config I0401 13:06:09.008158 6 log.go:172] (0xc0007de210) (0xc002c1e6e0) Create stream I0401 13:06:09.008186 6 log.go:172] (0xc0007de210) (0xc002c1e6e0) Stream added, broadcasting: 1 I0401 13:06:09.010182 6 log.go:172] (0xc0007de210) Reply frame received for 1 I0401 13:06:09.010224 6 log.go:172] (0xc0007de210) (0xc002c1e780) Create stream I0401 13:06:09.010238 6 log.go:172] (0xc0007de210) (0xc002c1e780) Stream added, broadcasting: 3 I0401 13:06:09.011316 6 log.go:172] (0xc0007de210) Reply frame received for 3 I0401 13:06:09.011339 6 log.go:172] (0xc0007de210) (0xc002c1e820) Create stream I0401 13:06:09.011349 6 log.go:172] (0xc0007de210) (0xc002c1e820) Stream added, 
broadcasting: 5 I0401 13:06:09.012160 6 log.go:172] (0xc0007de210) Reply frame received for 5 I0401 13:06:09.104312 6 log.go:172] (0xc0007de210) Data frame received for 3 I0401 13:06:09.104355 6 log.go:172] (0xc002c1e780) (3) Data frame handling I0401 13:06:09.104377 6 log.go:172] (0xc002c1e780) (3) Data frame sent I0401 13:06:09.104771 6 log.go:172] (0xc0007de210) Data frame received for 3 I0401 13:06:09.104802 6 log.go:172] (0xc002c1e780) (3) Data frame handling I0401 13:06:09.105063 6 log.go:172] (0xc0007de210) Data frame received for 5 I0401 13:06:09.105108 6 log.go:172] (0xc002c1e820) (5) Data frame handling I0401 13:06:09.107139 6 log.go:172] (0xc0007de210) Data frame received for 1 I0401 13:06:09.107169 6 log.go:172] (0xc002c1e6e0) (1) Data frame handling I0401 13:06:09.107182 6 log.go:172] (0xc002c1e6e0) (1) Data frame sent I0401 13:06:09.107219 6 log.go:172] (0xc0007de210) (0xc002c1e6e0) Stream removed, broadcasting: 1 I0401 13:06:09.107247 6 log.go:172] (0xc0007de210) Go away received I0401 13:06:09.107323 6 log.go:172] (0xc0007de210) (0xc002c1e6e0) Stream removed, broadcasting: 1 I0401 13:06:09.107357 6 log.go:172] (0xc0007de210) (0xc002c1e780) Stream removed, broadcasting: 3 I0401 13:06:09.107379 6 log.go:172] (0xc0007de210) (0xc002c1e820) Stream removed, broadcasting: 5 Apr 1 13:06:09.107: INFO: Waiting for endpoints: map[] Apr 1 13:06:09.111: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.170:8080/dial?request=hostName&protocol=http&host=10.244.2.169&port=8080&tries=1'] Namespace:pod-network-test-1747 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 1 13:06:09.111: INFO: >>> kubeConfig: /root/.kube/config I0401 13:06:09.138361 6 log.go:172] (0xc0007469a0) (0xc0011e4dc0) Create stream I0401 13:06:09.138396 6 log.go:172] (0xc0007469a0) (0xc0011e4dc0) Stream added, broadcasting: 1 I0401 13:06:09.140710 6 log.go:172] (0xc0007469a0) Reply frame 
received for 1 I0401 13:06:09.140750 6 log.go:172] (0xc0007469a0) (0xc0018125a0) Create stream I0401 13:06:09.140763 6 log.go:172] (0xc0007469a0) (0xc0018125a0) Stream added, broadcasting: 3 I0401 13:06:09.141983 6 log.go:172] (0xc0007469a0) Reply frame received for 3 I0401 13:06:09.142044 6 log.go:172] (0xc0007469a0) (0xc0011e4e60) Create stream I0401 13:06:09.142061 6 log.go:172] (0xc0007469a0) (0xc0011e4e60) Stream added, broadcasting: 5 I0401 13:06:09.143039 6 log.go:172] (0xc0007469a0) Reply frame received for 5 I0401 13:06:09.218642 6 log.go:172] (0xc0007469a0) Data frame received for 3 I0401 13:06:09.218683 6 log.go:172] (0xc0018125a0) (3) Data frame handling I0401 13:06:09.218706 6 log.go:172] (0xc0018125a0) (3) Data frame sent I0401 13:06:09.219480 6 log.go:172] (0xc0007469a0) Data frame received for 3 I0401 13:06:09.219503 6 log.go:172] (0xc0018125a0) (3) Data frame handling I0401 13:06:09.219530 6 log.go:172] (0xc0007469a0) Data frame received for 5 I0401 13:06:09.219547 6 log.go:172] (0xc0011e4e60) (5) Data frame handling I0401 13:06:09.221621 6 log.go:172] (0xc0007469a0) Data frame received for 1 I0401 13:06:09.221653 6 log.go:172] (0xc0011e4dc0) (1) Data frame handling I0401 13:06:09.221671 6 log.go:172] (0xc0011e4dc0) (1) Data frame sent I0401 13:06:09.221696 6 log.go:172] (0xc0007469a0) (0xc0011e4dc0) Stream removed, broadcasting: 1 I0401 13:06:09.221811 6 log.go:172] (0xc0007469a0) (0xc0011e4dc0) Stream removed, broadcasting: 1 I0401 13:06:09.221835 6 log.go:172] (0xc0007469a0) (0xc0018125a0) Stream removed, broadcasting: 3 I0401 13:06:09.222047 6 log.go:172] (0xc0007469a0) (0xc0011e4e60) Stream removed, broadcasting: 5 Apr 1 13:06:09.222: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 13:06:09.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready I0401 13:06:09.222525 6 log.go:172] (0xc0007469a0) Go away 
received STEP: Destroying namespace "pod-network-test-1747" for this suite. Apr 1 13:06:31.239: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 13:06:31.317: INFO: namespace pod-network-test-1747 deletion completed in 22.091137202s • [SLOW TEST:48.490 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 13:06:31.317: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-c8664f79-4f19-43b3-a937-1abc5ef3694d in namespace container-probe-2824 Apr 1 13:06:35.413: INFO: Started pod liveness-c8664f79-4f19-43b3-a937-1abc5ef3694d in namespace container-probe-2824 STEP: checking the pod's current state and verifying 
that restartCount is present Apr 1 13:06:35.415: INFO: Initial restart count of pod liveness-c8664f79-4f19-43b3-a937-1abc5ef3694d is 0 Apr 1 13:06:53.502: INFO: Restart count of pod container-probe-2824/liveness-c8664f79-4f19-43b3-a937-1abc5ef3694d is now 1 (18.086841051s elapsed) Apr 1 13:07:11.543: INFO: Restart count of pod container-probe-2824/liveness-c8664f79-4f19-43b3-a937-1abc5ef3694d is now 2 (36.127621804s elapsed) Apr 1 13:07:31.636: INFO: Restart count of pod container-probe-2824/liveness-c8664f79-4f19-43b3-a937-1abc5ef3694d is now 3 (56.220650169s elapsed) Apr 1 13:07:53.709: INFO: Restart count of pod container-probe-2824/liveness-c8664f79-4f19-43b3-a937-1abc5ef3694d is now 4 (1m18.293580806s elapsed) Apr 1 13:09:03.931: INFO: Restart count of pod container-probe-2824/liveness-c8664f79-4f19-43b3-a937-1abc5ef3694d is now 5 (2m28.515477159s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 13:09:03.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2824" for this suite. 
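An aside on the restart-count lines above: the conformance check only asserts that the observed count never decreases between polls. A minimal sketch of that invariant, parsing log lines of the exact shape emitted above (the helper names below are illustrative, not part of the e2e framework):

```python
import re

# Observations in the format logged above by the container-probe test.
LOG_LINES = [
    'Restart count of pod container-probe-2824/liveness-c8664f79-4f19-43b3-a937-1abc5ef3694d is now 1 (18.086841051s elapsed)',
    'Restart count of pod container-probe-2824/liveness-c8664f79-4f19-43b3-a937-1abc5ef3694d is now 2 (36.127621804s elapsed)',
    'Restart count of pod container-probe-2824/liveness-c8664f79-4f19-43b3-a937-1abc5ef3694d is now 5 (2m28.515477159s elapsed)',
]

def restart_counts(lines):
    """Extract the integer restart counts from e2e-style log lines."""
    pattern = re.compile(r'is now (\d+) \(')
    return [int(m.group(1)) for line in lines if (m := pattern.search(line))]

def is_monotonic(counts):
    """True when each observed count is >= the previous one,
    which is what 'monotonically increasing restart count' asserts."""
    return all(a <= b for a, b in zip(counts, counts[1:]))

print(restart_counts(LOG_LINES), is_monotonic(restart_counts(LOG_LINES)))
```

Note that gaps between observations (1, 2, 5 above) are fine: the poller can miss intermediate restarts, so the test checks ordering, not density.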
Apr 1 13:09:10.038: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 13:09:10.130: INFO: namespace container-probe-2824 deletion completed in 6.126009118s • [SLOW TEST:158.813 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 13:09:10.130: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 1 13:09:10.192: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d0906fe6-8426-4d5c-be75-e093d6fd5a9a" in namespace "projected-4647" to be "success or failure" Apr 1 13:09:10.196: INFO: Pod "downwardapi-volume-d0906fe6-8426-4d5c-be75-e093d6fd5a9a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.604914ms Apr 1 13:09:12.200: INFO: Pod "downwardapi-volume-d0906fe6-8426-4d5c-be75-e093d6fd5a9a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008041804s Apr 1 13:09:14.205: INFO: Pod "downwardapi-volume-d0906fe6-8426-4d5c-be75-e093d6fd5a9a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012569055s STEP: Saw pod success Apr 1 13:09:14.205: INFO: Pod "downwardapi-volume-d0906fe6-8426-4d5c-be75-e093d6fd5a9a" satisfied condition "success or failure" Apr 1 13:09:14.208: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-d0906fe6-8426-4d5c-be75-e093d6fd5a9a container client-container: STEP: delete the pod Apr 1 13:09:14.227: INFO: Waiting for pod downwardapi-volume-d0906fe6-8426-4d5c-be75-e093d6fd5a9a to disappear Apr 1 13:09:14.231: INFO: Pod downwardapi-volume-d0906fe6-8426-4d5c-be75-e093d6fd5a9a no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 13:09:14.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4647" for this suite. 
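The wait loop above polls the pod phase until the "success or failure" condition is met, i.e. until the pod reaches a terminal phase (Succeeded, or Failed, which the test would then report). A small illustrative parser for the `Phase="…"` lines printed above (ad hoc helpers, not framework code):

```python
import re

# Phase observations in the format logged above by the projected-downwardAPI test.
OBSERVATIONS = [
    'Pod "downwardapi-volume-d0906fe6-8426-4d5c-be75-e093d6fd5a9a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.604914ms',
    'Pod "downwardapi-volume-d0906fe6-8426-4d5c-be75-e093d6fd5a9a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008041804s',
    'Pod "downwardapi-volume-d0906fe6-8426-4d5c-be75-e093d6fd5a9a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012569055s',
]

def final_phase(lines):
    """Return the last observed pod phase, or None if no Phase= line matched."""
    pattern = re.compile(r'Phase="(\w+)"')
    phases = [m.group(1) for line in lines if (m := pattern.search(line))]
    return phases[-1] if phases else None

def condition_satisfied(phase):
    # "success or failure": any terminal phase ends the wait.
    return phase in ("Succeeded", "Failed")

print(final_phase(OBSERVATIONS), condition_satisfied(final_phase(OBSERVATIONS)))
```

This is why `readiness=false` on the final line is not an error: a Succeeded pod's container has exited, so it is terminal rather than ready.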
Apr 1 13:09:20.260: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 13:09:20.336: INFO: namespace projected-4647 deletion completed in 6.101198062s • [SLOW TEST:10.206 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 13:09:20.336: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-h8rmd in namespace proxy-9061 I0401 13:09:20.453315 6 runners.go:180] Created replication controller with name: proxy-service-h8rmd, namespace: proxy-9061, replica count: 1 I0401 13:09:21.503755 6 runners.go:180] proxy-service-h8rmd Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0401 13:09:22.503934 6 runners.go:180] proxy-service-h8rmd Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0401 13:09:23.504107 6 
runners.go:180] proxy-service-h8rmd Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0401 13:09:24.504358 6 runners.go:180] proxy-service-h8rmd Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0401 13:09:25.504605 6 runners.go:180] proxy-service-h8rmd Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0401 13:09:26.504825 6 runners.go:180] proxy-service-h8rmd Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0401 13:09:27.504994 6 runners.go:180] proxy-service-h8rmd Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0401 13:09:28.505336 6 runners.go:180] proxy-service-h8rmd Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0401 13:09:29.505548 6 runners.go:180] proxy-service-h8rmd Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0401 13:09:30.505769 6 runners.go:180] proxy-service-h8rmd Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0401 13:09:31.506036 6 runners.go:180] proxy-service-h8rmd Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 1 13:09:31.509: INFO: setup took 11.10770844s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Apr 1 13:09:31.518: INFO: (0) /api/v1/namespaces/proxy-9061/services/http:proxy-service-h8rmd:portname1/proxy/: foo (200; 8.504269ms) Apr 1 13:09:31.518: INFO: (0) /api/v1/namespaces/proxy-9061/pods/proxy-service-h8rmd-pnscm:160/proxy/: foo (200; 9.017876ms) Apr 1 13:09:31.519: INFO: 
(0) /api/v1/namespaces/proxy-9061/pods/proxy-service-h8rmd-pnscm/proxy/: test (200; 9.0499ms) Apr 1 13:09:31.519: INFO: (0) /api/v1/namespaces/proxy-9061/pods/proxy-service-h8rmd-pnscm:1080/proxy/: test<... (200; 8.991857ms) Apr 1 13:09:31.521: INFO: (0) /api/v1/namespaces/proxy-9061/services/proxy-service-h8rmd:portname2/proxy/: bar (200; 11.507009ms) Apr 1 13:09:31.521: INFO: (0) /api/v1/namespaces/proxy-9061/pods/http:proxy-service-h8rmd-pnscm:162/proxy/: bar (200; 11.686628ms) Apr 1 13:09:31.521: INFO: (0) /api/v1/namespaces/proxy-9061/services/proxy-service-h8rmd:portname1/proxy/: foo (200; 11.810885ms) Apr 1 13:09:31.521: INFO: (0) /api/v1/namespaces/proxy-9061/pods/http:proxy-service-h8rmd-pnscm:160/proxy/: foo (200; 11.682268ms) Apr 1 13:09:31.521: INFO: (0) /api/v1/namespaces/proxy-9061/pods/http:proxy-service-h8rmd-pnscm:1080/proxy/: ... (200; 11.704378ms) Apr 1 13:09:31.522: INFO: (0) /api/v1/namespaces/proxy-9061/services/http:proxy-service-h8rmd:portname2/proxy/: bar (200; 12.481608ms) Apr 1 13:09:31.522: INFO: (0) /api/v1/namespaces/proxy-9061/pods/proxy-service-h8rmd-pnscm:162/proxy/: bar (200; 12.364533ms) Apr 1 13:09:31.523: INFO: (0) /api/v1/namespaces/proxy-9061/pods/https:proxy-service-h8rmd-pnscm:460/proxy/: tls baz (200; 13.283141ms) Apr 1 13:09:31.525: INFO: (0) /api/v1/namespaces/proxy-9061/services/https:proxy-service-h8rmd:tlsportname2/proxy/: tls qux (200; 15.391643ms) Apr 1 13:09:31.525: INFO: (0) /api/v1/namespaces/proxy-9061/pods/https:proxy-service-h8rmd-pnscm:443/proxy/: test (200; 4.167945ms) Apr 1 13:09:31.532: INFO: (1) /api/v1/namespaces/proxy-9061/pods/https:proxy-service-h8rmd-pnscm:443/proxy/: test<... (200; 4.322633ms) Apr 1 13:09:31.532: INFO: (1) /api/v1/namespaces/proxy-9061/pods/http:proxy-service-h8rmd-pnscm:160/proxy/: foo (200; 4.357817ms) Apr 1 13:09:31.532: INFO: (1) /api/v1/namespaces/proxy-9061/pods/http:proxy-service-h8rmd-pnscm:1080/proxy/: ... 
(200; 4.396642ms) Apr 1 13:09:31.533: INFO: (1) /api/v1/namespaces/proxy-9061/services/https:proxy-service-h8rmd:tlsportname2/proxy/: tls qux (200; 5.010144ms) Apr 1 13:09:31.533: INFO: (1) /api/v1/namespaces/proxy-9061/services/https:proxy-service-h8rmd:tlsportname1/proxy/: tls baz (200; 5.164886ms) Apr 1 13:09:31.533: INFO: (1) /api/v1/namespaces/proxy-9061/services/http:proxy-service-h8rmd:portname2/proxy/: bar (200; 5.222308ms) Apr 1 13:09:31.533: INFO: (1) /api/v1/namespaces/proxy-9061/pods/http:proxy-service-h8rmd-pnscm:162/proxy/: bar (200; 5.210486ms) Apr 1 13:09:31.533: INFO: (1) /api/v1/namespaces/proxy-9061/pods/proxy-service-h8rmd-pnscm:160/proxy/: foo (200; 5.196844ms) Apr 1 13:09:31.533: INFO: (1) /api/v1/namespaces/proxy-9061/services/proxy-service-h8rmd:portname1/proxy/: foo (200; 5.172645ms) Apr 1 13:09:31.533: INFO: (1) /api/v1/namespaces/proxy-9061/services/http:proxy-service-h8rmd:portname1/proxy/: foo (200; 5.249827ms) Apr 1 13:09:31.533: INFO: (1) /api/v1/namespaces/proxy-9061/pods/https:proxy-service-h8rmd-pnscm:460/proxy/: tls baz (200; 5.357564ms) Apr 1 13:09:31.533: INFO: (1) /api/v1/namespaces/proxy-9061/services/proxy-service-h8rmd:portname2/proxy/: bar (200; 5.419954ms) Apr 1 13:09:31.536: INFO: (2) /api/v1/namespaces/proxy-9061/pods/http:proxy-service-h8rmd-pnscm:162/proxy/: bar (200; 2.550218ms) Apr 1 13:09:31.537: INFO: (2) /api/v1/namespaces/proxy-9061/pods/proxy-service-h8rmd-pnscm:160/proxy/: foo (200; 3.952541ms) Apr 1 13:09:31.537: INFO: (2) /api/v1/namespaces/proxy-9061/pods/https:proxy-service-h8rmd-pnscm:460/proxy/: tls baz (200; 4.310959ms) Apr 1 13:09:31.538: INFO: (2) /api/v1/namespaces/proxy-9061/pods/proxy-service-h8rmd-pnscm/proxy/: test (200; 4.938547ms) Apr 1 13:09:31.538: INFO: (2) /api/v1/namespaces/proxy-9061/pods/https:proxy-service-h8rmd-pnscm:462/proxy/: tls qux (200; 5.039536ms) Apr 1 13:09:31.538: INFO: (2) /api/v1/namespaces/proxy-9061/pods/http:proxy-service-h8rmd-pnscm:1080/proxy/: ... 
(200; 5.048473ms) Apr 1 13:09:31.538: INFO: (2) /api/v1/namespaces/proxy-9061/pods/https:proxy-service-h8rmd-pnscm:443/proxy/: test<... (200; 5.175803ms) Apr 1 13:09:31.538: INFO: (2) /api/v1/namespaces/proxy-9061/pods/http:proxy-service-h8rmd-pnscm:160/proxy/: foo (200; 5.207956ms) Apr 1 13:09:31.538: INFO: (2) /api/v1/namespaces/proxy-9061/pods/proxy-service-h8rmd-pnscm:162/proxy/: bar (200; 5.212006ms) Apr 1 13:09:31.539: INFO: (2) /api/v1/namespaces/proxy-9061/services/proxy-service-h8rmd:portname2/proxy/: bar (200; 5.491331ms) Apr 1 13:09:31.539: INFO: (2) /api/v1/namespaces/proxy-9061/services/http:proxy-service-h8rmd:portname2/proxy/: bar (200; 5.704343ms) Apr 1 13:09:31.539: INFO: (2) /api/v1/namespaces/proxy-9061/services/https:proxy-service-h8rmd:tlsportname2/proxy/: tls qux (200; 6.249141ms) Apr 1 13:09:31.539: INFO: (2) /api/v1/namespaces/proxy-9061/services/http:proxy-service-h8rmd:portname1/proxy/: foo (200; 6.228419ms) Apr 1 13:09:31.539: INFO: (2) /api/v1/namespaces/proxy-9061/services/proxy-service-h8rmd:portname1/proxy/: foo (200; 6.295107ms) Apr 1 13:09:31.543: INFO: (3) /api/v1/namespaces/proxy-9061/pods/https:proxy-service-h8rmd-pnscm:460/proxy/: tls baz (200; 3.44495ms) Apr 1 13:09:31.543: INFO: (3) /api/v1/namespaces/proxy-9061/pods/https:proxy-service-h8rmd-pnscm:462/proxy/: tls qux (200; 3.422342ms) Apr 1 13:09:31.543: INFO: (3) /api/v1/namespaces/proxy-9061/pods/proxy-service-h8rmd-pnscm:162/proxy/: bar (200; 3.561189ms) Apr 1 13:09:31.543: INFO: (3) /api/v1/namespaces/proxy-9061/pods/https:proxy-service-h8rmd-pnscm:443/proxy/: test (200; 3.85536ms) Apr 1 13:09:31.544: INFO: (3) /api/v1/namespaces/proxy-9061/pods/http:proxy-service-h8rmd-pnscm:1080/proxy/: ... 
(200; 4.599774ms) Apr 1 13:09:31.545: INFO: (3) /api/v1/namespaces/proxy-9061/services/http:proxy-service-h8rmd:portname2/proxy/: bar (200; 5.574709ms) Apr 1 13:09:31.546: INFO: (3) /api/v1/namespaces/proxy-9061/services/https:proxy-service-h8rmd:tlsportname1/proxy/: tls baz (200; 5.896689ms) Apr 1 13:09:31.546: INFO: (3) /api/v1/namespaces/proxy-9061/services/proxy-service-h8rmd:portname1/proxy/: foo (200; 5.954482ms) Apr 1 13:09:31.546: INFO: (3) /api/v1/namespaces/proxy-9061/services/http:proxy-service-h8rmd:portname1/proxy/: foo (200; 5.986351ms) Apr 1 13:09:31.546: INFO: (3) /api/v1/namespaces/proxy-9061/pods/proxy-service-h8rmd-pnscm:160/proxy/: foo (200; 6.058333ms) Apr 1 13:09:31.546: INFO: (3) /api/v1/namespaces/proxy-9061/services/proxy-service-h8rmd:portname2/proxy/: bar (200; 6.063906ms) Apr 1 13:09:31.546: INFO: (3) /api/v1/namespaces/proxy-9061/services/https:proxy-service-h8rmd:tlsportname2/proxy/: tls qux (200; 6.012482ms) Apr 1 13:09:31.546: INFO: (3) /api/v1/namespaces/proxy-9061/pods/proxy-service-h8rmd-pnscm:1080/proxy/: test<... (200; 6.008585ms) Apr 1 13:09:31.548: INFO: (4) /api/v1/namespaces/proxy-9061/pods/proxy-service-h8rmd-pnscm:1080/proxy/: test<... (200; 2.219738ms) Apr 1 13:09:31.548: INFO: (4) /api/v1/namespaces/proxy-9061/pods/https:proxy-service-h8rmd-pnscm:460/proxy/: tls baz (200; 2.679062ms) Apr 1 13:09:31.549: INFO: (4) /api/v1/namespaces/proxy-9061/pods/http:proxy-service-h8rmd-pnscm:162/proxy/: bar (200; 3.717055ms) Apr 1 13:09:31.550: INFO: (4) /api/v1/namespaces/proxy-9061/pods/proxy-service-h8rmd-pnscm:160/proxy/: foo (200; 3.688786ms) Apr 1 13:09:31.550: INFO: (4) /api/v1/namespaces/proxy-9061/pods/http:proxy-service-h8rmd-pnscm:1080/proxy/: ... 
(200; 3.737845ms) Apr 1 13:09:31.550: INFO: (4) /api/v1/namespaces/proxy-9061/pods/http:proxy-service-h8rmd-pnscm:160/proxy/: foo (200; 4.272854ms) Apr 1 13:09:31.550: INFO: (4) /api/v1/namespaces/proxy-9061/services/proxy-service-h8rmd:portname2/proxy/: bar (200; 4.385173ms) Apr 1 13:09:31.550: INFO: (4) /api/v1/namespaces/proxy-9061/pods/https:proxy-service-h8rmd-pnscm:462/proxy/: tls qux (200; 4.301482ms) Apr 1 13:09:31.550: INFO: (4) /api/v1/namespaces/proxy-9061/services/proxy-service-h8rmd:portname1/proxy/: foo (200; 4.422988ms) Apr 1 13:09:31.550: INFO: (4) /api/v1/namespaces/proxy-9061/pods/https:proxy-service-h8rmd-pnscm:443/proxy/: test (200; 5.104966ms) Apr 1 13:09:31.551: INFO: (4) /api/v1/namespaces/proxy-9061/services/https:proxy-service-h8rmd:tlsportname2/proxy/: tls qux (200; 5.176699ms) Apr 1 13:09:31.556: INFO: (5) /api/v1/namespaces/proxy-9061/pods/https:proxy-service-h8rmd-pnscm:443/proxy/: test<... (200; 4.676769ms) Apr 1 13:09:31.556: INFO: (5) /api/v1/namespaces/proxy-9061/pods/proxy-service-h8rmd-pnscm:160/proxy/: foo (200; 4.757807ms) Apr 1 13:09:31.556: INFO: (5) /api/v1/namespaces/proxy-9061/pods/proxy-service-h8rmd-pnscm:162/proxy/: bar (200; 4.959497ms) Apr 1 13:09:31.556: INFO: (5) /api/v1/namespaces/proxy-9061/pods/http:proxy-service-h8rmd-pnscm:1080/proxy/: ... 
(200; 5.173933ms) Apr 1 13:09:31.556: INFO: (5) /api/v1/namespaces/proxy-9061/pods/proxy-service-h8rmd-pnscm/proxy/: test (200; 5.335821ms) Apr 1 13:09:31.557: INFO: (5) /api/v1/namespaces/proxy-9061/pods/http:proxy-service-h8rmd-pnscm:160/proxy/: foo (200; 5.914752ms) Apr 1 13:09:31.557: INFO: (5) /api/v1/namespaces/proxy-9061/services/http:proxy-service-h8rmd:portname2/proxy/: bar (200; 5.910932ms) Apr 1 13:09:31.557: INFO: (5) /api/v1/namespaces/proxy-9061/services/https:proxy-service-h8rmd:tlsportname1/proxy/: tls baz (200; 5.950454ms) Apr 1 13:09:31.557: INFO: (5) /api/v1/namespaces/proxy-9061/services/https:proxy-service-h8rmd:tlsportname2/proxy/: tls qux (200; 5.878042ms) Apr 1 13:09:31.557: INFO: (5) /api/v1/namespaces/proxy-9061/services/proxy-service-h8rmd:portname1/proxy/: foo (200; 5.876195ms) Apr 1 13:09:31.557: INFO: (5) /api/v1/namespaces/proxy-9061/pods/https:proxy-service-h8rmd-pnscm:460/proxy/: tls baz (200; 5.921751ms) Apr 1 13:09:31.557: INFO: (5) /api/v1/namespaces/proxy-9061/services/proxy-service-h8rmd:portname2/proxy/: bar (200; 5.916581ms) Apr 1 13:09:31.557: INFO: (5) /api/v1/namespaces/proxy-9061/pods/http:proxy-service-h8rmd-pnscm:162/proxy/: bar (200; 5.933549ms) Apr 1 13:09:31.557: INFO: (5) /api/v1/namespaces/proxy-9061/services/http:proxy-service-h8rmd:portname1/proxy/: foo (200; 6.095353ms) Apr 1 13:09:31.561: INFO: (6) /api/v1/namespaces/proxy-9061/pods/http:proxy-service-h8rmd-pnscm:162/proxy/: bar (200; 3.480365ms) Apr 1 13:09:31.561: INFO: (6) /api/v1/namespaces/proxy-9061/pods/http:proxy-service-h8rmd-pnscm:1080/proxy/: ... (200; 3.969654ms) Apr 1 13:09:31.561: INFO: (6) /api/v1/namespaces/proxy-9061/pods/http:proxy-service-h8rmd-pnscm:160/proxy/: foo (200; 4.053386ms) Apr 1 13:09:31.562: INFO: (6) /api/v1/namespaces/proxy-9061/pods/https:proxy-service-h8rmd-pnscm:443/proxy/: test (200; 4.194627ms) Apr 1 13:09:31.562: INFO: (6) /api/v1/namespaces/proxy-9061/pods/proxy-service-h8rmd-pnscm:1080/proxy/: test<... 
(200; 4.242018ms) Apr 1 13:09:31.562: INFO: (6) /api/v1/namespaces/proxy-9061/pods/proxy-service-h8rmd-pnscm:162/proxy/: bar (200; 4.237754ms) Apr 1 13:09:31.562: INFO: (6) /api/v1/namespaces/proxy-9061/pods/proxy-service-h8rmd-pnscm:160/proxy/: foo (200; 4.294858ms) Apr 1 13:09:31.562: INFO: (6) /api/v1/namespaces/proxy-9061/pods/https:proxy-service-h8rmd-pnscm:460/proxy/: tls baz (200; 4.325123ms) Apr 1 13:09:31.563: INFO: (6) /api/v1/namespaces/proxy-9061/services/http:proxy-service-h8rmd:portname1/proxy/: foo (200; 5.804066ms) Apr 1 13:09:31.563: INFO: (6) /api/v1/namespaces/proxy-9061/services/proxy-service-h8rmd:portname1/proxy/: foo (200; 5.844542ms) Apr 1 13:09:31.563: INFO: (6) /api/v1/namespaces/proxy-9061/services/https:proxy-service-h8rmd:tlsportname2/proxy/: tls qux (200; 5.901762ms) Apr 1 13:09:31.563: INFO: (6) /api/v1/namespaces/proxy-9061/services/http:proxy-service-h8rmd:portname2/proxy/: bar (200; 5.824709ms) Apr 1 13:09:31.563: INFO: (6) /api/v1/namespaces/proxy-9061/services/proxy-service-h8rmd:portname2/proxy/: bar (200; 5.929855ms) Apr 1 13:09:31.563: INFO: (6) /api/v1/namespaces/proxy-9061/services/https:proxy-service-h8rmd:tlsportname1/proxy/: tls baz (200; 5.954885ms) Apr 1 13:09:31.567: INFO: (7) /api/v1/namespaces/proxy-9061/pods/http:proxy-service-h8rmd-pnscm:160/proxy/: foo (200; 3.205919ms) Apr 1 13:09:31.567: INFO: (7) /api/v1/namespaces/proxy-9061/pods/proxy-service-h8rmd-pnscm/proxy/: test (200; 3.232951ms) Apr 1 13:09:31.568: INFO: (7) /api/v1/namespaces/proxy-9061/pods/proxy-service-h8rmd-pnscm:160/proxy/: foo (200; 4.632047ms) Apr 1 13:09:31.568: INFO: (7) /api/v1/namespaces/proxy-9061/pods/https:proxy-service-h8rmd-pnscm:443/proxy/: test<... 
(200; 4.586249ms) Apr 1 13:09:31.568: INFO: (7) /api/v1/namespaces/proxy-9061/pods/https:proxy-service-h8rmd-pnscm:460/proxy/: tls baz (200; 4.641188ms) Apr 1 13:09:31.568: INFO: (7) /api/v1/namespaces/proxy-9061/services/https:proxy-service-h8rmd:tlsportname2/proxy/: tls qux (200; 4.714591ms) Apr 1 13:09:31.568: INFO: (7) /api/v1/namespaces/proxy-9061/services/http:proxy-service-h8rmd:portname1/proxy/: foo (200; 4.633269ms) Apr 1 13:09:31.568: INFO: (7) /api/v1/namespaces/proxy-9061/services/proxy-service-h8rmd:portname2/proxy/: bar (200; 4.661719ms) Apr 1 13:09:31.568: INFO: (7) /api/v1/namespaces/proxy-9061/pods/https:proxy-service-h8rmd-pnscm:462/proxy/: tls qux (200; 4.672636ms) Apr 1 13:09:31.568: INFO: (7) /api/v1/namespaces/proxy-9061/services/http:proxy-service-h8rmd:portname2/proxy/: bar (200; 4.68241ms) Apr 1 13:09:31.568: INFO: (7) /api/v1/namespaces/proxy-9061/pods/http:proxy-service-h8rmd-pnscm:1080/proxy/: ... (200; 4.732123ms) Apr 1 13:09:31.568: INFO: (7) /api/v1/namespaces/proxy-9061/services/proxy-service-h8rmd:portname1/proxy/: foo (200; 4.738337ms) Apr 1 13:09:31.568: INFO: (7) /api/v1/namespaces/proxy-9061/pods/proxy-service-h8rmd-pnscm:162/proxy/: bar (200; 4.799229ms) Apr 1 13:09:31.568: INFO: (7) /api/v1/namespaces/proxy-9061/pods/http:proxy-service-h8rmd-pnscm:162/proxy/: bar (200; 4.834685ms) Apr 1 13:09:31.568: INFO: (7) /api/v1/namespaces/proxy-9061/services/https:proxy-service-h8rmd:tlsportname1/proxy/: tls baz (200; 4.98056ms) Apr 1 13:09:31.571: INFO: (8) /api/v1/namespaces/proxy-9061/pods/proxy-service-h8rmd-pnscm:160/proxy/: foo (200; 2.3183ms) Apr 1 13:09:31.571: INFO: (8) /api/v1/namespaces/proxy-9061/pods/http:proxy-service-h8rmd-pnscm:162/proxy/: bar (200; 2.68175ms) Apr 1 13:09:31.571: INFO: (8) /api/v1/namespaces/proxy-9061/pods/http:proxy-service-h8rmd-pnscm:1080/proxy/: ... 
(200; 3.017916ms) Apr 1 13:09:31.571: INFO: (8) /api/v1/namespaces/proxy-9061/pods/https:proxy-service-h8rmd-pnscm:443/proxy/: test (200; 3.201571ms) Apr 1 13:09:31.572: INFO: (8) /api/v1/namespaces/proxy-9061/pods/proxy-service-h8rmd-pnscm:1080/proxy/: test<... (200; 3.201995ms) Apr 1 13:09:31.572: INFO: (8) /api/v1/namespaces/proxy-9061/services/http:proxy-service-h8rmd:portname1/proxy/: foo (200; 3.292119ms) Apr 1 13:09:31.573: INFO: (8) /api/v1/namespaces/proxy-9061/services/proxy-service-h8rmd:portname2/proxy/: bar (200; 4.445456ms) Apr 1 13:09:31.573: INFO: (8) /api/v1/namespaces/proxy-9061/services/http:proxy-service-h8rmd:portname2/proxy/: bar (200; 4.420084ms) Apr 1 13:09:31.573: INFO: (8) /api/v1/namespaces/proxy-9061/services/https:proxy-service-h8rmd:tlsportname2/proxy/: tls qux (200; 4.546319ms) Apr 1 13:09:31.573: INFO: (8) /api/v1/namespaces/proxy-9061/services/https:proxy-service-h8rmd:tlsportname1/proxy/: tls baz (200; 4.540737ms) Apr 1 13:09:31.576: INFO: (9) /api/v1/namespaces/proxy-9061/pods/http:proxy-service-h8rmd-pnscm:160/proxy/: foo (200; 2.806999ms) Apr 1 13:09:31.576: INFO: (9) /api/v1/namespaces/proxy-9061/pods/http:proxy-service-h8rmd-pnscm:1080/proxy/: ... (200; 2.851732ms) Apr 1 13:09:31.576: INFO: (9) /api/v1/namespaces/proxy-9061/pods/proxy-service-h8rmd-pnscm:160/proxy/: foo (200; 2.905346ms) Apr 1 13:09:31.576: INFO: (9) /api/v1/namespaces/proxy-9061/pods/proxy-service-h8rmd-pnscm:162/proxy/: bar (200; 3.07375ms) Apr 1 13:09:31.576: INFO: (9) /api/v1/namespaces/proxy-9061/pods/proxy-service-h8rmd-pnscm/proxy/: test (200; 3.138052ms) Apr 1 13:09:31.576: INFO: (9) /api/v1/namespaces/proxy-9061/pods/https:proxy-service-h8rmd-pnscm:460/proxy/: tls baz (200; 3.175929ms) Apr 1 13:09:31.576: INFO: (9) /api/v1/namespaces/proxy-9061/pods/https:proxy-service-h8rmd-pnscm:443/proxy/: test<... 
(200; 3.144491ms) Apr 1 13:09:31.577: INFO: (9) /api/v1/namespaces/proxy-9061/services/http:proxy-service-h8rmd:portname1/proxy/: foo (200; 3.909824ms) Apr 1 13:09:31.577: INFO: (9) /api/v1/namespaces/proxy-9061/services/proxy-service-h8rmd:portname1/proxy/: foo (200; 3.963229ms) Apr 1 13:09:31.577: INFO: (9) /api/v1/namespaces/proxy-9061/services/http:proxy-service-h8rmd:portname2/proxy/: bar (200; 4.019018ms) Apr 1 13:09:31.577: INFO: (9) /api/v1/namespaces/proxy-9061/services/https:proxy-service-h8rmd:tlsportname2/proxy/: tls qux (200; 3.959611ms) Apr 1 13:09:31.577: INFO: (9) /api/v1/namespaces/proxy-9061/services/proxy-service-h8rmd:portname2/proxy/: bar (200; 4.080698ms) Apr 1 13:09:31.577: INFO: (9) /api/v1/namespaces/proxy-9061/services/https:proxy-service-h8rmd:tlsportname1/proxy/: tls baz (200; 4.157912ms) Apr 1 13:09:31.580: INFO: (10) /api/v1/namespaces/proxy-9061/pods/https:proxy-service-h8rmd-pnscm:462/proxy/: tls qux (200; 2.415046ms) Apr 1 13:09:31.580: INFO: (10) /api/v1/namespaces/proxy-9061/pods/http:proxy-service-h8rmd-pnscm:1080/proxy/: ... (200; 2.484793ms) Apr 1 13:09:31.580: INFO: (10) /api/v1/namespaces/proxy-9061/pods/https:proxy-service-h8rmd-pnscm:443/proxy/: test (200; 5.148489ms) Apr 1 13:09:31.583: INFO: (10) /api/v1/namespaces/proxy-9061/pods/proxy-service-h8rmd-pnscm:1080/proxy/: test<... 
(200; 5.099834ms) Apr 1 13:09:31.583: INFO: (10) /api/v1/namespaces/proxy-9061/pods/http:proxy-service-h8rmd-pnscm:162/proxy/: bar (200; 5.123716ms) Apr 1 13:09:31.583: INFO: (10) /api/v1/namespaces/proxy-9061/pods/http:proxy-service-h8rmd-pnscm:160/proxy/: foo (200; 5.134404ms) Apr 1 13:09:31.583: INFO: (10) /api/v1/namespaces/proxy-9061/services/https:proxy-service-h8rmd:tlsportname2/proxy/: tls qux (200; 5.202364ms) Apr 1 13:09:31.583: INFO: (10) /api/v1/namespaces/proxy-9061/pods/https:proxy-service-h8rmd-pnscm:460/proxy/: tls baz (200; 5.200975ms) Apr 1 13:09:31.583: INFO: (10) /api/v1/namespaces/proxy-9061/pods/proxy-service-h8rmd-pnscm:160/proxy/: foo (200; 5.188392ms) Apr 1 13:09:31.583: INFO: (10) /api/v1/namespaces/proxy-9061/services/http:proxy-service-h8rmd:portname1/proxy/: foo (200; 5.508559ms) Apr 1 13:09:31.583: INFO: (10) /api/v1/namespaces/proxy-9061/services/proxy-service-h8rmd:portname1/proxy/: foo (200; 5.706581ms) Apr 1 13:09:31.583: INFO: (10) /api/v1/namespaces/proxy-9061/services/http:proxy-service-h8rmd:portname2/proxy/: bar (200; 5.717822ms) Apr 1 13:09:31.583: INFO: (10) /api/v1/namespaces/proxy-9061/services/https:proxy-service-h8rmd:tlsportname1/proxy/: tls baz (200; 5.701733ms) Apr 1 13:09:31.583: INFO: (10) /api/v1/namespaces/proxy-9061/services/proxy-service-h8rmd:portname2/proxy/: bar (200; 5.795424ms) Apr 1 13:09:31.587: INFO: (11) /api/v1/namespaces/proxy-9061/pods/https:proxy-service-h8rmd-pnscm:462/proxy/: tls qux (200; 3.638941ms) Apr 1 13:09:31.587: INFO: (11) /api/v1/namespaces/proxy-9061/pods/proxy-service-h8rmd-pnscm/proxy/: test (200; 3.728037ms) Apr 1 13:09:31.588: INFO: (11) /api/v1/namespaces/proxy-9061/pods/https:proxy-service-h8rmd-pnscm:443/proxy/: ... 
(200; 4.736837ms) Apr 1 13:09:31.588: INFO: (11) /api/v1/namespaces/proxy-9061/services/http:proxy-service-h8rmd:portname1/proxy/: foo (200; 4.881852ms) Apr 1 13:09:31.588: INFO: (11) /api/v1/namespaces/proxy-9061/pods/http:proxy-service-h8rmd-pnscm:160/proxy/: foo (200; 4.957749ms) Apr 1 13:09:31.588: INFO: (11) /api/v1/namespaces/proxy-9061/pods/proxy-service-h8rmd-pnscm:1080/proxy/: test<... (200; 4.904206ms) Apr 1 13:09:31.588: INFO: (11) /api/v1/namespaces/proxy-9061/services/https:proxy-service-h8rmd:tlsportname2/proxy/: tls qux (200; 4.971177ms) Apr 1 13:09:31.588: INFO: (11) /api/v1/namespaces/proxy-9061/pods/https:proxy-service-h8rmd-pnscm:460/proxy/: tls baz (200; 4.963816ms) Apr 1 13:09:31.588: INFO: (11) /api/v1/namespaces/proxy-9061/services/proxy-service-h8rmd:portname1/proxy/: foo (200; 5.085621ms) Apr 1 13:09:31.588: INFO: (11) /api/v1/namespaces/proxy-9061/services/https:proxy-service-h8rmd:tlsportname1/proxy/: tls baz (200; 5.191318ms) Apr 1 13:09:31.588: INFO: (11) /api/v1/namespaces/proxy-9061/services/proxy-service-h8rmd:portname2/proxy/: bar (200; 5.191713ms) Apr 1 13:09:31.589: INFO: (11) /api/v1/namespaces/proxy-9061/services/http:proxy-service-h8rmd:portname2/proxy/: bar (200; 5.494551ms) Apr 1 13:09:31.592: INFO: (12) /api/v1/namespaces/proxy-9061/pods/proxy-service-h8rmd-pnscm:1080/proxy/: test<... 
(200; 3.044394ms) Apr 1 13:09:31.594: INFO: (12) /api/v1/namespaces/proxy-9061/pods/http:proxy-service-h8rmd-pnscm:162/proxy/: bar (200; 4.592921ms) Apr 1 13:09:31.594: INFO: (12) /api/v1/namespaces/proxy-9061/services/https:proxy-service-h8rmd:tlsportname2/proxy/: tls qux (200; 4.583529ms) Apr 1 13:09:31.594: INFO: (12) /api/v1/namespaces/proxy-9061/pods/https:proxy-service-h8rmd-pnscm:443/proxy/: test (200; 4.712138ms) Apr 1 13:09:31.594: INFO: (12) /api/v1/namespaces/proxy-9061/pods/http:proxy-service-h8rmd-pnscm:160/proxy/: foo (200; 4.682199ms) Apr 1 13:09:31.594: INFO: (12) /api/v1/namespaces/proxy-9061/services/http:proxy-service-h8rmd:portname2/proxy/: bar (200; 4.674027ms) Apr 1 13:09:31.594: INFO: (12) /api/v1/namespaces/proxy-9061/pods/proxy-service-h8rmd-pnscm:160/proxy/: foo (200; 4.715578ms) Apr 1 13:09:31.594: INFO: (12) /api/v1/namespaces/proxy-9061/pods/https:proxy-service-h8rmd-pnscm:462/proxy/: tls qux (200; 4.752092ms) Apr 1 13:09:31.594: INFO: (12) /api/v1/namespaces/proxy-9061/pods/http:proxy-service-h8rmd-pnscm:1080/proxy/: ... 
(200; 4.871212ms) Apr 1 13:09:31.594: INFO: (12) /api/v1/namespaces/proxy-9061/pods/https:proxy-service-h8rmd-pnscm:460/proxy/: tls baz (200; 4.868793ms) Apr 1 13:09:31.594: INFO: (12) /api/v1/namespaces/proxy-9061/pods/proxy-service-h8rmd-pnscm:162/proxy/: bar (200; 4.911919ms) Apr 1 13:09:31.594: INFO: (12) /api/v1/namespaces/proxy-9061/services/https:proxy-service-h8rmd:tlsportname1/proxy/: tls baz (200; 4.865273ms) Apr 1 13:09:31.594: INFO: (12) /api/v1/namespaces/proxy-9061/services/http:proxy-service-h8rmd:portname1/proxy/: foo (200; 4.835363ms) Apr 1 13:09:31.594: INFO: (12) /api/v1/namespaces/proxy-9061/services/proxy-service-h8rmd:portname2/proxy/: bar (200; 4.804862ms) Apr 1 13:09:31.594: INFO: (12) /api/v1/namespaces/proxy-9061/services/proxy-service-h8rmd:portname1/proxy/: foo (200; 4.903492ms) Apr 1 13:09:31.598: INFO: (13) /api/v1/namespaces/proxy-9061/pods/proxy-service-h8rmd-pnscm:162/proxy/: bar (200; 3.974587ms) Apr 1 13:09:31.598: INFO: (13) /api/v1/namespaces/proxy-9061/services/proxy-service-h8rmd:portname1/proxy/: foo (200; 4.152521ms) Apr 1 13:09:31.598: INFO: (13) /api/v1/namespaces/proxy-9061/services/https:proxy-service-h8rmd:tlsportname2/proxy/: tls qux (200; 4.31027ms) Apr 1 13:09:31.598: INFO: (13) /api/v1/namespaces/proxy-9061/services/http:proxy-service-h8rmd:portname2/proxy/: bar (200; 4.432113ms) Apr 1 13:09:31.598: INFO: (13) /api/v1/namespaces/proxy-9061/pods/https:proxy-service-h8rmd-pnscm:460/proxy/: tls baz (200; 4.393016ms) Apr 1 13:09:31.598: INFO: (13) /api/v1/namespaces/proxy-9061/pods/https:proxy-service-h8rmd-pnscm:462/proxy/: tls qux (200; 4.473306ms) Apr 1 13:09:31.598: INFO: (13) /api/v1/namespaces/proxy-9061/services/http:proxy-service-h8rmd:portname1/proxy/: foo (200; 4.395378ms) Apr 1 13:09:31.598: INFO: (13) /api/v1/namespaces/proxy-9061/services/https:proxy-service-h8rmd:tlsportname1/proxy/: tls baz (200; 4.440271ms) Apr 1 13:09:31.598: INFO: (13) 
/api/v1/namespaces/proxy-9061/pods/http:proxy-service-h8rmd-pnscm:162/proxy/: bar (200; 4.429783ms) Apr 1 13:09:31.598: INFO: (13) /api/v1/namespaces/proxy-9061/pods/https:proxy-service-h8rmd-pnscm:443/proxy/: ... (200; 4.637267ms) Apr 1 13:09:31.599: INFO: (13) /api/v1/namespaces/proxy-9061/pods/proxy-service-h8rmd-pnscm/proxy/: test (200; 4.655159ms) Apr 1 13:09:31.599: INFO: (13) /api/v1/namespaces/proxy-9061/pods/proxy-service-h8rmd-pnscm:160/proxy/: foo (200; 4.703921ms) Apr 1 13:09:31.599: INFO: (13) /api/v1/namespaces/proxy-9061/pods/proxy-service-h8rmd-pnscm:1080/proxy/: test<... (200; 4.636489ms) Apr 1 13:09:31.599: INFO: (13) /api/v1/namespaces/proxy-9061/pods/http:proxy-service-h8rmd-pnscm:160/proxy/: foo (200; 4.795315ms) Apr 1 13:09:31.599: INFO: (13) /api/v1/namespaces/proxy-9061/services/proxy-service-h8rmd:portname2/proxy/: bar (200; 4.905961ms) Apr 1 13:09:31.602: INFO: (14) /api/v1/namespaces/proxy-9061/pods/proxy-service-h8rmd-pnscm:1080/proxy/: test<... (200; 3.222066ms) Apr 1 13:09:31.602: INFO: (14) /api/v1/namespaces/proxy-9061/pods/https:proxy-service-h8rmd-pnscm:443/proxy/: ... 
(200; 3.617466ms) Apr 1 13:09:31.603: INFO: (14) /api/v1/namespaces/proxy-9061/pods/https:proxy-service-h8rmd-pnscm:460/proxy/: tls baz (200; 3.569353ms) Apr 1 13:09:31.603: INFO: (14) /api/v1/namespaces/proxy-9061/pods/http:proxy-service-h8rmd-pnscm:160/proxy/: foo (200; 3.680728ms) Apr 1 13:09:31.603: INFO: (14) /api/v1/namespaces/proxy-9061/pods/proxy-service-h8rmd-pnscm:162/proxy/: bar (200; 3.654141ms) Apr 1 13:09:31.603: INFO: (14) /api/v1/namespaces/proxy-9061/pods/proxy-service-h8rmd-pnscm:160/proxy/: foo (200; 3.883729ms) Apr 1 13:09:31.603: INFO: (14) /api/v1/namespaces/proxy-9061/pods/proxy-service-h8rmd-pnscm/proxy/: test (200; 3.864586ms) Apr 1 13:09:31.603: INFO: (14) /api/v1/namespaces/proxy-9061/services/https:proxy-service-h8rmd:tlsportname2/proxy/: tls qux (200; 4.193132ms) Apr 1 13:09:31.603: INFO: (14) /api/v1/namespaces/proxy-9061/services/http:proxy-service-h8rmd:portname1/proxy/: foo (200; 4.325339ms) Apr 1 13:09:31.603: INFO: (14) /api/v1/namespaces/proxy-9061/services/http:proxy-service-h8rmd:portname2/proxy/: bar (200; 4.336718ms) Apr 1 13:09:31.603: INFO: (14) /api/v1/namespaces/proxy-9061/services/https:proxy-service-h8rmd:tlsportname1/proxy/: tls baz (200; 4.485281ms) Apr 1 13:09:31.604: INFO: (14) /api/v1/namespaces/proxy-9061/services/proxy-service-h8rmd:portname1/proxy/: foo (200; 4.750756ms) Apr 1 13:09:31.604: INFO: (14) /api/v1/namespaces/proxy-9061/services/proxy-service-h8rmd:portname2/proxy/: bar (200; 4.801119ms) Apr 1 13:09:31.606: INFO: (15) /api/v1/namespaces/proxy-9061/pods/http:proxy-service-h8rmd-pnscm:162/proxy/: bar (200; 2.332918ms) Apr 1 13:09:31.607: INFO: (15) /api/v1/namespaces/proxy-9061/pods/proxy-service-h8rmd-pnscm/proxy/: test (200; 3.411497ms) Apr 1 13:09:31.607: INFO: (15) /api/v1/namespaces/proxy-9061/pods/http:proxy-service-h8rmd-pnscm:160/proxy/: foo (200; 3.495647ms) Apr 1 13:09:31.609: INFO: (15) /api/v1/namespaces/proxy-9061/services/proxy-service-h8rmd:portname2/proxy/: bar (200; 4.688949ms) Apr 1 
13:09:31.609: INFO: (15) /api/v1/namespaces/proxy-9061/pods/https:proxy-service-h8rmd-pnscm:462/proxy/: tls qux (200; 4.72958ms) Apr 1 13:09:31.609: INFO: (15) /api/v1/namespaces/proxy-9061/pods/proxy-service-h8rmd-pnscm:162/proxy/: bar (200; 4.710624ms) Apr 1 13:09:31.609: INFO: (15) /api/v1/namespaces/proxy-9061/pods/proxy-service-h8rmd-pnscm:160/proxy/: foo (200; 4.908241ms) Apr 1 13:09:31.609: INFO: (15) /api/v1/namespaces/proxy-9061/pods/proxy-service-h8rmd-pnscm:1080/proxy/: test<... (200; 4.778784ms) Apr 1 13:09:31.609: INFO: (15) /api/v1/namespaces/proxy-9061/pods/https:proxy-service-h8rmd-pnscm:460/proxy/: tls baz (200; 4.968507ms) Apr 1 13:09:31.609: INFO: (15) /api/v1/namespaces/proxy-9061/pods/https:proxy-service-h8rmd-pnscm:443/proxy/: ... (200; 4.905365ms) Apr 1 13:09:31.609: INFO: (15) /api/v1/namespaces/proxy-9061/services/http:proxy-service-h8rmd:portname2/proxy/: bar (200; 5.287026ms) Apr 1 13:09:31.609: INFO: (15) /api/v1/namespaces/proxy-9061/services/proxy-service-h8rmd:portname1/proxy/: foo (200; 5.491209ms) Apr 1 13:09:31.609: INFO: (15) /api/v1/namespaces/proxy-9061/services/http:proxy-service-h8rmd:portname1/proxy/: foo (200; 5.474191ms) Apr 1 13:09:31.609: INFO: (15) /api/v1/namespaces/proxy-9061/services/https:proxy-service-h8rmd:tlsportname1/proxy/: tls baz (200; 5.607301ms) Apr 1 13:09:31.609: INFO: (15) /api/v1/namespaces/proxy-9061/services/https:proxy-service-h8rmd:tlsportname2/proxy/: tls qux (200; 5.546531ms) Apr 1 13:09:31.612: INFO: (16) /api/v1/namespaces/proxy-9061/pods/proxy-service-h8rmd-pnscm:160/proxy/: foo (200; 2.760511ms) Apr 1 13:09:31.612: INFO: (16) /api/v1/namespaces/proxy-9061/pods/http:proxy-service-h8rmd-pnscm:160/proxy/: foo (200; 2.909989ms) Apr 1 13:09:31.612: INFO: (16) /api/v1/namespaces/proxy-9061/pods/proxy-service-h8rmd-pnscm:1080/proxy/: test<... (200; 2.928349ms) Apr 1 13:09:31.612: INFO: (16) /api/v1/namespaces/proxy-9061/pods/http:proxy-service-h8rmd-pnscm:1080/proxy/: ... 
(200; 3.033935ms) Apr 1 13:09:31.612: INFO: (16) /api/v1/namespaces/proxy-9061/pods/https:proxy-service-h8rmd-pnscm:443/proxy/: test (200; 3.155917ms) Apr 1 13:09:31.613: INFO: (16) /api/v1/namespaces/proxy-9061/pods/https:proxy-service-h8rmd-pnscm:462/proxy/: tls qux (200; 3.650276ms) Apr 1 13:09:31.613: INFO: (16) /api/v1/namespaces/proxy-9061/services/https:proxy-service-h8rmd:tlsportname1/proxy/: tls baz (200; 3.853937ms) Apr 1 13:09:31.613: INFO: (16) /api/v1/namespaces/proxy-9061/services/proxy-service-h8rmd:portname2/proxy/: bar (200; 4.131483ms) Apr 1 13:09:31.614: INFO: (16) /api/v1/namespaces/proxy-9061/services/http:proxy-service-h8rmd:portname1/proxy/: foo (200; 4.109982ms) Apr 1 13:09:31.614: INFO: (16) /api/v1/namespaces/proxy-9061/services/http:proxy-service-h8rmd:portname2/proxy/: bar (200; 4.455614ms) Apr 1 13:09:31.614: INFO: (16) /api/v1/namespaces/proxy-9061/services/https:proxy-service-h8rmd:tlsportname2/proxy/: tls qux (200; 4.459585ms) Apr 1 13:09:31.614: INFO: (16) /api/v1/namespaces/proxy-9061/services/proxy-service-h8rmd:portname1/proxy/: foo (200; 4.562428ms) Apr 1 13:09:31.617: INFO: (17) /api/v1/namespaces/proxy-9061/pods/proxy-service-h8rmd-pnscm:160/proxy/: foo (200; 3.056436ms) Apr 1 13:09:31.617: INFO: (17) /api/v1/namespaces/proxy-9061/pods/https:proxy-service-h8rmd-pnscm:460/proxy/: tls baz (200; 3.144643ms) Apr 1 13:09:31.617: INFO: (17) /api/v1/namespaces/proxy-9061/pods/http:proxy-service-h8rmd-pnscm:162/proxy/: bar (200; 3.166506ms) Apr 1 13:09:31.618: INFO: (17) /api/v1/namespaces/proxy-9061/pods/http:proxy-service-h8rmd-pnscm:160/proxy/: foo (200; 3.784221ms) Apr 1 13:09:31.618: INFO: (17) /api/v1/namespaces/proxy-9061/services/http:proxy-service-h8rmd:portname1/proxy/: foo (200; 3.766151ms) Apr 1 13:09:31.618: INFO: (17) /api/v1/namespaces/proxy-9061/pods/http:proxy-service-h8rmd-pnscm:1080/proxy/: ... 
(200; 3.86565ms) Apr 1 13:09:31.618: INFO: (17) /api/v1/namespaces/proxy-9061/pods/proxy-service-h8rmd-pnscm/proxy/: test (200; 3.812581ms) Apr 1 13:09:31.618: INFO: (17) /api/v1/namespaces/proxy-9061/pods/https:proxy-service-h8rmd-pnscm:462/proxy/: tls qux (200; 3.85498ms) Apr 1 13:09:31.618: INFO: (17) /api/v1/namespaces/proxy-9061/pods/proxy-service-h8rmd-pnscm:162/proxy/: bar (200; 3.829113ms) Apr 1 13:09:31.618: INFO: (17) /api/v1/namespaces/proxy-9061/pods/proxy-service-h8rmd-pnscm:1080/proxy/: test<... (200; 3.803195ms) Apr 1 13:09:31.618: INFO: (17) /api/v1/namespaces/proxy-9061/pods/https:proxy-service-h8rmd-pnscm:443/proxy/: ... (200; 2.897137ms) Apr 1 13:09:31.622: INFO: (18) /api/v1/namespaces/proxy-9061/pods/proxy-service-h8rmd-pnscm/proxy/: test (200; 3.225493ms) Apr 1 13:09:31.622: INFO: (18) /api/v1/namespaces/proxy-9061/pods/http:proxy-service-h8rmd-pnscm:160/proxy/: foo (200; 3.208084ms) Apr 1 13:09:31.622: INFO: (18) /api/v1/namespaces/proxy-9061/pods/proxy-service-h8rmd-pnscm:160/proxy/: foo (200; 3.282213ms) Apr 1 13:09:31.623: INFO: (18) /api/v1/namespaces/proxy-9061/services/http:proxy-service-h8rmd:portname2/proxy/: bar (200; 3.77083ms) Apr 1 13:09:31.623: INFO: (18) /api/v1/namespaces/proxy-9061/pods/http:proxy-service-h8rmd-pnscm:162/proxy/: bar (200; 3.698767ms) Apr 1 13:09:31.623: INFO: (18) /api/v1/namespaces/proxy-9061/pods/https:proxy-service-h8rmd-pnscm:443/proxy/: test<... 
(200; 3.908142ms) Apr 1 13:09:31.623: INFO: (18) /api/v1/namespaces/proxy-9061/services/proxy-service-h8rmd:portname2/proxy/: bar (200; 4.137417ms) Apr 1 13:09:31.623: INFO: (18) /api/v1/namespaces/proxy-9061/services/http:proxy-service-h8rmd:portname1/proxy/: foo (200; 4.16042ms) Apr 1 13:09:31.623: INFO: (18) /api/v1/namespaces/proxy-9061/services/https:proxy-service-h8rmd:tlsportname1/proxy/: tls baz (200; 4.254966ms) Apr 1 13:09:31.624: INFO: (18) /api/v1/namespaces/proxy-9061/services/proxy-service-h8rmd:portname1/proxy/: foo (200; 4.623239ms) Apr 1 13:09:31.627: INFO: (19) /api/v1/namespaces/proxy-9061/pods/https:proxy-service-h8rmd-pnscm:443/proxy/: test (200; 3.643833ms) Apr 1 13:09:31.628: INFO: (19) /api/v1/namespaces/proxy-9061/pods/proxy-service-h8rmd-pnscm:160/proxy/: foo (200; 4.029037ms) Apr 1 13:09:31.628: INFO: (19) /api/v1/namespaces/proxy-9061/services/http:proxy-service-h8rmd:portname2/proxy/: bar (200; 4.361868ms) Apr 1 13:09:31.628: INFO: (19) /api/v1/namespaces/proxy-9061/pods/http:proxy-service-h8rmd-pnscm:1080/proxy/: ... (200; 4.357115ms) Apr 1 13:09:31.628: INFO: (19) /api/v1/namespaces/proxy-9061/pods/proxy-service-h8rmd-pnscm:162/proxy/: bar (200; 4.393329ms) Apr 1 13:09:31.628: INFO: (19) /api/v1/namespaces/proxy-9061/services/proxy-service-h8rmd:portname2/proxy/: bar (200; 4.385607ms) Apr 1 13:09:31.628: INFO: (19) /api/v1/namespaces/proxy-9061/pods/https:proxy-service-h8rmd-pnscm:462/proxy/: tls qux (200; 4.751762ms) Apr 1 13:09:31.628: INFO: (19) /api/v1/namespaces/proxy-9061/services/https:proxy-service-h8rmd:tlsportname1/proxy/: tls baz (200; 4.697286ms) Apr 1 13:09:31.629: INFO: (19) /api/v1/namespaces/proxy-9061/services/https:proxy-service-h8rmd:tlsportname2/proxy/: tls qux (200; 4.794037ms) Apr 1 13:09:31.629: INFO: (19) /api/v1/namespaces/proxy-9061/pods/proxy-service-h8rmd-pnscm:1080/proxy/: test<... 
(200; 4.842085ms) Apr 1 13:09:31.629: INFO: (19) /api/v1/namespaces/proxy-9061/services/proxy-service-h8rmd:portname1/proxy/: foo (200; 4.889796ms) Apr 1 13:09:31.629: INFO: (19) /api/v1/namespaces/proxy-9061/pods/http:proxy-service-h8rmd-pnscm:162/proxy/: bar (200; 4.928029ms) Apr 1 13:09:31.629: INFO: (19) /api/v1/namespaces/proxy-9061/services/http:proxy-service-h8rmd:portname1/proxy/: foo (200; 5.040142ms) Apr 1 13:09:31.629: INFO: (19) /api/v1/namespaces/proxy-9061/pods/https:proxy-service-h8rmd-pnscm:460/proxy/: tls baz (200; 5.134581ms) Apr 1 13:09:31.629: INFO: (19) /api/v1/namespaces/proxy-9061/pods/http:proxy-service-h8rmd-pnscm:160/proxy/: foo (200; 5.255248ms) STEP: deleting ReplicationController proxy-service-h8rmd in namespace proxy-9061, will wait for the garbage collector to delete the pods Apr 1 13:09:31.687: INFO: Deleting ReplicationController proxy-service-h8rmd took: 6.336386ms Apr 1 13:09:31.987: INFO: Terminating ReplicationController proxy-service-h8rmd pods took: 300.229027ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 13:09:34.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-9061" for this suite. 
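Editor's note: every endpoint exercised by the proxy test above follows one URL scheme: `/api/v1/namespaces/<ns>/{pods|services}/[<scheme>:]<name>[:<port>]/proxy/<path>`. A minimal sketch that rebuilds a couple of the paths seen in this run (namespace and resource names are copied from the log; the `proxy_url` helper itself is hypothetical, not part of kubectl):

```shell
#!/bin/sh
# Hypothetical helper: build an apiserver proxy path for a pod or service.
# Mirrors the URL scheme hit by the [sig-network] Proxy conformance test.
proxy_url() {
  kind=$1; ns=$2; target=$3   # target may embed "scheme:name:port"
  printf '/api/v1/namespaces/%s/%s/%s/proxy/\n' "$ns" "$kind" "$target"
}

# Reconstruct two URLs that appear in the log above.
proxy_url pods     proxy-9061 'https:proxy-service-h8rmd-pnscm:462'
proxy_url services proxy-9061 'proxy-service-h8rmd:portname1'
```

Against a live cluster, the same paths can be fetched with `kubectl get --raw <path>`.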
Apr 1 13:09:40.606: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 13:09:40.682: INFO: namespace proxy-9061 deletion completed in 6.089808264s • [SLOW TEST:20.346 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 13:09:40.683: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292 STEP: creating an rc Apr 1 13:09:40.733: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1391' Apr 1 13:09:43.712: INFO: stderr: "" Apr 1 13:09:43.712: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Waiting for Redis master to start. 
Apr 1 13:09:44.716: INFO: Selector matched 1 pods for map[app:redis] Apr 1 13:09:44.716: INFO: Found 0 / 1 Apr 1 13:09:45.716: INFO: Selector matched 1 pods for map[app:redis] Apr 1 13:09:45.716: INFO: Found 0 / 1 Apr 1 13:09:46.717: INFO: Selector matched 1 pods for map[app:redis] Apr 1 13:09:46.717: INFO: Found 1 / 1 Apr 1 13:09:46.717: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Apr 1 13:09:46.720: INFO: Selector matched 1 pods for map[app:redis] Apr 1 13:09:46.720: INFO: ForEach: Found 1 pods from the filter. Now looping through them. STEP: checking for a matching strings Apr 1 13:09:46.720: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-p2pwk redis-master --namespace=kubectl-1391' Apr 1 13:09:46.824: INFO: stderr: "" Apr 1 13:09:46.824: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 01 Apr 13:09:46.354 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 01 Apr 13:09:46.354 # Server started, Redis version 3.2.12\n1:M 01 Apr 13:09:46.354 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. 
Redis must be restarted after THP is disabled.\n1:M 01 Apr 13:09:46.354 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines Apr 1 13:09:46.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-p2pwk redis-master --namespace=kubectl-1391 --tail=1' Apr 1 13:09:46.944: INFO: stderr: "" Apr 1 13:09:46.944: INFO: stdout: "1:M 01 Apr 13:09:46.354 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes Apr 1 13:09:46.944: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-p2pwk redis-master --namespace=kubectl-1391 --limit-bytes=1' Apr 1 13:09:47.052: INFO: stderr: "" Apr 1 13:09:47.052: INFO: stdout: " " STEP: exposing timestamps Apr 1 13:09:47.052: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-p2pwk redis-master --namespace=kubectl-1391 --tail=1 --timestamps' Apr 1 13:09:47.181: INFO: stderr: "" Apr 1 13:09:47.181: INFO: stdout: "2020-04-01T13:09:46.354489193Z 1:M 01 Apr 13:09:46.354 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range Apr 1 13:09:49.681: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-p2pwk redis-master --namespace=kubectl-1391 --since=1s' Apr 1 13:09:49.772: INFO: stderr: "" Apr 1 13:09:49.772: INFO: stdout: "" Apr 1 13:09:49.772: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-p2pwk redis-master --namespace=kubectl-1391 --since=24h' Apr 1 13:09:49.869: INFO: stderr: "" Apr 1 13:09:49.869: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 01 Apr 13:09:46.354 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 01 Apr 13:09:46.354 # Server started, Redis version 3.2.12\n1:M 01 Apr 13:09:46.354 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 01 Apr 13:09:46.354 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 STEP: using delete to clean up resources Apr 1 13:09:49.870: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1391' Apr 1 13:09:49.960: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Apr 1 13:09:49.960: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" Apr 1 13:09:49.960: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-1391' Apr 1 13:09:50.071: INFO: stderr: "No resources found.\n" Apr 1 13:09:50.071: INFO: stdout: "" Apr 1 13:09:50.071: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-1391 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 1 13:09:50.226: INFO: stderr: "" Apr 1 13:09:50.226: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 13:09:50.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1391" for this suite. 
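Editor's note: the filtering steps in the test above map onto four `kubectl logs` flags: `--tail=N` (last N lines), `--limit-bytes=N` (byte cap), `--timestamps` (prefix RFC3339 timestamps), and `--since=DURATION` (time window). A rough local illustration of the first two filters, substituting a throwaway file for the pod log (the file and its contents are invented):

```shell
#!/bin/sh
# Stand-in for a container log; contents are made up for illustration.
log=$(mktemp)
printf 'line one\nline two\nline three\n' > "$log"

# kubectl logs ... --tail=1        ~ keep only the last line
tail -n 1 "$log"      # prints "line three"

# kubectl logs ... --limit-bytes=5 ~ cap output at N bytes
head -c 5 "$log"      # prints "line " (5 bytes)

rm -f "$log"
```

`--timestamps` and `--since` have no file-level analogue here; they rely on the timestamps the kubelet records per log line, as the `2020-04-01T13:09:46...` prefix in the output above shows.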
Apr 1 13:09:56.243: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 13:09:56.322: INFO: namespace kubectl-1391 deletion completed in 6.092340981s • [SLOW TEST:15.639 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 13:09:56.322: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Apr 1 13:10:00.957: INFO: Successfully updated pod "annotationupdate4df4b804-e4ad-4fe1-addc-f5bddf6fc890" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 13:10:03.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1965" 
for this suite. Apr 1 13:10:25.026: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 13:10:25.137: INFO: namespace downward-api-1965 deletion completed in 22.126503819s • [SLOW TEST:28.815 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 13:10:25.138: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller Apr 1 13:10:25.197: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8572' Apr 1 13:10:25.448: INFO: stderr: "" Apr 1 13:10:25.448: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Apr 1 13:10:25.448: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8572' Apr 1 13:10:25.562: INFO: stderr: "" Apr 1 13:10:25.562: INFO: stdout: "update-demo-nautilus-4b9pj update-demo-nautilus-vgjrg " Apr 1 13:10:25.562: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4b9pj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8572' Apr 1 13:10:25.640: INFO: stderr: "" Apr 1 13:10:25.640: INFO: stdout: "" Apr 1 13:10:25.640: INFO: update-demo-nautilus-4b9pj is created but not running Apr 1 13:10:30.641: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8572' Apr 1 13:10:30.737: INFO: stderr: "" Apr 1 13:10:30.737: INFO: stdout: "update-demo-nautilus-4b9pj update-demo-nautilus-vgjrg " Apr 1 13:10:30.737: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4b9pj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8572' Apr 1 13:10:30.817: INFO: stderr: "" Apr 1 13:10:30.817: INFO: stdout: "true" Apr 1 13:10:30.817: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4b9pj -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8572' Apr 1 13:10:30.906: INFO: stderr: "" Apr 1 13:10:30.906: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 1 13:10:30.906: INFO: validating pod update-demo-nautilus-4b9pj Apr 1 13:10:30.910: INFO: got data: { "image": "nautilus.jpg" } Apr 1 13:10:30.910: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 1 13:10:30.910: INFO: update-demo-nautilus-4b9pj is verified up and running Apr 1 13:10:30.910: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vgjrg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8572' Apr 1 13:10:30.992: INFO: stderr: "" Apr 1 13:10:30.993: INFO: stdout: "true" Apr 1 13:10:30.993: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vgjrg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8572' Apr 1 13:10:31.072: INFO: stderr: "" Apr 1 13:10:31.072: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 1 13:10:31.072: INFO: validating pod update-demo-nautilus-vgjrg Apr 1 13:10:31.076: INFO: got data: { "image": "nautilus.jpg" } Apr 1 13:10:31.076: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Apr 1 13:10:31.076: INFO: update-demo-nautilus-vgjrg is verified up and running STEP: scaling down the replication controller Apr 1 13:10:31.079: INFO: scanned /root for discovery docs: Apr 1 13:10:31.079: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-8572' Apr 1 13:10:32.210: INFO: stderr: "" Apr 1 13:10:32.210: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 1 13:10:32.210: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8572' Apr 1 13:10:32.310: INFO: stderr: "" Apr 1 13:10:32.310: INFO: stdout: "update-demo-nautilus-4b9pj update-demo-nautilus-vgjrg " STEP: Replicas for name=update-demo: expected=1 actual=2 Apr 1 13:10:37.310: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8572' Apr 1 13:10:37.416: INFO: stderr: "" Apr 1 13:10:37.416: INFO: stdout: "update-demo-nautilus-4b9pj update-demo-nautilus-vgjrg " STEP: Replicas for name=update-demo: expected=1 actual=2 Apr 1 13:10:42.416: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8572' Apr 1 13:10:42.511: INFO: stderr: "" Apr 1 13:10:42.511: INFO: stdout: "update-demo-nautilus-vgjrg " Apr 1 13:10:42.511: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vgjrg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8572' Apr 1 13:10:42.604: INFO: stderr: "" Apr 1 13:10:42.604: INFO: stdout: "true" Apr 1 13:10:42.604: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vgjrg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8572' Apr 1 13:10:42.691: INFO: stderr: "" Apr 1 13:10:42.691: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 1 13:10:42.691: INFO: validating pod update-demo-nautilus-vgjrg Apr 1 13:10:42.694: INFO: got data: { "image": "nautilus.jpg" } Apr 1 13:10:42.694: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 1 13:10:42.694: INFO: update-demo-nautilus-vgjrg is verified up and running STEP: scaling up the replication controller Apr 1 13:10:42.696: INFO: scanned /root for discovery docs: Apr 1 13:10:42.696: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-8572' Apr 1 13:10:43.848: INFO: stderr: "" Apr 1 13:10:43.848: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 1 13:10:43.848: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8572' Apr 1 13:10:43.945: INFO: stderr: "" Apr 1 13:10:43.945: INFO: stdout: "update-demo-nautilus-6wdz4 update-demo-nautilus-vgjrg " Apr 1 13:10:43.945: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6wdz4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8572' Apr 1 13:10:44.031: INFO: stderr: "" Apr 1 13:10:44.031: INFO: stdout: "" Apr 1 13:10:44.031: INFO: update-demo-nautilus-6wdz4 is created but not running Apr 1 13:10:49.032: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8572' Apr 1 13:10:49.136: INFO: stderr: "" Apr 1 13:10:49.136: INFO: stdout: "update-demo-nautilus-6wdz4 update-demo-nautilus-vgjrg " Apr 1 13:10:49.136: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6wdz4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8572' Apr 1 13:10:49.238: INFO: stderr: "" Apr 1 13:10:49.238: INFO: stdout: "true" Apr 1 13:10:49.238: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6wdz4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8572' Apr 1 13:10:49.322: INFO: stderr: "" Apr 1 13:10:49.322: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 1 13:10:49.322: INFO: validating pod update-demo-nautilus-6wdz4 Apr 1 13:10:49.326: INFO: got data: { "image": "nautilus.jpg" } Apr 1 13:10:49.326: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 1 13:10:49.326: INFO: update-demo-nautilus-6wdz4 is verified up and running Apr 1 13:10:49.326: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vgjrg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8572' Apr 1 13:10:49.420: INFO: stderr: "" Apr 1 13:10:49.420: INFO: stdout: "true" Apr 1 13:10:49.420: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vgjrg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8572' Apr 1 13:10:49.517: INFO: stderr: "" Apr 1 13:10:49.517: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 1 13:10:49.517: INFO: validating pod update-demo-nautilus-vgjrg Apr 1 13:10:49.520: INFO: got data: { "image": "nautilus.jpg" } Apr 1 13:10:49.520: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 1 13:10:49.520: INFO: update-demo-nautilus-vgjrg is verified up and running STEP: using delete to clean up resources Apr 1 13:10:49.521: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8572' Apr 1 13:10:49.607: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Apr 1 13:10:49.607: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Apr 1 13:10:49.607: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8572' Apr 1 13:10:49.701: INFO: stderr: "No resources found.\n" Apr 1 13:10:49.701: INFO: stdout: "" Apr 1 13:10:49.701: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8572 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 1 13:10:49.806: INFO: stderr: "" Apr 1 13:10:49.806: INFO: stdout: "update-demo-nautilus-6wdz4\nupdate-demo-nautilus-vgjrg\n" Apr 1 13:10:50.306: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8572' Apr 1 13:10:50.410: INFO: stderr: "No resources found.\n" Apr 1 13:10:50.410: INFO: stdout: "" Apr 1 13:10:50.411: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8572 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 1 13:10:50.501: INFO: stderr: "" Apr 1 13:10:50.501: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 13:10:50.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8572" for this suite. 
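The cleanup sequence above force-deletes the replication controller and then polls until no matching pod remains without a deletion timestamp. A minimal standalone sketch of the same two steps, assuming the namespace and label from this run:

```shell
NS=kubectl-8572

# Force deletion skips the grace period; as the warning in the log notes,
# the pods may keep running on the node briefly after the API object is gone.
kubectl delete rc update-demo-nautilus -n "$NS" --grace-period=0 --force \
  || echo "delete failed (no cluster reachable?)"

# List pods with label name=update-demo that do NOT yet have a
# deletionTimestamp; the test repeats this until the output is empty.
kubectl get pods -n "$NS" -l name=update-demo \
  -o go-template='{{range .items}}{{if not .metadata.deletionTimestamp}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' \
  || echo "query failed (no cluster reachable?)"
```

The `deletionTimestamp` filter matters: immediately after a force delete, `kubectl get pods` can still list terminating pods, so the test distinguishes "still present" from "present but already marked for deletion".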
Apr 1 13:11:12.697: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 13:11:12.773: INFO: namespace kubectl-8572 deletion completed in 22.269164971s • [SLOW TEST:47.635 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 13:11:12.773: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-2766 [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating stateful set ss in namespace statefulset-2766 STEP: Waiting until all stateful set ss replicas will be running in namespace 
statefulset-2766 Apr 1 13:11:12.893: INFO: Found 0 stateful pods, waiting for 1 Apr 1 13:11:22.899: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Apr 1 13:11:22.902: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2766 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 1 13:11:23.154: INFO: stderr: "I0401 13:11:23.025284 1431 log.go:172] (0xc0001168f0) (0xc000580be0) Create stream\nI0401 13:11:23.025346 1431 log.go:172] (0xc0001168f0) (0xc000580be0) Stream added, broadcasting: 1\nI0401 13:11:23.027485 1431 log.go:172] (0xc0001168f0) Reply frame received for 1\nI0401 13:11:23.027543 1431 log.go:172] (0xc0001168f0) (0xc0008f0000) Create stream\nI0401 13:11:23.027587 1431 log.go:172] (0xc0001168f0) (0xc0008f0000) Stream added, broadcasting: 3\nI0401 13:11:23.028440 1431 log.go:172] (0xc0001168f0) Reply frame received for 3\nI0401 13:11:23.028471 1431 log.go:172] (0xc0001168f0) (0xc0008f00a0) Create stream\nI0401 13:11:23.028484 1431 log.go:172] (0xc0001168f0) (0xc0008f00a0) Stream added, broadcasting: 5\nI0401 13:11:23.029571 1431 log.go:172] (0xc0001168f0) Reply frame received for 5\nI0401 13:11:23.114337 1431 log.go:172] (0xc0001168f0) Data frame received for 5\nI0401 13:11:23.114370 1431 log.go:172] (0xc0008f00a0) (5) Data frame handling\nI0401 13:11:23.114390 1431 log.go:172] (0xc0008f00a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0401 13:11:23.142603 1431 log.go:172] (0xc0001168f0) Data frame received for 3\nI0401 13:11:23.142638 1431 log.go:172] (0xc0008f0000) (3) Data frame handling\nI0401 13:11:23.142655 1431 log.go:172] (0xc0008f0000) (3) Data frame sent\nI0401 13:11:23.142890 1431 log.go:172] (0xc0001168f0) Data frame received for 3\nI0401 13:11:23.142935 1431 log.go:172] (0xc0008f0000) (3) Data frame handling\nI0401 
13:11:23.142969 1431 log.go:172] (0xc0001168f0) Data frame received for 5\nI0401 13:11:23.142984 1431 log.go:172] (0xc0008f00a0) (5) Data frame handling\nI0401 13:11:23.149783 1431 log.go:172] (0xc0001168f0) Data frame received for 1\nI0401 13:11:23.149805 1431 log.go:172] (0xc000580be0) (1) Data frame handling\nI0401 13:11:23.149812 1431 log.go:172] (0xc000580be0) (1) Data frame sent\nI0401 13:11:23.149839 1431 log.go:172] (0xc0001168f0) (0xc000580be0) Stream removed, broadcasting: 1\nI0401 13:11:23.149851 1431 log.go:172] (0xc0001168f0) Go away received\nI0401 13:11:23.150440 1431 log.go:172] (0xc0001168f0) (0xc000580be0) Stream removed, broadcasting: 1\nI0401 13:11:23.150475 1431 log.go:172] (0xc0001168f0) (0xc0008f0000) Stream removed, broadcasting: 3\nI0401 13:11:23.150489 1431 log.go:172] (0xc0001168f0) (0xc0008f00a0) Stream removed, broadcasting: 5\n" Apr 1 13:11:23.154: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 1 13:11:23.154: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 1 13:11:23.158: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Apr 1 13:11:33.162: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 1 13:11:33.162: INFO: Waiting for statefulset status.replicas updated to 0 Apr 1 13:11:33.180: INFO: POD NODE PHASE GRACE CONDITIONS Apr 1 13:11:33.180: INFO: ss-0 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:12 +0000 UTC }] Apr 1 13:11:33.181: INFO: Apr 1 
13:11:33.181: INFO: StatefulSet ss has not reached scale 3, at 1 Apr 1 13:11:34.186: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.991842249s Apr 1 13:11:35.335: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.986723262s Apr 1 13:11:36.340: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.837482329s Apr 1 13:11:37.345: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.832820179s Apr 1 13:11:38.349: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.827474129s Apr 1 13:11:39.354: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.82302533s Apr 1 13:11:40.359: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.818044969s Apr 1 13:11:41.364: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.813384107s Apr 1 13:11:42.369: INFO: Verifying statefulset ss doesn't scale past 3 for another 808.820027ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-2766 Apr 1 13:11:43.374: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2766 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 1 13:11:43.624: INFO: stderr: "I0401 13:11:43.508893 1451 log.go:172] (0xc000882420) (0xc00036a6e0) Create stream\nI0401 13:11:43.508945 1451 log.go:172] (0xc000882420) (0xc00036a6e0) Stream added, broadcasting: 1\nI0401 13:11:43.510889 1451 log.go:172] (0xc000882420) Reply frame received for 1\nI0401 13:11:43.511075 1451 log.go:172] (0xc000882420) (0xc000984000) Create stream\nI0401 13:11:43.511102 1451 log.go:172] (0xc000882420) (0xc000984000) Stream added, broadcasting: 3\nI0401 13:11:43.512465 1451 log.go:172] (0xc000882420) Reply frame received for 3\nI0401 13:11:43.512510 1451 log.go:172] (0xc000882420) (0xc0005e4140) Create stream\nI0401 13:11:43.512523 1451 log.go:172] (0xc000882420) (0xc0005e4140) Stream added, 
broadcasting: 5\nI0401 13:11:43.513362 1451 log.go:172] (0xc000882420) Reply frame received for 5\nI0401 13:11:43.617624 1451 log.go:172] (0xc000882420) Data frame received for 3\nI0401 13:11:43.617650 1451 log.go:172] (0xc000984000) (3) Data frame handling\nI0401 13:11:43.617663 1451 log.go:172] (0xc000984000) (3) Data frame sent\nI0401 13:11:43.617670 1451 log.go:172] (0xc000882420) Data frame received for 3\nI0401 13:11:43.617677 1451 log.go:172] (0xc000984000) (3) Data frame handling\nI0401 13:11:43.618094 1451 log.go:172] (0xc000882420) Data frame received for 5\nI0401 13:11:43.618119 1451 log.go:172] (0xc0005e4140) (5) Data frame handling\nI0401 13:11:43.618136 1451 log.go:172] (0xc0005e4140) (5) Data frame sent\nI0401 13:11:43.618146 1451 log.go:172] (0xc000882420) Data frame received for 5\nI0401 13:11:43.618158 1451 log.go:172] (0xc0005e4140) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0401 13:11:43.619916 1451 log.go:172] (0xc000882420) Data frame received for 1\nI0401 13:11:43.619932 1451 log.go:172] (0xc00036a6e0) (1) Data frame handling\nI0401 13:11:43.619939 1451 log.go:172] (0xc00036a6e0) (1) Data frame sent\nI0401 13:11:43.619952 1451 log.go:172] (0xc000882420) (0xc00036a6e0) Stream removed, broadcasting: 1\nI0401 13:11:43.620130 1451 log.go:172] (0xc000882420) Go away received\nI0401 13:11:43.620227 1451 log.go:172] (0xc000882420) (0xc00036a6e0) Stream removed, broadcasting: 1\nI0401 13:11:43.620242 1451 log.go:172] (0xc000882420) (0xc000984000) Stream removed, broadcasting: 3\nI0401 13:11:43.620249 1451 log.go:172] (0xc000882420) (0xc0005e4140) Stream removed, broadcasting: 5\n" Apr 1 13:11:43.624: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 1 13:11:43.624: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 1 13:11:43.624: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec 
--namespace=statefulset-2766 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 1 13:11:43.817: INFO: stderr: "I0401 13:11:43.743740 1471 log.go:172] (0xc000117080) (0xc000662be0) Create stream\nI0401 13:11:43.743785 1471 log.go:172] (0xc000117080) (0xc000662be0) Stream added, broadcasting: 1\nI0401 13:11:43.746087 1471 log.go:172] (0xc000117080) Reply frame received for 1\nI0401 13:11:43.746119 1471 log.go:172] (0xc000117080) (0xc000662320) Create stream\nI0401 13:11:43.746126 1471 log.go:172] (0xc000117080) (0xc000662320) Stream added, broadcasting: 3\nI0401 13:11:43.746796 1471 log.go:172] (0xc000117080) Reply frame received for 3\nI0401 13:11:43.746821 1471 log.go:172] (0xc000117080) (0xc000010000) Create stream\nI0401 13:11:43.746829 1471 log.go:172] (0xc000117080) (0xc000010000) Stream added, broadcasting: 5\nI0401 13:11:43.747461 1471 log.go:172] (0xc000117080) Reply frame received for 5\nI0401 13:11:43.810915 1471 log.go:172] (0xc000117080) Data frame received for 5\nI0401 13:11:43.810962 1471 log.go:172] (0xc000010000) (5) Data frame handling\nI0401 13:11:43.810982 1471 log.go:172] (0xc000010000) (5) Data frame sent\nI0401 13:11:43.810998 1471 log.go:172] (0xc000117080) Data frame received for 5\nI0401 13:11:43.811011 1471 log.go:172] (0xc000010000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0401 13:11:43.811091 1471 log.go:172] (0xc000117080) Data frame received for 3\nI0401 13:11:43.811160 1471 log.go:172] (0xc000662320) (3) Data frame handling\nI0401 13:11:43.811200 1471 log.go:172] (0xc000662320) (3) Data frame sent\nI0401 13:11:43.811222 1471 log.go:172] (0xc000117080) Data frame received for 3\nI0401 13:11:43.811245 1471 log.go:172] (0xc000662320) (3) Data frame handling\nI0401 13:11:43.812747 1471 log.go:172] (0xc000117080) Data frame received for 1\nI0401 13:11:43.812780 1471 log.go:172] (0xc000662be0) (1) Data frame 
handling\nI0401 13:11:43.812803 1471 log.go:172] (0xc000662be0) (1) Data frame sent\nI0401 13:11:43.812824 1471 log.go:172] (0xc000117080) (0xc000662be0) Stream removed, broadcasting: 1\nI0401 13:11:43.812909 1471 log.go:172] (0xc000117080) Go away received\nI0401 13:11:43.813457 1471 log.go:172] (0xc000117080) (0xc000662be0) Stream removed, broadcasting: 1\nI0401 13:11:43.813482 1471 log.go:172] (0xc000117080) (0xc000662320) Stream removed, broadcasting: 3\nI0401 13:11:43.813495 1471 log.go:172] (0xc000117080) (0xc000010000) Stream removed, broadcasting: 5\n" Apr 1 13:11:43.817: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 1 13:11:43.817: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 1 13:11:43.817: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2766 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 1 13:11:44.011: INFO: stderr: "I0401 13:11:43.935761 1492 log.go:172] (0xc0001168f0) (0xc00040a820) Create stream\nI0401 13:11:43.935816 1492 log.go:172] (0xc0001168f0) (0xc00040a820) Stream added, broadcasting: 1\nI0401 13:11:43.938512 1492 log.go:172] (0xc0001168f0) Reply frame received for 1\nI0401 13:11:43.938535 1492 log.go:172] (0xc0001168f0) (0xc00040a8c0) Create stream\nI0401 13:11:43.938540 1492 log.go:172] (0xc0001168f0) (0xc00040a8c0) Stream added, broadcasting: 3\nI0401 13:11:43.939534 1492 log.go:172] (0xc0001168f0) Reply frame received for 3\nI0401 13:11:43.939624 1492 log.go:172] (0xc0001168f0) (0xc00088a000) Create stream\nI0401 13:11:43.939645 1492 log.go:172] (0xc0001168f0) (0xc00088a000) Stream added, broadcasting: 5\nI0401 13:11:43.940508 1492 log.go:172] (0xc0001168f0) Reply frame received for 5\nI0401 13:11:44.005566 1492 log.go:172] (0xc0001168f0) Data frame received for 5\nI0401 13:11:44.005613 1492 log.go:172] (0xc00088a000) (5) Data frame 
handling\nI0401 13:11:44.005644 1492 log.go:172] (0xc00088a000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0401 13:11:44.005837 1492 log.go:172] (0xc0001168f0) Data frame received for 5\nI0401 13:11:44.005862 1492 log.go:172] (0xc00088a000) (5) Data frame handling\nI0401 13:11:44.005885 1492 log.go:172] (0xc0001168f0) Data frame received for 3\nI0401 13:11:44.005896 1492 log.go:172] (0xc00040a8c0) (3) Data frame handling\nI0401 13:11:44.005914 1492 log.go:172] (0xc00040a8c0) (3) Data frame sent\nI0401 13:11:44.005928 1492 log.go:172] (0xc0001168f0) Data frame received for 3\nI0401 13:11:44.005937 1492 log.go:172] (0xc00040a8c0) (3) Data frame handling\nI0401 13:11:44.007020 1492 log.go:172] (0xc0001168f0) Data frame received for 1\nI0401 13:11:44.007048 1492 log.go:172] (0xc00040a820) (1) Data frame handling\nI0401 13:11:44.007059 1492 log.go:172] (0xc00040a820) (1) Data frame sent\nI0401 13:11:44.007074 1492 log.go:172] (0xc0001168f0) (0xc00040a820) Stream removed, broadcasting: 1\nI0401 13:11:44.007090 1492 log.go:172] (0xc0001168f0) Go away received\nI0401 13:11:44.007615 1492 log.go:172] (0xc0001168f0) (0xc00040a820) Stream removed, broadcasting: 1\nI0401 13:11:44.007641 1492 log.go:172] (0xc0001168f0) (0xc00040a8c0) Stream removed, broadcasting: 3\nI0401 13:11:44.007653 1492 log.go:172] (0xc0001168f0) (0xc00088a000) Stream removed, broadcasting: 5\n" Apr 1 13:11:44.012: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 1 13:11:44.012: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 1 13:11:44.016: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Apr 1 13:11:44.016: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Apr 1 13:11:44.016: INFO: Waiting for pod ss-2 to enter 
Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Apr 1 13:11:44.020: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2766 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 1 13:11:44.213: INFO: stderr: "I0401 13:11:44.147140 1513 log.go:172] (0xc000476630) (0xc0006a0960) Create stream\nI0401 13:11:44.147195 1513 log.go:172] (0xc000476630) (0xc0006a0960) Stream added, broadcasting: 1\nI0401 13:11:44.153579 1513 log.go:172] (0xc000476630) Reply frame received for 1\nI0401 13:11:44.153671 1513 log.go:172] (0xc000476630) (0xc0006a00a0) Create stream\nI0401 13:11:44.153692 1513 log.go:172] (0xc000476630) (0xc0006a00a0) Stream added, broadcasting: 3\nI0401 13:11:44.155252 1513 log.go:172] (0xc000476630) Reply frame received for 3\nI0401 13:11:44.155340 1513 log.go:172] (0xc000476630) (0xc00001c000) Create stream\nI0401 13:11:44.155359 1513 log.go:172] (0xc000476630) (0xc00001c000) Stream added, broadcasting: 5\nI0401 13:11:44.156266 1513 log.go:172] (0xc000476630) Reply frame received for 5\nI0401 13:11:44.206636 1513 log.go:172] (0xc000476630) Data frame received for 3\nI0401 13:11:44.206666 1513 log.go:172] (0xc0006a00a0) (3) Data frame handling\nI0401 13:11:44.206692 1513 log.go:172] (0xc000476630) Data frame received for 5\nI0401 13:11:44.206716 1513 log.go:172] (0xc00001c000) (5) Data frame handling\nI0401 13:11:44.206740 1513 log.go:172] (0xc00001c000) (5) Data frame sent\nI0401 13:11:44.206767 1513 log.go:172] (0xc000476630) Data frame received for 5\nI0401 13:11:44.206786 1513 log.go:172] (0xc00001c000) (5) Data frame handling\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0401 13:11:44.206833 1513 log.go:172] (0xc0006a00a0) (3) Data frame sent\nI0401 13:11:44.206893 1513 log.go:172] (0xc000476630) Data frame received for 3\nI0401 13:11:44.206925 1513 log.go:172] (0xc0006a00a0) (3) Data frame handling\nI0401 
13:11:44.208562 1513 log.go:172] (0xc000476630) Data frame received for 1\nI0401 13:11:44.208587 1513 log.go:172] (0xc0006a0960) (1) Data frame handling\nI0401 13:11:44.208617 1513 log.go:172] (0xc0006a0960) (1) Data frame sent\nI0401 13:11:44.208641 1513 log.go:172] (0xc000476630) (0xc0006a0960) Stream removed, broadcasting: 1\nI0401 13:11:44.208941 1513 log.go:172] (0xc000476630) Go away received\nI0401 13:11:44.209091 1513 log.go:172] (0xc000476630) (0xc0006a0960) Stream removed, broadcasting: 1\nI0401 13:11:44.209278 1513 log.go:172] (0xc000476630) (0xc0006a00a0) Stream removed, broadcasting: 3\nI0401 13:11:44.209303 1513 log.go:172] (0xc000476630) (0xc00001c000) Stream removed, broadcasting: 5\n" Apr 1 13:11:44.213: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 1 13:11:44.213: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 1 13:11:44.213: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2766 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 1 13:11:44.456: INFO: stderr: "I0401 13:11:44.338682 1533 log.go:172] (0xc000a7e370) (0xc0003aa6e0) Create stream\nI0401 13:11:44.338750 1533 log.go:172] (0xc000a7e370) (0xc0003aa6e0) Stream added, broadcasting: 1\nI0401 13:11:44.341983 1533 log.go:172] (0xc000a7e370) Reply frame received for 1\nI0401 13:11:44.342066 1533 log.go:172] (0xc000a7e370) (0xc000920000) Create stream\nI0401 13:11:44.342088 1533 log.go:172] (0xc000a7e370) (0xc000920000) Stream added, broadcasting: 3\nI0401 13:11:44.343237 1533 log.go:172] (0xc000a7e370) Reply frame received for 3\nI0401 13:11:44.343311 1533 log.go:172] (0xc000a7e370) (0xc000a62000) Create stream\nI0401 13:11:44.343370 1533 log.go:172] (0xc000a7e370) (0xc000a62000) Stream added, broadcasting: 5\nI0401 13:11:44.344666 1533 log.go:172] (0xc000a7e370) Reply frame received for 5\nI0401 
13:11:44.414696 1533 log.go:172] (0xc000a7e370) Data frame received for 5\nI0401 13:11:44.414726 1533 log.go:172] (0xc000a62000) (5) Data frame handling\nI0401 13:11:44.414744 1533 log.go:172] (0xc000a62000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0401 13:11:44.449825 1533 log.go:172] (0xc000a7e370) Data frame received for 3\nI0401 13:11:44.449866 1533 log.go:172] (0xc000920000) (3) Data frame handling\nI0401 13:11:44.449878 1533 log.go:172] (0xc000920000) (3) Data frame sent\nI0401 13:11:44.449884 1533 log.go:172] (0xc000a7e370) Data frame received for 3\nI0401 13:11:44.449906 1533 log.go:172] (0xc000a7e370) Data frame received for 5\nI0401 13:11:44.449950 1533 log.go:172] (0xc000a62000) (5) Data frame handling\nI0401 13:11:44.449978 1533 log.go:172] (0xc000920000) (3) Data frame handling\nI0401 13:11:44.451518 1533 log.go:172] (0xc000a7e370) Data frame received for 1\nI0401 13:11:44.451551 1533 log.go:172] (0xc0003aa6e0) (1) Data frame handling\nI0401 13:11:44.451577 1533 log.go:172] (0xc0003aa6e0) (1) Data frame sent\nI0401 13:11:44.451601 1533 log.go:172] (0xc000a7e370) (0xc0003aa6e0) Stream removed, broadcasting: 1\nI0401 13:11:44.451622 1533 log.go:172] (0xc000a7e370) Go away received\nI0401 13:11:44.452053 1533 log.go:172] (0xc000a7e370) (0xc0003aa6e0) Stream removed, broadcasting: 1\nI0401 13:11:44.452081 1533 log.go:172] (0xc000a7e370) (0xc000920000) Stream removed, broadcasting: 3\nI0401 13:11:44.452091 1533 log.go:172] (0xc000a7e370) (0xc000a62000) Stream removed, broadcasting: 5\n" Apr 1 13:11:44.456: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 1 13:11:44.456: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 1 13:11:44.456: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2766 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' 
Apr 1 13:11:44.698: INFO: stderr: "I0401 13:11:44.586496 1554 log.go:172] (0xc000404580) (0xc0008e8780) Create stream\nI0401 13:11:44.586570 1554 log.go:172] (0xc000404580) (0xc0008e8780) Stream added, broadcasting: 1\nI0401 13:11:44.589905 1554 log.go:172] (0xc000404580) Reply frame received for 1\nI0401 13:11:44.589947 1554 log.go:172] (0xc000404580) (0xc0008e8000) Create stream\nI0401 13:11:44.589958 1554 log.go:172] (0xc000404580) (0xc0008e8000) Stream added, broadcasting: 3\nI0401 13:11:44.590948 1554 log.go:172] (0xc000404580) Reply frame received for 3\nI0401 13:11:44.590989 1554 log.go:172] (0xc000404580) (0xc0008e80a0) Create stream\nI0401 13:11:44.591004 1554 log.go:172] (0xc000404580) (0xc0008e80a0) Stream added, broadcasting: 5\nI0401 13:11:44.592048 1554 log.go:172] (0xc000404580) Reply frame received for 5\nI0401 13:11:44.657310 1554 log.go:172] (0xc000404580) Data frame received for 5\nI0401 13:11:44.657337 1554 log.go:172] (0xc0008e80a0) (5) Data frame handling\nI0401 13:11:44.657356 1554 log.go:172] (0xc0008e80a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0401 13:11:44.690862 1554 log.go:172] (0xc000404580) Data frame received for 3\nI0401 13:11:44.690892 1554 log.go:172] (0xc0008e8000) (3) Data frame handling\nI0401 13:11:44.691048 1554 log.go:172] (0xc0008e8000) (3) Data frame sent\nI0401 13:11:44.691309 1554 log.go:172] (0xc000404580) Data frame received for 5\nI0401 13:11:44.691345 1554 log.go:172] (0xc0008e80a0) (5) Data frame handling\nI0401 13:11:44.691582 1554 log.go:172] (0xc000404580) Data frame received for 3\nI0401 13:11:44.691616 1554 log.go:172] (0xc0008e8000) (3) Data frame handling\nI0401 13:11:44.693096 1554 log.go:172] (0xc000404580) Data frame received for 1\nI0401 13:11:44.693275 1554 log.go:172] (0xc0008e8780) (1) Data frame handling\nI0401 13:11:44.693301 1554 log.go:172] (0xc0008e8780) (1) Data frame sent\nI0401 13:11:44.693329 1554 log.go:172] (0xc000404580) (0xc0008e8780) Stream removed, 
broadcasting: 1\nI0401 13:11:44.693354 1554 log.go:172] (0xc000404580) Go away received\nI0401 13:11:44.693776 1554 log.go:172] (0xc000404580) (0xc0008e8780) Stream removed, broadcasting: 1\nI0401 13:11:44.693799 1554 log.go:172] (0xc000404580) (0xc0008e8000) Stream removed, broadcasting: 3\nI0401 13:11:44.693811 1554 log.go:172] (0xc000404580) (0xc0008e80a0) Stream removed, broadcasting: 5\n" Apr 1 13:11:44.698: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 1 13:11:44.698: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 1 13:11:44.698: INFO: Waiting for statefulset status.replicas updated to 0 Apr 1 13:11:44.702: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Apr 1 13:11:54.712: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 1 13:11:54.712: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Apr 1 13:11:54.712: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Apr 1 13:11:54.721: INFO: POD NODE PHASE GRACE CONDITIONS Apr 1 13:11:54.721: INFO: ss-0 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:12 +0000 UTC }] Apr 1 13:11:54.721: INFO: ss-1 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 
0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:33 +0000 UTC }] Apr 1 13:11:54.721: INFO: ss-2 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:33 +0000 UTC }] Apr 1 13:11:54.721: INFO: Apr 1 13:11:54.721: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 1 13:11:55.772: INFO: POD NODE PHASE GRACE CONDITIONS Apr 1 13:11:55.772: INFO: ss-0 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:12 +0000 UTC }] Apr 1 13:11:55.772: INFO: ss-1 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:33 +0000 UTC }] Apr 1 13:11:55.773: INFO: ss-2 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:33 +0000 UTC } {Ready False 0001-01-01 
00:00:00 +0000 UTC 2020-04-01 13:11:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:33 +0000 UTC }] Apr 1 13:11:55.773: INFO: Apr 1 13:11:55.773: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 1 13:11:56.790: INFO: POD NODE PHASE GRACE CONDITIONS Apr 1 13:11:56.790: INFO: ss-0 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:12 +0000 UTC }] Apr 1 13:11:56.790: INFO: ss-1 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:33 +0000 UTC }] Apr 1 13:11:56.790: INFO: ss-2 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:33 +0000 UTC }] Apr 1 13:11:56.790: 
INFO: Apr 1 13:11:56.790: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 1 13:11:57.794: INFO: POD NODE PHASE GRACE CONDITIONS Apr 1 13:11:57.794: INFO: ss-0 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:12 +0000 UTC }] Apr 1 13:11:57.794: INFO: ss-1 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:33 +0000 UTC }] Apr 1 13:11:57.794: INFO: ss-2 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:33 +0000 UTC }] Apr 1 13:11:57.794: INFO: Apr 1 13:11:57.794: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 1 13:11:58.800: INFO: POD NODE PHASE GRACE CONDITIONS Apr 1 13:11:58.800: INFO: ss-0 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:44 +0000 UTC 
ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:12 +0000 UTC }] Apr 1 13:11:58.800: INFO: ss-1 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:33 +0000 UTC }] Apr 1 13:11:58.800: INFO: ss-2 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:33 +0000 UTC }] Apr 1 13:11:58.800: INFO: Apr 1 13:11:58.800: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 1 13:11:59.805: INFO: POD NODE PHASE GRACE CONDITIONS Apr 1 13:11:59.805: INFO: ss-0 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:12 +0000 UTC }] Apr 1 13:11:59.805: INFO: ss-1 iruya-worker2 Pending 30s [{Initialized 
True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:33 +0000 UTC }] Apr 1 13:11:59.805: INFO: ss-2 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:33 +0000 UTC }] Apr 1 13:11:59.805: INFO: Apr 1 13:11:59.805: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 1 13:12:00.810: INFO: POD NODE PHASE GRACE CONDITIONS Apr 1 13:12:00.810: INFO: ss-0 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:12 +0000 UTC }] Apr 1 13:12:00.810: INFO: ss-1 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled 
True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:33 +0000 UTC }] Apr 1 13:12:00.811: INFO: ss-2 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:33 +0000 UTC }] Apr 1 13:12:00.811: INFO: Apr 1 13:12:00.811: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 1 13:12:01.816: INFO: POD NODE PHASE GRACE CONDITIONS Apr 1 13:12:01.816: INFO: ss-0 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:12 +0000 UTC }] Apr 1 13:12:01.816: INFO: ss-1 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:45 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:33 +0000 UTC }] Apr 1 13:12:01.816: INFO: ss-2 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady 
False 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 13:11:33 +0000 UTC }] Apr 1 13:12:01.816: INFO: Apr 1 13:12:01.816: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 1 13:12:02.820: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.901290026s Apr 1 13:12:03.824: INFO: Verifying statefulset ss doesn't scale past 0 for another 897.731476ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-2766 Apr 1 13:12:04.828: INFO: Scaling statefulset ss to 0 Apr 1 13:12:04.838: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Apr 1 13:12:04.840: INFO: Deleting all statefulset in ns statefulset-2766 Apr 1 13:12:04.842: INFO: Scaling statefulset ss to 0 Apr 1 13:12:04.849: INFO: Waiting for statefulset status.replicas updated to 0 Apr 1 13:12:04.852: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 13:12:04.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2766" for this suite. 
Apr 1 13:12:10.891: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 13:12:10.996: INFO: namespace statefulset-2766 deletion completed in 6.127714966s • [SLOW TEST:58.223 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 13:12:10.996: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Apr 1 13:12:11.045: INFO: Waiting up to 5m0s for pod "downward-api-f0154634-42b6-4e77-8ce8-7de323126667" in namespace "downward-api-40" to be "success or failure" Apr 1 13:12:11.049: INFO: Pod "downward-api-f0154634-42b6-4e77-8ce8-7de323126667": Phase="Pending", Reason="", readiness=false. Elapsed: 4.097919ms Apr 1 13:12:13.059: INFO: Pod "downward-api-f0154634-42b6-4e77-8ce8-7de323126667": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.013848154s Apr 1 13:12:15.063: INFO: Pod "downward-api-f0154634-42b6-4e77-8ce8-7de323126667": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017313241s STEP: Saw pod success Apr 1 13:12:15.063: INFO: Pod "downward-api-f0154634-42b6-4e77-8ce8-7de323126667" satisfied condition "success or failure" Apr 1 13:12:15.065: INFO: Trying to get logs from node iruya-worker pod downward-api-f0154634-42b6-4e77-8ce8-7de323126667 container dapi-container: STEP: delete the pod Apr 1 13:12:15.087: INFO: Waiting for pod downward-api-f0154634-42b6-4e77-8ce8-7de323126667 to disappear Apr 1 13:12:15.091: INFO: Pod downward-api-f0154634-42b6-4e77-8ce8-7de323126667 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 13:12:15.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-40" for this suite. Apr 1 13:12:21.107: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 13:12:21.185: INFO: namespace downward-api-40 deletion completed in 6.090553966s • [SLOW TEST:10.189 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 13:12:21.186: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-7a61cc6a-b7d4-40be-b58e-078e1daf5bec in namespace container-probe-4663 Apr 1 13:12:25.287: INFO: Started pod liveness-7a61cc6a-b7d4-40be-b58e-078e1daf5bec in namespace container-probe-4663 STEP: checking the pod's current state and verifying that restartCount is present Apr 1 13:12:25.290: INFO: Initial restart count of pod liveness-7a61cc6a-b7d4-40be-b58e-078e1daf5bec is 0 Apr 1 13:12:41.405: INFO: Restart count of pod container-probe-4663/liveness-7a61cc6a-b7d4-40be-b58e-078e1daf5bec is now 1 (16.115066679s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 13:12:41.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4663" for this suite. 
Apr 1 13:12:47.485: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 13:12:47.586: INFO: namespace container-probe-4663 deletion completed in 6.11566209s • [SLOW TEST:26.400 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 13:12:47.587: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-6fd4b38f-e70d-448a-97e5-08a9a038d31f STEP: Creating a pod to test consume configMaps Apr 1 13:12:47.668: INFO: Waiting up to 5m0s for pod "pod-configmaps-defc9c49-2c5c-4ccd-9c2f-a4222ca04ebe" in namespace "configmap-9095" to be "success or failure" Apr 1 13:12:47.718: INFO: Pod "pod-configmaps-defc9c49-2c5c-4ccd-9c2f-a4222ca04ebe": Phase="Pending", Reason="", readiness=false. Elapsed: 50.178206ms Apr 1 13:12:49.916: INFO: Pod "pod-configmaps-defc9c49-2c5c-4ccd-9c2f-a4222ca04ebe": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.248107045s Apr 1 13:12:51.921: INFO: Pod "pod-configmaps-defc9c49-2c5c-4ccd-9c2f-a4222ca04ebe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.252732441s STEP: Saw pod success Apr 1 13:12:51.921: INFO: Pod "pod-configmaps-defc9c49-2c5c-4ccd-9c2f-a4222ca04ebe" satisfied condition "success or failure" Apr 1 13:12:51.924: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-defc9c49-2c5c-4ccd-9c2f-a4222ca04ebe container configmap-volume-test: STEP: delete the pod Apr 1 13:12:51.971: INFO: Waiting for pod pod-configmaps-defc9c49-2c5c-4ccd-9c2f-a4222ca04ebe to disappear Apr 1 13:12:51.984: INFO: Pod pod-configmaps-defc9c49-2c5c-4ccd-9c2f-a4222ca04ebe no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 13:12:51.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9095" for this suite. Apr 1 13:12:58.000: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 13:12:58.081: INFO: namespace configmap-9095 deletion completed in 6.093553264s • [SLOW TEST:10.494 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: 
Creating a kubernetes client Apr 1 13:12:58.081: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-c19f36c4-407c-48a8-9604-dcd9fa3e8d61 STEP: Creating a pod to test consume secrets Apr 1 13:12:58.271: INFO: Waiting up to 5m0s for pod "pod-secrets-bcd8c4f8-2817-4053-bf71-53eed7ee5b46" in namespace "secrets-9219" to be "success or failure" Apr 1 13:12:58.278: INFO: Pod "pod-secrets-bcd8c4f8-2817-4053-bf71-53eed7ee5b46": Phase="Pending", Reason="", readiness=false. Elapsed: 6.83239ms Apr 1 13:13:00.282: INFO: Pod "pod-secrets-bcd8c4f8-2817-4053-bf71-53eed7ee5b46": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010984497s Apr 1 13:13:02.287: INFO: Pod "pod-secrets-bcd8c4f8-2817-4053-bf71-53eed7ee5b46": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015275744s STEP: Saw pod success Apr 1 13:13:02.287: INFO: Pod "pod-secrets-bcd8c4f8-2817-4053-bf71-53eed7ee5b46" satisfied condition "success or failure" Apr 1 13:13:02.290: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-bcd8c4f8-2817-4053-bf71-53eed7ee5b46 container secret-volume-test: STEP: delete the pod Apr 1 13:13:02.326: INFO: Waiting for pod pod-secrets-bcd8c4f8-2817-4053-bf71-53eed7ee5b46 to disappear Apr 1 13:13:02.338: INFO: Pod pod-secrets-bcd8c4f8-2817-4053-bf71-53eed7ee5b46 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 13:13:02.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9219" for this suite. 
Apr 1 13:13:08.365: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 13:13:08.443: INFO: namespace secrets-9219 deletion completed in 6.101636924s STEP: Destroying namespace "secret-namespace-2278" for this suite. Apr 1 13:13:14.455: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 13:13:14.533: INFO: namespace secret-namespace-2278 deletion completed in 6.090503781s • [SLOW TEST:16.452 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 13:13:14.534: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 1 13:13:14.568: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - 
--namespace=kubectl-2524' Apr 1 13:13:14.832: INFO: stderr: "" Apr 1 13:13:14.832: INFO: stdout: "replicationcontroller/redis-master created\n" Apr 1 13:13:14.832: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2524' Apr 1 13:13:15.085: INFO: stderr: "" Apr 1 13:13:15.085: INFO: stdout: "service/redis-master created\n" STEP: Waiting for Redis master to start. Apr 1 13:13:16.150: INFO: Selector matched 1 pods for map[app:redis] Apr 1 13:13:16.150: INFO: Found 0 / 1 Apr 1 13:13:17.090: INFO: Selector matched 1 pods for map[app:redis] Apr 1 13:13:17.090: INFO: Found 0 / 1 Apr 1 13:13:18.090: INFO: Selector matched 1 pods for map[app:redis] Apr 1 13:13:18.090: INFO: Found 1 / 1 Apr 1 13:13:18.090: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Apr 1 13:13:18.093: INFO: Selector matched 1 pods for map[app:redis] Apr 1 13:13:18.093: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Apr 1 13:13:18.094: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-q6gjc --namespace=kubectl-2524' Apr 1 13:13:18.203: INFO: stderr: "" Apr 1 13:13:18.203: INFO: stdout: "Name: redis-master-q6gjc\nNamespace: kubectl-2524\nPriority: 0\nNode: iruya-worker2/172.17.0.5\nStart Time: Wed, 01 Apr 2020 13:13:14 +0000\nLabels: app=redis\n role=master\nAnnotations: <none>\nStatus: Running\nIP: 10.244.1.157\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://3399c43d463556abab5b3f9836692402b1d52d412ace7db5977dbf616acbe4d6\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Wed, 01 Apr 2020 13:13:17 +0000\n Ready: True\n Restart Count: 0\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from 
default-token-w99nw (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-w99nw:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-w99nw\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: <none>\nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 4s default-scheduler Successfully assigned kubectl-2524/redis-master-q6gjc to iruya-worker2\n Normal Pulled 2s kubelet, iruya-worker2 Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 2s kubelet, iruya-worker2 Created container redis-master\n Normal Started 1s kubelet, iruya-worker2 Started container redis-master\n" Apr 1 13:13:18.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-2524' Apr 1 13:13:18.309: INFO: stderr: "" Apr 1 13:13:18.309: INFO: stdout: "Name: redis-master\nNamespace: kubectl-2524\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: <none>\nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s replication-controller Created pod: redis-master-q6gjc\n" Apr 1 13:13:18.309: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-2524' Apr 1 13:13:18.406: INFO: stderr: "" Apr 1 13:13:18.406: INFO: stdout: "Name: redis-master\nNamespace: kubectl-2524\nLabels: app=redis\n role=master\nAnnotations: <none>
\nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.96.68.173\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.244.1.157:6379\nSession Affinity: None\nEvents: <none>\n" Apr 1 13:13:18.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-control-plane' Apr 1 13:13:18.537: INFO: stderr: "" Apr 1 13:13:18.537: INFO: stdout: "Name: iruya-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=iruya-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:24:20 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Wed, 01 Apr 2020 13:13:02 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Wed, 01 Apr 2020 13:13:02 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Wed, 01 Apr 2020 13:13:02 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Wed, 01 Apr 2020 13:13:02 +0000 Sun, 15 Mar 2020 18:25:00 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.7\n Hostname: iruya-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 
09f14f6f4d1640fcaab2243401c9f154\n System UUID: 7c6ca533-492e-400c-b058-c282f97a69ec\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.15.7\n Kube-Proxy Version: v1.15.7\nPodCIDR: 10.244.0.0/24\nNon-terminated Pods: (7 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system etcd-iruya-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 16d\n kube-system kindnet-zn8sx 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 16d\n kube-system kube-apiserver-iruya-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 16d\n kube-system kube-controller-manager-iruya-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 16d\n kube-system kube-proxy-46nsr 0 (0%) 0 (0%) 0 (0%) 0 (0%) 16d\n kube-system kube-scheduler-iruya-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 16d\n local-path-storage local-path-provisioner-d4947b89c-72frh 0 (0%) 0 (0%) 0 (0%) 0 (0%) 16d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 650m (4%) 100m (0%)\n memory 50Mi (0%) 50Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" Apr 1 13:13:18.538: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-2524' Apr 1 13:13:18.645: INFO: stderr: "" Apr 1 13:13:18.645: INFO: stdout: "Name: kubectl-2524\nLabels: e2e-framework=kubectl\n e2e-run=976f6ba7-2add-4c0c-886c-816693bc9320\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 13:13:18.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2524" for this suite. 
Apr 1 13:13:40.665: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 13:13:40.742: INFO: namespace kubectl-2524 deletion completed in 22.092261144s
• [SLOW TEST:26.208 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl describe
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should check if kubectl describe prints relevant information for rc and pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 13:13:40.743: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Apr 1 13:13:40.842: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-931,SelfLink:/api/v1/namespaces/watch-931/configmaps/e2e-watch-test-watch-closed,UID:aff0423c-6c3a-4bfe-a4b1-0309681ca9c9,ResourceVersion:3033486,Generation:0,CreationTimestamp:2020-04-01 13:13:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Apr 1 13:13:40.842: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-931,SelfLink:/api/v1/namespaces/watch-931/configmaps/e2e-watch-test-watch-closed,UID:aff0423c-6c3a-4bfe-a4b1-0309681ca9c9,ResourceVersion:3033487,Generation:0,CreationTimestamp:2020-04-01 13:13:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Apr 1 13:13:40.858: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-931,SelfLink:/api/v1/namespaces/watch-931/configmaps/e2e-watch-test-watch-closed,UID:aff0423c-6c3a-4bfe-a4b1-0309681ca9c9,ResourceVersion:3033488,Generation:0,CreationTimestamp:2020-04-01 13:13:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Apr 1 13:13:40.858: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-931,SelfLink:/api/v1/namespaces/watch-931/configmaps/e2e-watch-test-watch-closed,UID:aff0423c-6c3a-4bfe-a4b1-0309681ca9c9,ResourceVersion:3033489,Generation:0,CreationTimestamp:2020-04-01 13:13:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 13:13:40.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-931" for this suite.
Apr 1 13:13:46.875: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 13:13:46.955: INFO: namespace watch-931 deletion completed in 6.09271331s
• [SLOW TEST:6.212 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should be able to restart watching from the last resource version observed by the previous watch [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 13:13:46.956: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-projected-all-test-volume-69b24b40-d494-487d-af41-f25058813498
STEP: Creating secret with name secret-projected-all-test-volume-b9ac2626-a028-4c98-9964-251e37c471ab
STEP: Creating a pod to test Check all projections for projected volume plugin
Apr 1 13:13:47.057: INFO: Waiting up to 5m0s for pod "projected-volume-259ccf94-3210-416e-a7c9-ca65819eaba9" in namespace "projected-6619" to be "success or failure"
Apr 1 13:13:47.077: INFO: Pod 
"projected-volume-259ccf94-3210-416e-a7c9-ca65819eaba9": Phase="Pending", Reason="", readiness=false. Elapsed: 20.218035ms
Apr 1 13:13:49.081: INFO: Pod "projected-volume-259ccf94-3210-416e-a7c9-ca65819eaba9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023944362s
Apr 1 13:13:51.084: INFO: Pod "projected-volume-259ccf94-3210-416e-a7c9-ca65819eaba9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027300546s
STEP: Saw pod success
Apr 1 13:13:51.084: INFO: Pod "projected-volume-259ccf94-3210-416e-a7c9-ca65819eaba9" satisfied condition "success or failure"
Apr 1 13:13:51.087: INFO: Trying to get logs from node iruya-worker pod projected-volume-259ccf94-3210-416e-a7c9-ca65819eaba9 container projected-all-volume-test: 
STEP: delete the pod
Apr 1 13:13:51.122: INFO: Waiting for pod projected-volume-259ccf94-3210-416e-a7c9-ca65819eaba9 to disappear
Apr 1 13:13:51.134: INFO: Pod projected-volume-259ccf94-3210-416e-a7c9-ca65819eaba9 no longer exists
[AfterEach] [sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 13:13:51.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6619" for this suite.
Apr 1 13:13:57.150: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 13:13:57.232: INFO: namespace projected-6619 deletion completed in 6.094232721s
• [SLOW TEST:10.277 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 13:13:57.232: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support --unix-socket=/path [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Starting the proxy
Apr 1 13:13:57.308: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix662360388/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 13:13:57.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9095" for this suite.
Apr 1 13:14:03.396: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 13:14:03.480: INFO: namespace kubectl-9095 deletion completed in 6.095072593s
• [SLOW TEST:6.247 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Proxy server
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should support --unix-socket=/path [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 13:14:03.481: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-qfsw
STEP: Creating a pod to test atomic-volume-subpath
Apr 1 13:14:03.550: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-qfsw" in namespace "subpath-3888" to be "success or failure"
Apr 1 13:14:03.555: INFO: Pod "pod-subpath-test-configmap-qfsw": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.508109ms
Apr 1 13:14:05.559: INFO: Pod "pod-subpath-test-configmap-qfsw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00909295s
Apr 1 13:14:07.564: INFO: Pod "pod-subpath-test-configmap-qfsw": Phase="Running", Reason="", readiness=true. Elapsed: 4.013567874s
Apr 1 13:14:09.568: INFO: Pod "pod-subpath-test-configmap-qfsw": Phase="Running", Reason="", readiness=true. Elapsed: 6.018064745s
Apr 1 13:14:11.573: INFO: Pod "pod-subpath-test-configmap-qfsw": Phase="Running", Reason="", readiness=true. Elapsed: 8.022883528s
Apr 1 13:14:13.577: INFO: Pod "pod-subpath-test-configmap-qfsw": Phase="Running", Reason="", readiness=true. Elapsed: 10.027224426s
Apr 1 13:14:15.582: INFO: Pod "pod-subpath-test-configmap-qfsw": Phase="Running", Reason="", readiness=true. Elapsed: 12.031304766s
Apr 1 13:14:17.586: INFO: Pod "pod-subpath-test-configmap-qfsw": Phase="Running", Reason="", readiness=true. Elapsed: 14.036134428s
Apr 1 13:14:19.591: INFO: Pod "pod-subpath-test-configmap-qfsw": Phase="Running", Reason="", readiness=true. Elapsed: 16.040578602s
Apr 1 13:14:21.595: INFO: Pod "pod-subpath-test-configmap-qfsw": Phase="Running", Reason="", readiness=true. Elapsed: 18.044966249s
Apr 1 13:14:23.599: INFO: Pod "pod-subpath-test-configmap-qfsw": Phase="Running", Reason="", readiness=true. Elapsed: 20.04913739s
Apr 1 13:14:25.603: INFO: Pod "pod-subpath-test-configmap-qfsw": Phase="Running", Reason="", readiness=true. Elapsed: 22.052393996s
Apr 1 13:14:27.607: INFO: Pod "pod-subpath-test-configmap-qfsw": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.056711876s
STEP: Saw pod success
Apr 1 13:14:27.607: INFO: Pod "pod-subpath-test-configmap-qfsw" satisfied condition "success or failure"
Apr 1 13:14:27.610: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-configmap-qfsw container test-container-subpath-configmap-qfsw: 
STEP: delete the pod
Apr 1 13:14:27.628: INFO: Waiting for pod pod-subpath-test-configmap-qfsw to disappear
Apr 1 13:14:27.633: INFO: Pod pod-subpath-test-configmap-qfsw no longer exists
STEP: Deleting pod pod-subpath-test-configmap-qfsw
Apr 1 13:14:27.633: INFO: Deleting pod "pod-subpath-test-configmap-qfsw" in namespace "subpath-3888"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 13:14:27.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-3888" for this suite.
Apr 1 13:14:33.661: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 13:14:33.768: INFO: namespace subpath-3888 deletion completed in 6.130864107s
• [SLOW TEST:30.287 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
should support subpaths with configmap pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 13:14:33.769: INFO: >>> 
kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167
[It] should call prestop when killing a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating server pod server in namespace prestop-6840
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-6840
STEP: Deleting pre-stop pod
Apr 1 13:14:46.887: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 13:14:46.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-6840" for this suite.
Apr 1 13:15:24.924: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 13:15:25.008: INFO: namespace prestop-6840 deletion completed in 38.110880041s
• [SLOW TEST:51.240 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should call prestop when killing a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 13:15:25.008: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-5873a27c-b7af-42b3-bdeb-d3ccef63b7b6
STEP: Creating configMap with name cm-test-opt-upd-a207e484-c823-498c-b9bb-ccad29be06f0
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-5873a27c-b7af-42b3-bdeb-d3ccef63b7b6
STEP: Updating configmap cm-test-opt-upd-a207e484-c823-498c-b9bb-ccad29be06f0
STEP: Creating configMap with name cm-test-opt-create-4885cf9a-7403-46dd-97ef-29a018079ccc
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 13:16:39.510: INFO: Waiting up to 3m0s 
for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-386" for this suite.
Apr 1 13:17:01.526: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 13:17:01.610: INFO: namespace configmap-386 deletion completed in 22.097429466s
• [SLOW TEST:96.601 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 13:17:01.610: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap that has name configmap-test-emptyKey-9b3bf658-3ddf-4d90-be1b-d8299c77ed40
[AfterEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 13:17:01.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7945" for this suite.
Apr 1 13:17:07.688: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 13:17:07.761: INFO: namespace configmap-7945 deletion completed in 6.102219104s
• [SLOW TEST:6.151 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
should fail to create ConfigMap with empty key [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 13:17:07.762: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop simple daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Apr 1 13:17:07.845: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 1 13:17:07.849: INFO: Number of nodes with available pods: 0
Apr 1 13:17:07.849: INFO: Node iruya-worker is running more than one daemon pod
Apr 1 13:17:08.855: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 1 13:17:08.859: INFO: Number of nodes with available pods: 0
Apr 1 13:17:08.859: INFO: Node iruya-worker is running more than one daemon pod
Apr 1 13:17:09.909: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 1 13:17:09.912: INFO: Number of nodes with available pods: 0
Apr 1 13:17:09.912: INFO: Node iruya-worker is running more than one daemon pod
Apr 1 13:17:10.854: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 1 13:17:10.858: INFO: Number of nodes with available pods: 0
Apr 1 13:17:10.858: INFO: Node iruya-worker is running more than one daemon pod
Apr 1 13:17:11.854: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 1 13:17:11.857: INFO: Number of nodes with available pods: 2
Apr 1 13:17:11.857: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Apr 1 13:17:11.875: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 1 13:17:11.877: INFO: Number of nodes with available pods: 1
Apr 1 13:17:11.878: INFO: Node iruya-worker is running more than one daemon pod
Apr 1 13:17:12.883: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 1 13:17:12.886: INFO: Number of nodes with available pods: 1
Apr 1 13:17:12.886: INFO: Node iruya-worker is running more than one daemon pod
Apr 1 13:17:13.883: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 1 13:17:13.886: INFO: Number of nodes with available pods: 1
Apr 1 13:17:13.886: INFO: Node iruya-worker is running more than one daemon pod
Apr 1 13:17:14.882: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 1 13:17:14.885: INFO: Number of nodes with available pods: 1
Apr 1 13:17:14.886: INFO: Node iruya-worker is running more than one daemon pod
Apr 1 13:17:15.883: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 1 13:17:15.885: INFO: Number of nodes with available pods: 1
Apr 1 13:17:15.886: INFO: Node iruya-worker is running more than one daemon pod
Apr 1 13:17:16.883: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 1 13:17:16.886: INFO: Number of nodes with available pods: 1
Apr 1 13:17:16.886: INFO: Node iruya-worker is running more than one daemon pod
Apr 1 13:17:17.882: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 1 13:17:17.885: INFO: Number of nodes with available pods: 1
Apr 1 13:17:17.885: INFO: Node iruya-worker is running more than one daemon pod
Apr 1 13:17:18.882: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 1 13:17:18.886: INFO: Number of nodes with available pods: 1
Apr 1 13:17:18.886: INFO: Node iruya-worker is running more than one daemon pod
Apr 1 13:17:19.882: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 1 13:17:19.886: INFO: Number of nodes with available pods: 1
Apr 1 13:17:19.886: INFO: Node iruya-worker is running more than one daemon pod
Apr 1 13:17:20.883: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 1 13:17:20.886: INFO: Number of nodes with available pods: 1
Apr 1 13:17:20.886: INFO: Node iruya-worker is running more than one daemon pod
Apr 1 13:17:21.883: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 1 13:17:21.886: INFO: Number of nodes with available pods: 1
Apr 1 13:17:21.886: INFO: Node iruya-worker is running more than one daemon pod
Apr 1 13:17:22.883: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 1 13:17:22.886: INFO: Number of nodes with available pods: 1
Apr 1 13:17:22.886: INFO: Node iruya-worker is running more than one daemon pod
Apr 1 13:17:23.882: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 1 13:17:23.886: INFO: Number of nodes with available pods: 1
Apr 1 13:17:23.886: INFO: Node iruya-worker is running more than one daemon pod
Apr 1 13:17:24.882: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 1 13:17:24.885: INFO: Number of nodes with available pods: 1
Apr 1 13:17:24.885: INFO: Node iruya-worker is running more than one daemon pod
Apr 1 13:17:25.883: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 1 13:17:25.887: INFO: Number of nodes with available pods: 2
Apr 1 13:17:25.887: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4436, will wait for the garbage collector to delete the pods
Apr 1 13:17:25.948: INFO: Deleting DaemonSet.extensions daemon-set took: 6.502764ms
Apr 1 13:17:26.248: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.29978ms
Apr 1 13:17:32.267: INFO: Number of nodes with available pods: 0
Apr 1 13:17:32.267: INFO: Number of running nodes: 0, number of available pods: 0
Apr 1 13:17:32.270: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4436/daemonsets","resourceVersion":"3034160"},"items":null}
Apr 1 13:17:32.272: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4436/pods","resourceVersion":"3034160"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 13:17:32.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4436" for this suite.
Apr 1 13:17:38.299: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 13:17:38.379: INFO: namespace daemonsets-4436 deletion completed in 6.095729551s
• [SLOW TEST:30.617 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should run and stop simple daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 13:17:38.379: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 1 13:17:38.462: INFO: 
Waiting up to 5m0s for pod "downwardapi-volume-4189e362-f701-4b84-8a5d-d8d7e9319a11" in namespace "projected-7135" to be "success or failure" Apr 1 13:17:38.467: INFO: Pod "downwardapi-volume-4189e362-f701-4b84-8a5d-d8d7e9319a11": Phase="Pending", Reason="", readiness=false. Elapsed: 5.311369ms Apr 1 13:17:40.470: INFO: Pod "downwardapi-volume-4189e362-f701-4b84-8a5d-d8d7e9319a11": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008724097s Apr 1 13:17:42.475: INFO: Pod "downwardapi-volume-4189e362-f701-4b84-8a5d-d8d7e9319a11": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013610131s STEP: Saw pod success Apr 1 13:17:42.475: INFO: Pod "downwardapi-volume-4189e362-f701-4b84-8a5d-d8d7e9319a11" satisfied condition "success or failure" Apr 1 13:17:42.479: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-4189e362-f701-4b84-8a5d-d8d7e9319a11 container client-container: STEP: delete the pod Apr 1 13:17:42.517: INFO: Waiting for pod downwardapi-volume-4189e362-f701-4b84-8a5d-d8d7e9319a11 to disappear Apr 1 13:17:42.542: INFO: Pod downwardapi-volume-4189e362-f701-4b84-8a5d-d8d7e9319a11 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 13:17:42.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7135" for this suite. 
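The projected downwardAPI test above mounts a volume that exposes the container's own memory limit as a file. A minimal sketch of the kind of pod this test creates (pod and volume names here are illustrative, not the suite's generated ones):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # illustrative name
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    resources:
      limits:
        memory: "64Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
  restartPolicy: Never
```

The container reads back `limits.memory` from the projected file, which is what the test asserts on via the container logs.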
Apr 1 13:17:48.560: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 13:17:48.660: INFO: namespace projected-7135 deletion completed in 6.114693874s • [SLOW TEST:10.280 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 13:17:48.660: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Apr 1 13:17:48.734: INFO: Waiting up to 5m0s for pod "downward-api-425bb345-36ef-4622-90c9-e52a00318e05" in namespace "downward-api-1879" to be "success or failure" Apr 1 13:17:48.737: INFO: Pod "downward-api-425bb345-36ef-4622-90c9-e52a00318e05": Phase="Pending", Reason="", readiness=false. Elapsed: 2.685838ms Apr 1 13:17:50.740: INFO: Pod "downward-api-425bb345-36ef-4622-90c9-e52a00318e05": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.006206621s Apr 1 13:17:52.745: INFO: Pod "downward-api-425bb345-36ef-4622-90c9-e52a00318e05": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010465018s STEP: Saw pod success Apr 1 13:17:52.745: INFO: Pod "downward-api-425bb345-36ef-4622-90c9-e52a00318e05" satisfied condition "success or failure" Apr 1 13:17:52.748: INFO: Trying to get logs from node iruya-worker pod downward-api-425bb345-36ef-4622-90c9-e52a00318e05 container dapi-container: STEP: delete the pod Apr 1 13:17:52.768: INFO: Waiting for pod downward-api-425bb345-36ef-4622-90c9-e52a00318e05 to disappear Apr 1 13:17:52.772: INFO: Pod downward-api-425bb345-36ef-4622-90c9-e52a00318e05 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 13:17:52.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1879" for this suite. Apr 1 13:17:58.788: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 13:17:58.866: INFO: namespace downward-api-1879 deletion completed in 6.090826872s • [SLOW TEST:10.206 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 13:17:58.867: INFO: 
>>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Apr 1 13:17:58.954: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-7051' Apr 1 13:17:59.065: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Apr 1 13:17:59.065: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created [AfterEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426 Apr 1 13:17:59.112: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-7051' Apr 1 13:17:59.239: INFO: stderr: "" Apr 1 13:17:59.239: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 13:17:59.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7051" for this suite. 
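The stderr above warns that `kubectl run --generator=deployment/apps.v1` is deprecated. A roughly equivalent Deployment manifest, mirroring what the logged `kubectl run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine` command generated (label and replica values assumed from `kubectl run` defaults):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: e2e-test-nginx-deployment
  labels:
    run: e2e-test-nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      run: e2e-test-nginx-deployment
  template:
    metadata:
      labels:
        run: e2e-test-nginx-deployment
    spec:
      containers:
      - name: e2e-test-nginx-deployment
        image: docker.io/library/nginx:1.14-alpine
```

This is the `kubectl create deployment`-style replacement the deprecation message points to.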
Apr 1 13:18:05.262: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 13:18:05.337: INFO: namespace kubectl-7051 deletion completed in 6.094836969s • [SLOW TEST:6.470 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 13:18:05.338: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Apr 1 13:18:05.390: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Registering the sample API server. 
Apr 1 13:18:06.129: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Apr 1 13:18:08.225: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721343886, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721343886, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721343886, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721343886, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 1 13:18:10.962: INFO: Waited 722.761605ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 13:18:11.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-8336" for this suite. 
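The aggregator test registers its sample API server with the kube-apiserver through an APIService object. A sketch of such a registration (the group, version, and service names below are illustrative, not the suite's actual values):

```yaml
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha1.wardle.example.com   # illustrative group/version
spec:
  group: wardle.example.com
  version: v1alpha1
  service:
    name: sample-api            # Service fronting the extension apiserver
    namespace: aggregator-8336
  groupPriorityMinimum: 2000
  versionPriority: 200
  # A real registration also needs either a caBundle for the serving cert
  # or insecureSkipTLSVerify: true.
```

Once the backing Deployment is Available (the condition the log polls for above), the aggregator proxies requests for that group/version to the sample server.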
Apr 1 13:18:17.648: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 13:18:17.749: INFO: namespace aggregator-8336 deletion completed in 6.347973185s • [SLOW TEST:12.412 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 13:18:17.750: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating secret secrets-708/secret-test-1613d3bf-212e-4ef1-a1f1-1039cdc94843 STEP: Creating a pod to test consume secrets Apr 1 13:18:17.832: INFO: Waiting up to 5m0s for pod "pod-configmaps-da5f7a8f-3e86-4a3e-b59c-dd658b045e9c" in namespace "secrets-708" to be "success or failure" Apr 1 13:18:17.839: INFO: Pod "pod-configmaps-da5f7a8f-3e86-4a3e-b59c-dd658b045e9c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.819608ms Apr 1 13:18:19.854: INFO: Pod "pod-configmaps-da5f7a8f-3e86-4a3e-b59c-dd658b045e9c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.021762563s Apr 1 13:18:21.858: INFO: Pod "pod-configmaps-da5f7a8f-3e86-4a3e-b59c-dd658b045e9c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025806496s STEP: Saw pod success Apr 1 13:18:21.858: INFO: Pod "pod-configmaps-da5f7a8f-3e86-4a3e-b59c-dd658b045e9c" satisfied condition "success or failure" Apr 1 13:18:21.861: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-da5f7a8f-3e86-4a3e-b59c-dd658b045e9c container env-test: STEP: delete the pod Apr 1 13:18:21.886: INFO: Waiting for pod pod-configmaps-da5f7a8f-3e86-4a3e-b59c-dd658b045e9c to disappear Apr 1 13:18:21.899: INFO: Pod pod-configmaps-da5f7a8f-3e86-4a3e-b59c-dd658b045e9c no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 13:18:21.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-708" for this suite. Apr 1 13:18:27.915: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 13:18:27.991: INFO: namespace secrets-708 deletion completed in 6.089682454s • [SLOW TEST:10.242 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 13:18:27.992: 
INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override arguments Apr 1 13:18:28.079: INFO: Waiting up to 5m0s for pod "client-containers-a024b8f5-da3a-4a9f-8174-0a0879c496ab" in namespace "containers-278" to be "success or failure" Apr 1 13:18:28.112: INFO: Pod "client-containers-a024b8f5-da3a-4a9f-8174-0a0879c496ab": Phase="Pending", Reason="", readiness=false. Elapsed: 32.687165ms Apr 1 13:18:30.116: INFO: Pod "client-containers-a024b8f5-da3a-4a9f-8174-0a0879c496ab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037004314s Apr 1 13:18:32.120: INFO: Pod "client-containers-a024b8f5-da3a-4a9f-8174-0a0879c496ab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.041347373s STEP: Saw pod success Apr 1 13:18:32.120: INFO: Pod "client-containers-a024b8f5-da3a-4a9f-8174-0a0879c496ab" satisfied condition "success or failure" Apr 1 13:18:32.124: INFO: Trying to get logs from node iruya-worker pod client-containers-a024b8f5-da3a-4a9f-8174-0a0879c496ab container test-container: STEP: delete the pod Apr 1 13:18:32.144: INFO: Waiting for pod client-containers-a024b8f5-da3a-4a9f-8174-0a0879c496ab to disappear Apr 1 13:18:32.148: INFO: Pod client-containers-a024b8f5-da3a-4a9f-8174-0a0879c496ab no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 13:18:32.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-278" for this suite. 
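The Docker Containers test above overrides the image's default arguments (its Docker CMD). In a pod spec, `args` replaces the image's CMD while `command` replaces its ENTRYPOINT; a minimal illustrative pod (image and argument values assumed, not the suite's):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-example   # illustrative name
spec:
  containers:
  - name: test-container
    image: busybox
    # args replaces the image's CMD; the image's ENTRYPOINT (if any) is kept
    args: ["echo", "override", "arguments"]
  restartPolicy: Never
```

The test then reads the container logs to confirm the overridden arguments took effect.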
Apr 1 13:18:38.163: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 13:18:38.243: INFO: namespace containers-278 deletion completed in 6.091889924s • [SLOW TEST:10.251 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 13:18:38.243: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Apr 1 13:18:38.280: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-414' Apr 1 13:18:38.392: INFO: stderr: "" Apr 1 13:18:38.393: INFO: stdout: 
"pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created Apr 1 13:18:43.443: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-414 -o json' Apr 1 13:18:43.549: INFO: stderr: "" Apr 1 13:18:43.549: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-04-01T13:18:38Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"kubectl-414\",\n \"resourceVersion\": \"3034529\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-414/pods/e2e-test-nginx-pod\",\n \"uid\": \"80e231c6-cfa1-4b01-9aa4-759d5339af12\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-c29vq\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"iruya-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-c29vq\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": 
\"default-token-c29vq\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-01T13:18:38Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-01T13:18:41Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-01T13:18:41Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-01T13:18:38Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://2ce25355ce865e3093f429df0b29e29876f3054ab0abf968804980637057c0be\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-04-01T13:18:40Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.5\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.164\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-04-01T13:18:38Z\"\n }\n}\n" STEP: replace the image in the pod Apr 1 13:18:43.549: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-414' Apr 1 13:18:43.997: INFO: stderr: "" Apr 1 13:18:43.997: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726 Apr 1 13:18:44.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-414' Apr 1 13:18:51.878: INFO: stderr: "" Apr 1 
13:18:51.878: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 13:18:51.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-414" for this suite. Apr 1 13:18:57.899: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 13:18:57.974: INFO: namespace kubectl-414 deletion completed in 6.085258524s • [SLOW TEST:19.730 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 13:18:57.974: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-ef6f1fce-ccb9-4f29-93d8-3e82649a6fbf STEP: Creating a pod to test consume secrets Apr 1 13:18:58.079: INFO: Waiting up to 5m0s for pod 
"pod-secrets-04b53a66-d06b-4ea0-88de-9e1c0b46c793" in namespace "secrets-8292" to be "success or failure" Apr 1 13:18:58.085: INFO: Pod "pod-secrets-04b53a66-d06b-4ea0-88de-9e1c0b46c793": Phase="Pending", Reason="", readiness=false. Elapsed: 5.520736ms Apr 1 13:19:00.089: INFO: Pod "pod-secrets-04b53a66-d06b-4ea0-88de-9e1c0b46c793": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009698043s Apr 1 13:19:02.093: INFO: Pod "pod-secrets-04b53a66-d06b-4ea0-88de-9e1c0b46c793": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013971368s STEP: Saw pod success Apr 1 13:19:02.093: INFO: Pod "pod-secrets-04b53a66-d06b-4ea0-88de-9e1c0b46c793" satisfied condition "success or failure" Apr 1 13:19:02.097: INFO: Trying to get logs from node iruya-worker pod pod-secrets-04b53a66-d06b-4ea0-88de-9e1c0b46c793 container secret-volume-test: STEP: delete the pod Apr 1 13:19:02.157: INFO: Waiting for pod pod-secrets-04b53a66-d06b-4ea0-88de-9e1c0b46c793 to disappear Apr 1 13:19:02.163: INFO: Pod pod-secrets-04b53a66-d06b-4ea0-88de-9e1c0b46c793 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 13:19:02.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8292" for this suite. 
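The secret-volume test above consumes a Secret through a volume with explicit key-to-path mappings, so the key appears under a remapped filename. A sketch under assumed names (secret name, key, and path are illustrative):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: secret-test-map   # illustrative
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example
spec:
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["cat", "/etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map
      items:
      - key: data-1
        path: new-path-data-1   # key "data-1" is exposed under this filename
  restartPolicy: Never
```

Without the `items` mapping, each key would be mounted under its own name; the mapping is what this conformance case exercises.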
Apr 1 13:19:08.178: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 13:19:08.249: INFO: namespace secrets-8292 deletion completed in 6.083262906s • [SLOW TEST:10.275 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 13:19:08.249: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test env composition Apr 1 13:19:08.307: INFO: Waiting up to 5m0s for pod "var-expansion-65570ee4-e8bb-4b07-bbdb-ab65ad587d4e" in namespace "var-expansion-7693" to be "success or failure" Apr 1 13:19:08.321: INFO: Pod "var-expansion-65570ee4-e8bb-4b07-bbdb-ab65ad587d4e": Phase="Pending", Reason="", readiness=false. Elapsed: 13.298476ms Apr 1 13:19:10.325: INFO: Pod "var-expansion-65570ee4-e8bb-4b07-bbdb-ab65ad587d4e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.017566282s Apr 1 13:19:12.329: INFO: Pod "var-expansion-65570ee4-e8bb-4b07-bbdb-ab65ad587d4e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021621317s STEP: Saw pod success Apr 1 13:19:12.329: INFO: Pod "var-expansion-65570ee4-e8bb-4b07-bbdb-ab65ad587d4e" satisfied condition "success or failure" Apr 1 13:19:12.332: INFO: Trying to get logs from node iruya-worker pod var-expansion-65570ee4-e8bb-4b07-bbdb-ab65ad587d4e container dapi-container: STEP: delete the pod Apr 1 13:19:12.368: INFO: Waiting for pod var-expansion-65570ee4-e8bb-4b07-bbdb-ab65ad587d4e to disappear Apr 1 13:19:12.382: INFO: Pod var-expansion-65570ee4-e8bb-4b07-bbdb-ab65ad587d4e no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 13:19:12.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7693" for this suite. Apr 1 13:19:18.398: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 13:19:18.479: INFO: namespace var-expansion-7693 deletion completed in 6.093712438s • [SLOW TEST:10.230 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes 
client Apr 1 13:19:18.480: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating cluster-info Apr 1 13:19:18.555: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Apr 1 13:19:18.676: INFO: stderr: "" Apr 1 13:19:18.676: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32769\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32769/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 13:19:18.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2842" for this suite. 
Apr 1 13:19:24.697: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 13:19:24.779: INFO: namespace kubectl-2842 deletion completed in 6.099124822s
• [SLOW TEST:6.299 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl cluster-info
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should check if Kubernetes master services is included in cluster-info [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 13:19:24.780: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod test-webserver-b2e33c20-5df6-47eb-b380-e02f198ac60d in namespace container-probe-2164
Apr 1 13:19:28.889: INFO: Started pod test-webserver-b2e33c20-5df6-47eb-b380-e02f198ac60d in namespace container-probe-2164
STEP: checking the pod's current state and verifying that restartCount is present
Apr 1 13:19:28.892: INFO: Initial restart count of pod test-webserver-b2e33c20-5df6-47eb-b380-e02f198ac60d is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 13:23:29.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2164" for this suite.
Apr 1 13:23:35.497: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 13:23:35.578: INFO: namespace container-probe-2164 deletion completed in 6.095961913s
• [SLOW TEST:250.798 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 13:23:35.578: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-0df9b913-cedb-4f72-af0b-2d7ccb584522
STEP: Creating a pod to test consume configMaps
Apr 1 13:23:35.666: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-97c2184e-bfb6-4913-81dd-6df4c4a2fc08" in namespace "projected-561" to be "success or failure"
Apr 1 13:23:35.691: INFO: Pod "pod-projected-configmaps-97c2184e-bfb6-4913-81dd-6df4c4a2fc08": Phase="Pending", Reason="", readiness=false. Elapsed: 25.119406ms
Apr 1 13:23:37.695: INFO: Pod "pod-projected-configmaps-97c2184e-bfb6-4913-81dd-6df4c4a2fc08": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029149165s
Apr 1 13:23:39.700: INFO: Pod "pod-projected-configmaps-97c2184e-bfb6-4913-81dd-6df4c4a2fc08": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033813348s
STEP: Saw pod success
Apr 1 13:23:39.700: INFO: Pod "pod-projected-configmaps-97c2184e-bfb6-4913-81dd-6df4c4a2fc08" satisfied condition "success or failure"
Apr 1 13:23:39.703: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-97c2184e-bfb6-4913-81dd-6df4c4a2fc08 container projected-configmap-volume-test:
STEP: delete the pod
Apr 1 13:23:39.736: INFO: Waiting for pod pod-projected-configmaps-97c2184e-bfb6-4913-81dd-6df4c4a2fc08 to disappear
Apr 1 13:23:39.768: INFO: Pod pod-projected-configmaps-97c2184e-bfb6-4913-81dd-6df4c4a2fc08 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 13:23:39.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-561" for this suite.
Apr 1 13:23:45.786: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 13:23:45.864: INFO: namespace projected-561 deletion completed in 6.092621357s
• [SLOW TEST:10.286 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 13:23:45.864: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-a90a23e3-86c7-4c10-bd7e-a40c6e59ad7b
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-a90a23e3-86c7-4c10-bd7e-a40c6e59ad7b
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 13:23:52.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6729" for this suite.
Apr 1 13:24:14.044: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 13:24:14.129: INFO: namespace configmap-6729 deletion completed in 22.101099716s
• [SLOW TEST:28.265 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 13:24:14.129: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Apr 1 13:24:22.255: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 1 13:24:22.277: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 1 13:24:24.277: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 1 13:24:24.281: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 1 13:24:26.277: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 1 13:24:26.302: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 1 13:24:28.277: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 1 13:24:28.281: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 1 13:24:30.277: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 1 13:24:30.282: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 1 13:24:32.277: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 1 13:24:32.281: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 1 13:24:34.277: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 1 13:24:34.302: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 1 13:24:36.277: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 1 13:24:36.282: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 1 13:24:38.277: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 1 13:24:38.282: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 1 13:24:40.277: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 1 13:24:40.286: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 13:24:40.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-7153" for this suite.
Apr 1 13:25:02.301: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 13:25:02.365: INFO: namespace container-lifecycle-hook-7153 deletion completed in 22.075698941s
• [SLOW TEST:48.236 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 13:25:02.367: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-c70174ba-66d4-4b31-8e2d-d17411513662
STEP: Creating a pod to test consume configMaps
Apr 1 13:25:02.426: INFO: Waiting up to 5m0s for pod "pod-configmaps-be64099e-a872-49cf-8737-76d2009ad972" in namespace "configmap-2038" to be "success or failure"
Apr 1 13:25:02.429: INFO: Pod "pod-configmaps-be64099e-a872-49cf-8737-76d2009ad972": Phase="Pending", Reason="", readiness=false. Elapsed: 3.556856ms
Apr 1 13:25:04.432: INFO: Pod "pod-configmaps-be64099e-a872-49cf-8737-76d2009ad972": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006619034s
Apr 1 13:25:06.435: INFO: Pod "pod-configmaps-be64099e-a872-49cf-8737-76d2009ad972": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009183815s
STEP: Saw pod success
Apr 1 13:25:06.435: INFO: Pod "pod-configmaps-be64099e-a872-49cf-8737-76d2009ad972" satisfied condition "success or failure"
Apr 1 13:25:06.437: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-be64099e-a872-49cf-8737-76d2009ad972 container configmap-volume-test:
STEP: delete the pod
Apr 1 13:25:06.469: INFO: Waiting for pod pod-configmaps-be64099e-a872-49cf-8737-76d2009ad972 to disappear
Apr 1 13:25:06.477: INFO: Pod pod-configmaps-be64099e-a872-49cf-8737-76d2009ad972 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 13:25:06.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2038" for this suite.
Apr 1 13:25:12.499: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 13:25:12.564: INFO: namespace configmap-2038 deletion completed in 6.084679072s
• [SLOW TEST:10.198 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 13:25:12.565: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Apr 1 13:25:12.634: INFO: Waiting up to 5m0s for pod "pod-1a2186ae-cce0-468b-bfaf-565b525c6abf" in namespace "emptydir-8745" to be "success or failure"
Apr 1 13:25:12.650: INFO: Pod "pod-1a2186ae-cce0-468b-bfaf-565b525c6abf": Phase="Pending", Reason="", readiness=false. Elapsed: 15.127778ms
Apr 1 13:25:14.654: INFO: Pod "pod-1a2186ae-cce0-468b-bfaf-565b525c6abf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019942593s
Apr 1 13:25:16.658: INFO: Pod "pod-1a2186ae-cce0-468b-bfaf-565b525c6abf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023726718s
Apr 1 13:25:18.662: INFO: Pod "pod-1a2186ae-cce0-468b-bfaf-565b525c6abf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.027761046s
STEP: Saw pod success
Apr 1 13:25:18.662: INFO: Pod "pod-1a2186ae-cce0-468b-bfaf-565b525c6abf" satisfied condition "success or failure"
Apr 1 13:25:18.665: INFO: Trying to get logs from node iruya-worker pod pod-1a2186ae-cce0-468b-bfaf-565b525c6abf container test-container:
STEP: delete the pod
Apr 1 13:25:18.683: INFO: Waiting for pod pod-1a2186ae-cce0-468b-bfaf-565b525c6abf to disappear
Apr 1 13:25:18.687: INFO: Pod pod-1a2186ae-cce0-468b-bfaf-565b525c6abf no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 13:25:18.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8745" for this suite.
Apr 1 13:25:24.703: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 13:25:24.780: INFO: namespace emptydir-8745 deletion completed in 6.090295327s
• [SLOW TEST:12.215 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 13:25:24.781: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47
[It] should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Apr 1 13:25:28.907: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Apr 1 13:25:44.007: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 13:25:44.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7341" for this suite.
Apr 1 13:25:50.026: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 13:25:50.118: INFO: namespace pods-7341 deletion completed in 6.102904981s
• [SLOW TEST:25.337 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
[k8s.io] Delete Grace Period
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 13:25:50.119: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-ba5cb481-c18a-41d7-8cdb-6badb142e39f
STEP: Creating configMap with name cm-test-opt-upd-a9b19b8c-56a5-4415-a535-6170afa78586
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-ba5cb481-c18a-41d7-8cdb-6badb142e39f
STEP: Updating configmap cm-test-opt-upd-a9b19b8c-56a5-4415-a535-6170afa78586
STEP: Creating configMap with name cm-test-opt-create-d66eed5d-9ba0-4fc4-82e0-2cdc8b5008d3
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 13:27:00.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6033" for this suite.
Apr 1 13:27:22.619: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 13:27:22.704: INFO: namespace projected-6033 deletion completed in 22.109152437s
• [SLOW TEST:92.585 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 13:27:22.704: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Apr 1 13:27:32.822: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8325 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 1 13:27:32.822: INFO: >>> kubeConfig: /root/.kube/config
I0401 13:27:32.858549 6 log.go:172] (0xc000746a50) (0xc001d43680) Create stream
I0401 13:27:32.858582 6 log.go:172] (0xc000746a50) (0xc001d43680) Stream added, broadcasting: 1
I0401 13:27:32.861100 6 log.go:172] (0xc000746a50) Reply frame received for 1
I0401 13:27:32.861297 6 log.go:172] (0xc000746a50) (0xc001813b80) Create stream
I0401 13:27:32.861321 6 log.go:172] (0xc000746a50) (0xc001813b80) Stream added, broadcasting: 3
I0401 13:27:32.862400 6 log.go:172] (0xc000746a50) Reply frame received for 3
I0401 13:27:32.862447 6 log.go:172] (0xc000746a50) (0xc0030e8280) Create stream
I0401 13:27:32.862472 6 log.go:172] (0xc000746a50) (0xc0030e8280) Stream added, broadcasting: 5
I0401 13:27:32.863637 6 log.go:172] (0xc000746a50) Reply frame received for 5
I0401 13:27:32.949567 6 log.go:172] (0xc000746a50) Data frame received for 3
I0401 13:27:32.949624 6 log.go:172] (0xc001813b80) (3) Data frame handling
I0401 13:27:32.949643 6 log.go:172] (0xc001813b80) (3) Data frame sent
I0401 13:27:32.949658 6 log.go:172] (0xc000746a50) Data frame received for 3
I0401 13:27:32.949670 6 log.go:172] (0xc001813b80) (3) Data frame handling
I0401 13:27:32.949699 6 log.go:172] (0xc000746a50) Data frame received for 5
I0401 13:27:32.949712 6 log.go:172] (0xc0030e8280) (5) Data frame handling
I0401 13:27:32.951246 6 log.go:172] (0xc000746a50) Data frame received for 1
I0401 13:27:32.951275 6 log.go:172] (0xc001d43680) (1) Data frame handling
I0401 13:27:32.951296 6 log.go:172] (0xc001d43680) (1) Data frame sent
I0401 13:27:32.951319 6 log.go:172] (0xc000746a50) (0xc001d43680) Stream removed, broadcasting: 1
I0401 13:27:32.951335 6 log.go:172] (0xc000746a50) Go away received
I0401 13:27:32.951506 6 log.go:172] (0xc000746a50) (0xc001d43680) Stream removed, broadcasting: 1
I0401 13:27:32.951532 6 log.go:172] (0xc000746a50) (0xc001813b80) Stream removed, broadcasting: 3
I0401 13:27:32.951544 6 log.go:172] (0xc000746a50) (0xc0030e8280) Stream removed, broadcasting: 5
Apr 1 13:27:32.951: INFO: Exec stderr: ""
Apr 1 13:27:32.951: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8325 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 1 13:27:32.951: INFO: >>> kubeConfig: /root/.kube/config
I0401 13:27:32.987239 6 log.go:172] (0xc0010edce0) (0xc001eaf400) Create stream
I0401 13:27:32.987270 6 log.go:172] (0xc0010edce0) (0xc001eaf400) Stream added, broadcasting: 1
I0401 13:27:32.989578 6 log.go:172] (0xc0010edce0) Reply frame received for 1
I0401 13:27:32.989613 6 log.go:172] (0xc0010edce0) (0xc00100a960) Create stream
I0401 13:27:32.989626 6 log.go:172] (0xc0010edce0) (0xc00100a960) Stream added, broadcasting: 3
I0401 13:27:32.990497 6 log.go:172] (0xc0010edce0) Reply frame received for 3
I0401 13:27:32.990535 6 log.go:172] (0xc0010edce0) (0xc00100aa00) Create stream
I0401 13:27:32.990548 6 log.go:172] (0xc0010edce0) (0xc00100aa00) Stream added, broadcasting: 5
I0401 13:27:32.991491 6 log.go:172] (0xc0010edce0) Reply frame received for 5
I0401 13:27:33.065457 6 log.go:172] (0xc0010edce0) Data frame received for 5
I0401 13:27:33.065503 6 log.go:172] (0xc00100aa00) (5) Data frame handling
I0401 13:27:33.065538 6 log.go:172] (0xc0010edce0) Data frame received for 3
I0401 13:27:33.065558 6 log.go:172] (0xc00100a960) (3) Data frame handling
I0401 13:27:33.065579 6 log.go:172] (0xc00100a960) (3) Data frame sent
I0401 13:27:33.065593 6 log.go:172] (0xc0010edce0) Data frame received for 3
I0401 13:27:33.065605 6 log.go:172] (0xc00100a960) (3) Data frame handling
I0401 13:27:33.067093 6 log.go:172] (0xc0010edce0) Data frame received for 1
I0401 13:27:33.067105 6 log.go:172] (0xc001eaf400) (1) Data frame handling
I0401 13:27:33.067128 6 log.go:172] (0xc001eaf400) (1) Data frame sent
I0401 13:27:33.067147 6 log.go:172] (0xc0010edce0) (0xc001eaf400) Stream removed, broadcasting: 1
I0401 13:27:33.067218 6 log.go:172] (0xc0010edce0) (0xc001eaf400) Stream removed, broadcasting: 1
I0401 13:27:33.067253 6 log.go:172] (0xc0010edce0) (0xc00100a960) Stream removed, broadcasting: 3
I0401 13:27:33.067369 6 log.go:172] (0xc0010edce0) (0xc00100aa00) Stream removed, broadcasting: 5
Apr 1 13:27:33.067: INFO: Exec stderr: ""
Apr 1 13:27:33.067: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8325 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 1 13:27:33.067: INFO: >>> kubeConfig: /root/.kube/config
I0401 13:27:33.095728 6 log.go:172] (0xc001afad10) (0xc001eaf720) Create stream
I0401 13:27:33.095758 6 log.go:172] (0xc001afad10) (0xc001eaf720) Stream added, broadcasting: 1
I0401 13:27:33.104547 6 log.go:172] (0xc001afad10) Reply frame received for 1
I0401 13:27:33.104606 6 log.go:172] (0xc001afad10) (0xc00100abe0) Create stream
I0401 13:27:33.104625 6 log.go:172] (0xc001afad10) (0xc00100abe0) Stream added, broadcasting: 3
I0401 13:27:33.105944 6 log.go:172] (0xc001afad10) Reply frame received for 3
I0401 13:27:33.105979 6 log.go:172] (0xc001afad10) (0xc001eaf7c0) Create stream
I0401 13:27:33.105990 6 log.go:172] (0xc001afad10) (0xc001eaf7c0) Stream added, broadcasting: 5
I0401 13:27:33.107116 6 log.go:172] (0xc001afad10) Reply frame received for 5
I0401 13:27:33.167250 6 log.go:172] (0xc001afad10) Data frame received for 3
I0401 13:27:33.167317 6 log.go:172] (0xc00100abe0) (3) Data frame handling
I0401 13:27:33.167348 6 log.go:172] (0xc00100abe0) (3) Data frame sent
I0401 13:27:33.167368 6 log.go:172] (0xc001afad10) Data frame received for 3
I0401 13:27:33.167389 6 log.go:172] (0xc00100abe0) (3) Data frame handling
I0401 13:27:33.167417 6 log.go:172] (0xc001afad10) Data frame received for 5
I0401 13:27:33.167462 6 log.go:172] (0xc001eaf7c0) (5) Data frame handling
I0401 13:27:33.169445 6 log.go:172] (0xc001afad10) Data frame received for 1
I0401 13:27:33.169470 6 log.go:172] (0xc001eaf720) (1) Data frame handling
I0401 13:27:33.169486 6 log.go:172] (0xc001eaf720) (1) Data frame sent
I0401 13:27:33.169520 6 log.go:172] (0xc001afad10) (0xc001eaf720) Stream removed, broadcasting: 1
I0401 13:27:33.169598 6 log.go:172] (0xc001afad10) (0xc001eaf720) Stream removed, broadcasting: 1
I0401 13:27:33.169611 6 log.go:172] (0xc001afad10) (0xc00100abe0) Stream removed, broadcasting: 3
I0401 13:27:33.169622 6 log.go:172] (0xc001afad10) (0xc001eaf7c0) Stream removed, broadcasting: 5
Apr 1 13:27:33.169: INFO: Exec stderr: ""
Apr 1 13:27:33.169: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8325 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 1 13:27:33.169: INFO: >>> kubeConfig: /root/.kube/config
I0401 13:27:33.169715 6 log.go:172] (0xc001afad10) Go away received
I0401 13:27:33.200468 6 log.go:172] (0xc001afbb80) (0xc001eafae0) Create stream
I0401 13:27:33.200496 6 log.go:172] (0xc001afbb80) (0xc001eafae0) Stream added, broadcasting: 1
I0401 13:27:33.202774 6 log.go:172] (0xc001afbb80) Reply frame received for 1
I0401 13:27:33.202812 6 log.go:172] (0xc001afbb80) (0xc001d43720) Create stream
I0401 13:27:33.202822 6 log.go:172] (0xc001afbb80) (0xc001d43720) Stream added, broadcasting: 3
I0401 13:27:33.203804 6 log.go:172] (0xc001afbb80) Reply frame received for 3
I0401 13:27:33.203832 6 log.go:172] (0xc001afbb80) (0xc001d437c0) Create stream
I0401 13:27:33.203839 6 log.go:172] (0xc001afbb80) (0xc001d437c0) Stream added, broadcasting: 5
I0401 13:27:33.204872 6 log.go:172] (0xc001afbb80) Reply frame received for 5
I0401 13:27:33.257458 6 log.go:172] (0xc001afbb80) Data frame received for 3
I0401 13:27:33.257482 6 log.go:172] (0xc001d43720) (3) Data frame handling
I0401 13:27:33.257494 6 log.go:172] (0xc001d43720) (3) Data frame sent
I0401 13:27:33.257499 6 log.go:172] (0xc001afbb80) Data frame received for 3
I0401 13:27:33.257502 6 log.go:172] (0xc001d43720) (3) Data frame handling
I0401 13:27:33.257765 6 log.go:172] (0xc001afbb80) Data frame received for 5
I0401 13:27:33.257779 6 log.go:172] (0xc001d437c0) (5) Data frame handling
I0401 13:27:33.259110 6 log.go:172] (0xc001afbb80) Data frame received for 1
I0401 13:27:33.259153 6 log.go:172] (0xc001eafae0) (1) Data frame handling
I0401 13:27:33.259178 6 log.go:172] (0xc001eafae0) (1) Data frame sent
I0401 13:27:33.259201 6 log.go:172] (0xc001afbb80) (0xc001eafae0) Stream removed, broadcasting: 1
I0401 13:27:33.259326 6 log.go:172] (0xc001afbb80) (0xc001eafae0) Stream removed, broadcasting: 1
I0401 13:27:33.259357 6 log.go:172] (0xc001afbb80) (0xc001d43720) Stream removed, broadcasting: 3
I0401 13:27:33.259376 6 log.go:172] (0xc001afbb80) (0xc001d437c0) Stream removed, broadcasting: 5
Apr 1 13:27:33.259: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
I0401 13:27:33.259430 6 log.go:172] (0xc001afbb80) Go away received
Apr 1 13:27:33.259: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8325 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 1 13:27:33.259: INFO: >>> kubeConfig: /root/.kube/config
I0401 13:27:33.292917 6 log.go:172] (0xc000747d90) (0xc001d43ae0) Create stream
I0401 13:27:33.292940 6 log.go:172] (0xc000747d90) (0xc001d43ae0) Stream added, broadcasting: 1
I0401 13:27:33.295288 6 log.go:172] (0xc000747d90) Reply frame received for 1
I0401 13:27:33.295323 6 log.go:172] (0xc000747d90) (0xc0030e8320) Create stream
I0401 13:27:33.295335 6 log.go:172] (0xc000747d90) (0xc0030e8320) Stream added, broadcasting: 3
I0401 13:27:33.296392 6 log.go:172] (0xc000747d90) Reply frame received for 3
I0401 13:27:33.296449 6 log.go:172] (0xc000747d90) (0xc001813cc0) Create stream
I0401 13:27:33.296471 6 log.go:172] (0xc000747d90) (0xc001813cc0) Stream added, broadcasting: 5
I0401 13:27:33.297677 6 log.go:172] (0xc000747d90) Reply frame received for 5
I0401 13:27:33.362798 6 log.go:172] (0xc000747d90) Data frame received for 3
I0401 13:27:33.362824 6 log.go:172] (0xc0030e8320) (3) Data frame handling
I0401 13:27:33.362832 6 log.go:172] (0xc0030e8320) (3) Data frame sent
I0401 13:27:33.362837 6 log.go:172] (0xc000747d90) Data frame received for 3
I0401 13:27:33.362841 6 log.go:172] (0xc0030e8320) (3) Data frame handling
I0401 13:27:33.362890 6 log.go:172] (0xc000747d90) Data frame received for 5
I0401 13:27:33.362948 6 log.go:172] (0xc001813cc0) (5) Data frame handling
I0401 13:27:33.364373 6 log.go:172] (0xc000747d90) Data frame received for 1
I0401 13:27:33.364398 6 log.go:172] (0xc001d43ae0) (1) Data frame handling
I0401 13:27:33.364414 6 log.go:172] (0xc001d43ae0) (1) Data frame sent
I0401 13:27:33.364436 6 log.go:172] (0xc000747d90) (0xc001d43ae0) Stream removed, broadcasting: 1
I0401 13:27:33.364453 6 log.go:172] (0xc000747d90) Go away received
I0401 13:27:33.364609 6 log.go:172] (0xc000747d90) (0xc001d43ae0) Stream removed, broadcasting: 1
I0401 13:27:33.364651 6 log.go:172] (0xc000747d90) (0xc0030e8320) Stream removed, broadcasting: 3
I0401 13:27:33.364672 6 log.go:172] (0xc000747d90) (0xc001813cc0) Stream removed, broadcasting: 5
Apr 1 13:27:33.364: INFO: Exec stderr: ""
Apr 1 13:27:33.364: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8325 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 1 13:27:33.364:
INFO: >>> kubeConfig: /root/.kube/config I0401 13:27:33.401366 6 log.go:172] (0xc0010153f0) (0xc0030e8640) Create stream I0401 13:27:33.401392 6 log.go:172] (0xc0010153f0) (0xc0030e8640) Stream added, broadcasting: 1 I0401 13:27:33.404622 6 log.go:172] (0xc0010153f0) Reply frame received for 1 I0401 13:27:33.404660 6 log.go:172] (0xc0010153f0) (0xc0030e86e0) Create stream I0401 13:27:33.404684 6 log.go:172] (0xc0010153f0) (0xc0030e86e0) Stream added, broadcasting: 3 I0401 13:27:33.405804 6 log.go:172] (0xc0010153f0) Reply frame received for 3 I0401 13:27:33.405831 6 log.go:172] (0xc0010153f0) (0xc0030e8780) Create stream I0401 13:27:33.405838 6 log.go:172] (0xc0010153f0) (0xc0030e8780) Stream added, broadcasting: 5 I0401 13:27:33.406621 6 log.go:172] (0xc0010153f0) Reply frame received for 5 I0401 13:27:33.472561 6 log.go:172] (0xc0010153f0) Data frame received for 5 I0401 13:27:33.472594 6 log.go:172] (0xc0030e8780) (5) Data frame handling I0401 13:27:33.472619 6 log.go:172] (0xc0010153f0) Data frame received for 3 I0401 13:27:33.472661 6 log.go:172] (0xc0030e86e0) (3) Data frame handling I0401 13:27:33.472694 6 log.go:172] (0xc0030e86e0) (3) Data frame sent I0401 13:27:33.472712 6 log.go:172] (0xc0010153f0) Data frame received for 3 I0401 13:27:33.472727 6 log.go:172] (0xc0030e86e0) (3) Data frame handling I0401 13:27:33.474599 6 log.go:172] (0xc0010153f0) Data frame received for 1 I0401 13:27:33.474626 6 log.go:172] (0xc0030e8640) (1) Data frame handling I0401 13:27:33.474643 6 log.go:172] (0xc0030e8640) (1) Data frame sent I0401 13:27:33.474655 6 log.go:172] (0xc0010153f0) (0xc0030e8640) Stream removed, broadcasting: 1 I0401 13:27:33.474671 6 log.go:172] (0xc0010153f0) Go away received I0401 13:27:33.474827 6 log.go:172] (0xc0010153f0) (0xc0030e8640) Stream removed, broadcasting: 1 I0401 13:27:33.474846 6 log.go:172] (0xc0010153f0) (0xc0030e86e0) Stream removed, broadcasting: 3 I0401 13:27:33.474856 6 log.go:172] (0xc0010153f0) (0xc0030e8780) Stream removed, 
broadcasting: 5 Apr 1 13:27:33.474: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Apr 1 13:27:33.474: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8325 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 1 13:27:33.474: INFO: >>> kubeConfig: /root/.kube/config I0401 13:27:33.510026 6 log.go:172] (0xc001d54840) (0xc001d43e00) Create stream I0401 13:27:33.510058 6 log.go:172] (0xc001d54840) (0xc001d43e00) Stream added, broadcasting: 1 I0401 13:27:33.512604 6 log.go:172] (0xc001d54840) Reply frame received for 1 I0401 13:27:33.512644 6 log.go:172] (0xc001d54840) (0xc001813d60) Create stream I0401 13:27:33.512652 6 log.go:172] (0xc001d54840) (0xc001813d60) Stream added, broadcasting: 3 I0401 13:27:33.513821 6 log.go:172] (0xc001d54840) Reply frame received for 3 I0401 13:27:33.513887 6 log.go:172] (0xc001d54840) (0xc00100adc0) Create stream I0401 13:27:33.513905 6 log.go:172] (0xc001d54840) (0xc00100adc0) Stream added, broadcasting: 5 I0401 13:27:33.514854 6 log.go:172] (0xc001d54840) Reply frame received for 5 I0401 13:27:33.580586 6 log.go:172] (0xc001d54840) Data frame received for 3 I0401 13:27:33.580629 6 log.go:172] (0xc001813d60) (3) Data frame handling I0401 13:27:33.580643 6 log.go:172] (0xc001813d60) (3) Data frame sent I0401 13:27:33.580665 6 log.go:172] (0xc001d54840) Data frame received for 3 I0401 13:27:33.580685 6 log.go:172] (0xc001813d60) (3) Data frame handling I0401 13:27:33.580736 6 log.go:172] (0xc001d54840) Data frame received for 5 I0401 13:27:33.580783 6 log.go:172] (0xc00100adc0) (5) Data frame handling I0401 13:27:33.582204 6 log.go:172] (0xc001d54840) Data frame received for 1 I0401 13:27:33.582244 6 log.go:172] (0xc001d43e00) (1) Data frame handling I0401 13:27:33.582277 6 log.go:172] (0xc001d43e00) (1) Data frame sent I0401 13:27:33.582311 6 log.go:172] 
(0xc001d54840) (0xc001d43e00) Stream removed, broadcasting: 1 I0401 13:27:33.582338 6 log.go:172] (0xc001d54840) Go away received I0401 13:27:33.582435 6 log.go:172] (0xc001d54840) (0xc001d43e00) Stream removed, broadcasting: 1 I0401 13:27:33.582471 6 log.go:172] (0xc001d54840) (0xc001813d60) Stream removed, broadcasting: 3 I0401 13:27:33.582490 6 log.go:172] (0xc001d54840) (0xc00100adc0) Stream removed, broadcasting: 5 Apr 1 13:27:33.582: INFO: Exec stderr: "" Apr 1 13:27:33.582: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8325 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 1 13:27:33.582: INFO: >>> kubeConfig: /root/.kube/config I0401 13:27:33.617860 6 log.go:172] (0xc0029e3e40) (0xc00155e280) Create stream I0401 13:27:33.617901 6 log.go:172] (0xc0029e3e40) (0xc00155e280) Stream added, broadcasting: 1 I0401 13:27:33.620533 6 log.go:172] (0xc0029e3e40) Reply frame received for 1 I0401 13:27:33.620589 6 log.go:172] (0xc0029e3e40) (0xc0030e8820) Create stream I0401 13:27:33.620614 6 log.go:172] (0xc0029e3e40) (0xc0030e8820) Stream added, broadcasting: 3 I0401 13:27:33.621714 6 log.go:172] (0xc0029e3e40) Reply frame received for 3 I0401 13:27:33.621749 6 log.go:172] (0xc0029e3e40) (0xc00100ae60) Create stream I0401 13:27:33.621761 6 log.go:172] (0xc0029e3e40) (0xc00100ae60) Stream added, broadcasting: 5 I0401 13:27:33.622695 6 log.go:172] (0xc0029e3e40) Reply frame received for 5 I0401 13:27:33.692813 6 log.go:172] (0xc0029e3e40) Data frame received for 5 I0401 13:27:33.692873 6 log.go:172] (0xc00100ae60) (5) Data frame handling I0401 13:27:33.692920 6 log.go:172] (0xc0029e3e40) Data frame received for 3 I0401 13:27:33.692951 6 log.go:172] (0xc0030e8820) (3) Data frame handling I0401 13:27:33.692982 6 log.go:172] (0xc0030e8820) (3) Data frame sent I0401 13:27:33.693001 6 log.go:172] (0xc0029e3e40) Data frame received for 3 I0401 
13:27:33.693013 6 log.go:172] (0xc0030e8820) (3) Data frame handling I0401 13:27:33.694689 6 log.go:172] (0xc0029e3e40) Data frame received for 1 I0401 13:27:33.694716 6 log.go:172] (0xc00155e280) (1) Data frame handling I0401 13:27:33.694742 6 log.go:172] (0xc00155e280) (1) Data frame sent I0401 13:27:33.694757 6 log.go:172] (0xc0029e3e40) (0xc00155e280) Stream removed, broadcasting: 1 I0401 13:27:33.694865 6 log.go:172] (0xc0029e3e40) (0xc00155e280) Stream removed, broadcasting: 1 I0401 13:27:33.694876 6 log.go:172] (0xc0029e3e40) (0xc0030e8820) Stream removed, broadcasting: 3 I0401 13:27:33.694937 6 log.go:172] (0xc0029e3e40) Go away received I0401 13:27:33.694990 6 log.go:172] (0xc0029e3e40) (0xc00100ae60) Stream removed, broadcasting: 5 Apr 1 13:27:33.695: INFO: Exec stderr: "" Apr 1 13:27:33.695: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8325 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 1 13:27:33.695: INFO: >>> kubeConfig: /root/.kube/config I0401 13:27:33.727110 6 log.go:172] (0xc0022b88f0) (0xc0030e8b40) Create stream I0401 13:27:33.727132 6 log.go:172] (0xc0022b88f0) (0xc0030e8b40) Stream added, broadcasting: 1 I0401 13:27:33.729342 6 log.go:172] (0xc0022b88f0) Reply frame received for 1 I0401 13:27:33.729399 6 log.go:172] (0xc0022b88f0) (0xc00100b040) Create stream I0401 13:27:33.729421 6 log.go:172] (0xc0022b88f0) (0xc00100b040) Stream added, broadcasting: 3 I0401 13:27:33.735050 6 log.go:172] (0xc0022b88f0) Reply frame received for 3 I0401 13:27:33.735117 6 log.go:172] (0xc0022b88f0) (0xc001d43ea0) Create stream I0401 13:27:33.735136 6 log.go:172] (0xc0022b88f0) (0xc001d43ea0) Stream added, broadcasting: 5 I0401 13:27:33.736107 6 log.go:172] (0xc0022b88f0) Reply frame received for 5 I0401 13:27:33.796426 6 log.go:172] (0xc0022b88f0) Data frame received for 5 I0401 13:27:33.796453 6 log.go:172] (0xc001d43ea0) (5) Data frame handling 
I0401 13:27:33.796482 6 log.go:172] (0xc0022b88f0) Data frame received for 3 I0401 13:27:33.796492 6 log.go:172] (0xc00100b040) (3) Data frame handling I0401 13:27:33.796501 6 log.go:172] (0xc00100b040) (3) Data frame sent I0401 13:27:33.796510 6 log.go:172] (0xc0022b88f0) Data frame received for 3 I0401 13:27:33.796516 6 log.go:172] (0xc00100b040) (3) Data frame handling I0401 13:27:33.798062 6 log.go:172] (0xc0022b88f0) Data frame received for 1 I0401 13:27:33.798095 6 log.go:172] (0xc0030e8b40) (1) Data frame handling I0401 13:27:33.798119 6 log.go:172] (0xc0030e8b40) (1) Data frame sent I0401 13:27:33.798139 6 log.go:172] (0xc0022b88f0) (0xc0030e8b40) Stream removed, broadcasting: 1 I0401 13:27:33.798157 6 log.go:172] (0xc0022b88f0) Go away received I0401 13:27:33.798241 6 log.go:172] (0xc0022b88f0) (0xc0030e8b40) Stream removed, broadcasting: 1 I0401 13:27:33.798257 6 log.go:172] (0xc0022b88f0) (0xc00100b040) Stream removed, broadcasting: 3 I0401 13:27:33.798264 6 log.go:172] (0xc0022b88f0) (0xc001d43ea0) Stream removed, broadcasting: 5 Apr 1 13:27:33.798: INFO: Exec stderr: "" Apr 1 13:27:33.798: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8325 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 1 13:27:33.798: INFO: >>> kubeConfig: /root/.kube/config I0401 13:27:33.825775 6 log.go:172] (0xc0022b9130) (0xc0030e8e60) Create stream I0401 13:27:33.825799 6 log.go:172] (0xc0022b9130) (0xc0030e8e60) Stream added, broadcasting: 1 I0401 13:27:33.828314 6 log.go:172] (0xc0022b9130) Reply frame received for 1 I0401 13:27:33.828370 6 log.go:172] (0xc0022b9130) (0xc001d43f40) Create stream I0401 13:27:33.828389 6 log.go:172] (0xc0022b9130) (0xc001d43f40) Stream added, broadcasting: 3 I0401 13:27:33.829512 6 log.go:172] (0xc0022b9130) Reply frame received for 3 I0401 13:27:33.829549 6 log.go:172] (0xc0022b9130) (0xc0009280a0) Create stream I0401 
13:27:33.829564 6 log.go:172] (0xc0022b9130) (0xc0009280a0) Stream added, broadcasting: 5 I0401 13:27:33.830516 6 log.go:172] (0xc0022b9130) Reply frame received for 5 I0401 13:27:33.905947 6 log.go:172] (0xc0022b9130) Data frame received for 5 I0401 13:27:33.905986 6 log.go:172] (0xc0009280a0) (5) Data frame handling I0401 13:27:33.906005 6 log.go:172] (0xc0022b9130) Data frame received for 3 I0401 13:27:33.906011 6 log.go:172] (0xc001d43f40) (3) Data frame handling I0401 13:27:33.906019 6 log.go:172] (0xc001d43f40) (3) Data frame sent I0401 13:27:33.906035 6 log.go:172] (0xc0022b9130) Data frame received for 3 I0401 13:27:33.906043 6 log.go:172] (0xc001d43f40) (3) Data frame handling I0401 13:27:33.907625 6 log.go:172] (0xc0022b9130) Data frame received for 1 I0401 13:27:33.907676 6 log.go:172] (0xc0030e8e60) (1) Data frame handling I0401 13:27:33.907697 6 log.go:172] (0xc0030e8e60) (1) Data frame sent I0401 13:27:33.907716 6 log.go:172] (0xc0022b9130) (0xc0030e8e60) Stream removed, broadcasting: 1 I0401 13:27:33.907794 6 log.go:172] (0xc0022b9130) (0xc0030e8e60) Stream removed, broadcasting: 1 I0401 13:27:33.907813 6 log.go:172] (0xc0022b9130) (0xc001d43f40) Stream removed, broadcasting: 3 I0401 13:27:33.907820 6 log.go:172] (0xc0022b9130) (0xc0009280a0) Stream removed, broadcasting: 5 Apr 1 13:27:33.907: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 13:27:33.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready I0401 13:27:33.907908 6 log.go:172] (0xc0022b9130) Go away received STEP: Destroying namespace "e2e-kubelet-etc-hosts-8325" for this suite. 
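The exec checks above read /etc/hosts and /etc/hosts-original from containers of a regular pod and of a pod running with hostNetwork=true; only containers of the regular pod that do not mount their own /etc/hosts get the kubelet-managed file. The manifests are built programmatically by the e2e framework and never appear in the log; a minimal sketch of the hostNetwork variant, with the container images and commands assumed, might look like:

```yaml
# Hypothetical sketch; the real test pod is constructed in Go by the e2e framework.
apiVersion: v1
kind: Pod
metadata:
  name: test-host-network-pod
  namespace: e2e-kubelet-etc-hosts-8325
spec:
  hostNetwork: true          # containers share the node's network namespace,
                             # so the kubelet leaves /etc/hosts unmanaged
  containers:
  - name: busybox-1
    image: busybox
    command: ["sleep", "3600"]
  - name: busybox-2
    image: busybox
    command: ["sleep", "3600"]
```

In the non-hostNetwork pod, a container that mounts a volume at /etc/hosts (as busybox-3 does here) likewise opts out of kubelet management, which is exactly what the "container specifies /etc/hosts mount" verification step asserts.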
Apr 1 13:28:23.938: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 13:28:24.015: INFO: namespace e2e-kubelet-etc-hosts-8325 deletion completed in 50.103394216s
• [SLOW TEST:61.311 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 13:28:24.015: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-6512/configmap-test-7adcaaa5-b04c-4abb-86f2-9ccc0ec0ce3e
STEP: Creating a pod to test consume configMaps
Apr 1 13:28:24.116: INFO: Waiting up to 5m0s for pod "pod-configmaps-ef15f5f5-8418-4996-bc8c-9fd4ecf58078" in namespace "configmap-6512" to be "success or failure"
Apr 1 13:28:24.127: INFO: Pod "pod-configmaps-ef15f5f5-8418-4996-bc8c-9fd4ecf58078": Phase="Pending", Reason="", readiness=false. Elapsed: 10.654738ms
Apr 1 13:28:26.131: INFO: Pod "pod-configmaps-ef15f5f5-8418-4996-bc8c-9fd4ecf58078": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01490686s
Apr 1 13:28:28.135: INFO: Pod "pod-configmaps-ef15f5f5-8418-4996-bc8c-9fd4ecf58078": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018905427s
STEP: Saw pod success
Apr 1 13:28:28.135: INFO: Pod "pod-configmaps-ef15f5f5-8418-4996-bc8c-9fd4ecf58078" satisfied condition "success or failure"
Apr 1 13:28:28.138: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-ef15f5f5-8418-4996-bc8c-9fd4ecf58078 container env-test:
STEP: delete the pod
Apr 1 13:28:28.174: INFO: Waiting for pod pod-configmaps-ef15f5f5-8418-4996-bc8c-9fd4ecf58078 to disappear
Apr 1 13:28:28.187: INFO: Pod pod-configmaps-ef15f5f5-8418-4996-bc8c-9fd4ecf58078 no longer exists
[AfterEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 13:28:28.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6512" for this suite.
Apr 1 13:28:34.202: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 13:28:34.279: INFO: namespace configmap-6512 deletion completed in 6.088505583s
• [SLOW TEST:10.264 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 13:28:34.280: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 1 13:28:34.352: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 13:28:35.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-7276" for this suite.
Apr 1 13:28:41.500: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 13:28:41.584: INFO: namespace custom-resource-definition-7276 deletion completed in 6.102525006s
• [SLOW TEST:7.304 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
Simple CustomResourceDefinition
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
creating/deleting custom resource definition objects works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 13:28:41.584: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-c064aa53-b115-4a05-bbfc-37d1dd3afe2a
STEP: Creating a pod to test consume secrets
Apr 1 13:28:41.651: INFO: Waiting up to 5m0s for pod "pod-secrets-bab24b56-c939-4b18-8881-faf3085e86bf" in namespace "secrets-2872" to be "success or failure"
Apr 1 13:28:41.655: INFO: Pod "pod-secrets-bab24b56-c939-4b18-8881-faf3085e86bf": Phase="Pending", Reason="", readiness=false. Elapsed: 3.230875ms
Apr 1 13:28:43.659: INFO: Pod "pod-secrets-bab24b56-c939-4b18-8881-faf3085e86bf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007487808s
Apr 1 13:28:45.662: INFO: Pod "pod-secrets-bab24b56-c939-4b18-8881-faf3085e86bf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010849615s
STEP: Saw pod success
Apr 1 13:28:45.662: INFO: Pod "pod-secrets-bab24b56-c939-4b18-8881-faf3085e86bf" satisfied condition "success or failure"
Apr 1 13:28:45.665: INFO: Trying to get logs from node iruya-worker pod pod-secrets-bab24b56-c939-4b18-8881-faf3085e86bf container secret-env-test:
STEP: delete the pod
Apr 1 13:28:45.737: INFO: Waiting for pod pod-secrets-bab24b56-c939-4b18-8881-faf3085e86bf to disappear
Apr 1 13:28:45.750: INFO: Pod pod-secrets-bab24b56-c939-4b18-8881-faf3085e86bf no longer exists
[AfterEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 13:28:45.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2872" for this suite.
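The ConfigMap and Secret env-var tests above follow the same pattern: create the object, run a short-lived pod whose container prints its environment, and assert the pod reaches Succeeded. The pod specs are generated by the framework; a minimal sketch of the Secret variant, with the env-var name and secret key assumed (only the secret and pod names come from the log), might look like:

```yaml
# Hypothetical sketch; SECRET_DATA and the key "data-1" are assumptions,
# not taken from the log.
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-bab24b56-c939-4b18-8881-faf3085e86bf
  namespace: secrets-2872
spec:
  restartPolicy: Never       # run once, then report "success or failure"
  containers:
  - name: secret-env-test
    image: busybox
    command: ["sh", "-c", "env"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-test-c064aa53-b115-4a05-bbfc-37d1dd3afe2a
          key: data-1
```

The ConfigMap case is identical in shape, with `configMapKeyRef` in place of `secretKeyRef`.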
Apr 1 13:28:51.765: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 13:28:51.860: INFO: namespace secrets-2872 deletion completed in 6.105972741s
• [SLOW TEST:10.276 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
should be consumable from pods in env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 13:28:51.861: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 13:28:55.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-8700" for this suite.
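The read-only-root test above schedules a busybox container with a read-only root filesystem and checks that a write to it fails. The log shows no manifest; the relevant mechanism is the container-level securityContext, which a sketch (names and command assumed) could express as:

```yaml
# Hypothetical sketch of the read-only-root pattern the test exercises.
apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-example
  namespace: kubelet-test-8700
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    # The write is expected to fail: the root filesystem is mounted read-only.
    command: ["sh", "-c", "echo test > /file"]
    securityContext:
      readOnlyRootFilesystem: true
```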
Apr 1 13:29:45.969: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 13:29:46.057: INFO: namespace kubelet-test-8700 deletion completed in 50.09896067s
• [SLOW TEST:54.196 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
when scheduling a read only busybox container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 13:29:46.057: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Apr 1 13:29:46.216: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Apr 1 13:29:46.252: INFO: Waiting for terminating namespaces to be deleted...
Apr 1 13:29:46.255: INFO: Logging pods the kubelet thinks is on node iruya-worker before test
Apr 1 13:29:46.263: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded)
Apr 1 13:29:46.263: INFO: Container kube-proxy ready: true, restart count 0
Apr 1 13:29:46.263: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded)
Apr 1 13:29:46.263: INFO: Container kindnet-cni ready: true, restart count 0
Apr 1 13:29:46.263: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test
Apr 1 13:29:46.269: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded)
Apr 1 13:29:46.269: INFO: Container coredns ready: true, restart count 0
Apr 1 13:29:46.269: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded)
Apr 1 13:29:46.269: INFO: Container coredns ready: true, restart count 0
Apr 1 13:29:46.269: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded)
Apr 1 13:29:46.269: INFO: Container kube-proxy ready: true, restart count 0
Apr 1 13:29:46.269: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded)
Apr 1 13:29:46.269: INFO: Container kindnet-cni ready: true, restart count 0
[It] validates that NodeSelector is respected if matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-20cfa00e-2aa6-4116-b5d4-77bea18f3aa4 42
STEP: Trying to relaunch the pod, now with labels.
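The steps above label a node with a random key and value (42) and then relaunch the pod with a matching nodeSelector, so the scheduler must place it on that node. A sketch of the relaunched pod, with the pod name and image assumed (only the label key/value are from the log), might look like:

```yaml
# Hypothetical sketch; the label key and value "42" are from the log,
# the pod name and image are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: with-labels
  namespace: sched-pred-8400
spec:
  # Only a node carrying this exact label can host the pod.
  nodeSelector:
    kubernetes.io/e2e-20cfa00e-2aa6-4116-b5d4-77bea18f3aa4: "42"
  containers:
  - name: with-labels
    image: busybox
    command: ["sleep", "3600"]
```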
STEP: removing the label kubernetes.io/e2e-20cfa00e-2aa6-4116-b5d4-77bea18f3aa4 off the node iruya-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-20cfa00e-2aa6-4116-b5d4-77bea18f3aa4
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 13:29:54.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-8400" for this suite.
Apr 1 13:30:12.432: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 13:30:12.507: INFO: namespace sched-pred-8400 deletion completed in 18.089663508s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72
• [SLOW TEST:26.450 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
validates that NodeSelector is respected if matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 13:30:12.508: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Apr 1 13:30:12.598: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-4451,SelfLink:/api/v1/namespaces/watch-4451/configmaps/e2e-watch-test-label-changed,UID:fb9a3ecc-d6c4-4f4e-a075-fb4a3026472d,ResourceVersion:3036391,Generation:0,CreationTimestamp:2020-04-01 13:30:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Apr 1 13:30:12.598: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-4451,SelfLink:/api/v1/namespaces/watch-4451/configmaps/e2e-watch-test-label-changed,UID:fb9a3ecc-d6c4-4f4e-a075-fb4a3026472d,ResourceVersion:3036392,Generation:0,CreationTimestamp:2020-04-01 13:30:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Apr 1 13:30:12.598: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-4451,SelfLink:/api/v1/namespaces/watch-4451/configmaps/e2e-watch-test-label-changed,UID:fb9a3ecc-d6c4-4f4e-a075-fb4a3026472d,ResourceVersion:3036393,Generation:0,CreationTimestamp:2020-04-01 13:30:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Apr 1 13:30:22.646: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-4451,SelfLink:/api/v1/namespaces/watch-4451/configmaps/e2e-watch-test-label-changed,UID:fb9a3ecc-d6c4-4f4e-a075-fb4a3026472d,ResourceVersion:3036414,Generation:0,CreationTimestamp:2020-04-01 13:30:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Apr 1 13:30:22.646: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-4451,SelfLink:/api/v1/namespaces/watch-4451/configmaps/e2e-watch-test-label-changed,UID:fb9a3ecc-d6c4-4f4e-a075-fb4a3026472d,ResourceVersion:3036415,Generation:0,CreationTimestamp:2020-04-01 13:30:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Apr 1 13:30:22.646: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-4451,SelfLink:/api/v1/namespaces/watch-4451/configmaps/e2e-watch-test-label-changed,UID:fb9a3ecc-d6c4-4f4e-a075-fb4a3026472d,ResourceVersion:3036416,Generation:0,CreationTimestamp:2020-04-01 13:30:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 13:30:22.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4451" for this suite. 
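The ADDED/MODIFIED/DELETED events above come from a watch filtered by label selector. As an illustrative sketch reconstructed only from fields visible in the logged object (name, namespace, the `watch-this-configmap` label, and the `mutation` data key the test increments), the watched ConfigMap looks roughly like this:

```yaml
# Sketch of the ConfigMap the test creates, reconstructed from the logged
# object above. The label value is what the label-selector watch filters on;
# changing it away and then back produces the DELETED/ADDED pair in the log.
apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-label-changed
  namespace: watch-4451
  labels:
    watch-this-configmap: label-changed-and-restored
data:
  mutation: "1"
```

An equivalently filtered watch can be opened by hand with `kubectl get configmaps -n watch-4451 -l watch-this-configmap=label-changed-and-restored --watch`.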
Apr 1 13:30:28.660: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 13:30:28.743: INFO: namespace watch-4451 deletion completed in 6.093286694s • [SLOW TEST:16.236 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 13:30:28.744: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-4e37e076-a955-4641-a465-dcf49ce0660c STEP: Creating secret with name s-test-opt-upd-36e6e2ae-eb0c-4ac5-9155-211296e042e5 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-4e37e076-a955-4641-a465-dcf49ce0660c STEP: Updating secret s-test-opt-upd-36e6e2ae-eb0c-4ac5-9155-211296e042e5 STEP: Creating secret with name s-test-opt-create-0c6ca79b-f284-4a9b-94d1-0cb1788211ad STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 13:31:51.283: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2339" for this suite. Apr 1 13:32:13.303: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 13:32:13.394: INFO: namespace secrets-2339 deletion completed in 22.108795149s • [SLOW TEST:104.651 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 13:32:13.395: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-df2f7594-18b6-425a-aa63-231353e75051 STEP: Creating a pod to test consume configMaps Apr 1 13:32:13.501: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-be2a8611-cb1f-4b12-af93-22fe6cd3d40f" in namespace "projected-9108" to be "success or failure" Apr 1 13:32:13.507: INFO: Pod "pod-projected-configmaps-be2a8611-cb1f-4b12-af93-22fe6cd3d40f": 
Phase="Pending", Reason="", readiness=false. Elapsed: 5.524744ms Apr 1 13:32:15.511: INFO: Pod "pod-projected-configmaps-be2a8611-cb1f-4b12-af93-22fe6cd3d40f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009569705s Apr 1 13:32:17.516: INFO: Pod "pod-projected-configmaps-be2a8611-cb1f-4b12-af93-22fe6cd3d40f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014218698s STEP: Saw pod success Apr 1 13:32:17.516: INFO: Pod "pod-projected-configmaps-be2a8611-cb1f-4b12-af93-22fe6cd3d40f" satisfied condition "success or failure" Apr 1 13:32:17.519: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-be2a8611-cb1f-4b12-af93-22fe6cd3d40f container projected-configmap-volume-test: STEP: delete the pod Apr 1 13:32:17.554: INFO: Waiting for pod pod-projected-configmaps-be2a8611-cb1f-4b12-af93-22fe6cd3d40f to disappear Apr 1 13:32:17.582: INFO: Pod pod-projected-configmaps-be2a8611-cb1f-4b12-af93-22fe6cd3d40f no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 13:32:17.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9108" for this suite. 
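The projected-configMap test above mounts a ConfigMap key under a mapped path while running as a non-root user. A minimal sketch follows; the ConfigMap name matches the log, but the key name, mapped path, UID, and image are placeholders, since the log does not show them:

```yaml
# Illustrative pod consuming a ConfigMap via a projected volume with a
# key-to-path mapping, run as non-root. Key, path, UID, and image are
# placeholders (not shown in the log).
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example
  namespace: projected-9108
spec:
  securityContext:
    runAsUser: 1000        # non-root, as the [LinuxOnly] test requires
  containers:
  - name: projected-configmap-volume-test
    image: busybox          # placeholder image
    command: ["cat", "/etc/projected/my-path"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-map-df2f7594-18b6-425a-aa63-231353e75051
          items:
          - key: my-key     # placeholder mapping
            path: my-path
  restartPolicy: Never
```

The "success or failure" condition in the log corresponds to the pod reaching `Succeeded` after the container prints the mapped file's contents.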
Apr 1 13:32:23.600: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 13:32:23.681: INFO: namespace projected-9108 deletion completed in 6.094707598s • [SLOW TEST:10.286 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 13:32:23.681: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 1 13:32:23.754: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Apr 1 13:32:23.760: INFO: Number of nodes with available pods: 0 Apr 1 13:32:23.760: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
Apr 1 13:32:23.822: INFO: Number of nodes with available pods: 0 Apr 1 13:32:23.822: INFO: Node iruya-worker is running more than one daemon pod Apr 1 13:32:24.826: INFO: Number of nodes with available pods: 0 Apr 1 13:32:24.826: INFO: Node iruya-worker is running more than one daemon pod Apr 1 13:32:25.827: INFO: Number of nodes with available pods: 0 Apr 1 13:32:25.827: INFO: Node iruya-worker is running more than one daemon pod Apr 1 13:32:26.827: INFO: Number of nodes with available pods: 0 Apr 1 13:32:26.827: INFO: Node iruya-worker is running more than one daemon pod Apr 1 13:32:27.827: INFO: Number of nodes with available pods: 1 Apr 1 13:32:27.827: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Apr 1 13:32:27.880: INFO: Number of nodes with available pods: 1 Apr 1 13:32:27.880: INFO: Number of running nodes: 0, number of available pods: 1 Apr 1 13:32:28.893: INFO: Number of nodes with available pods: 0 Apr 1 13:32:28.893: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Apr 1 13:32:28.900: INFO: Number of nodes with available pods: 0 Apr 1 13:32:28.900: INFO: Node iruya-worker is running more than one daemon pod Apr 1 13:32:29.905: INFO: Number of nodes with available pods: 0 Apr 1 13:32:29.905: INFO: Node iruya-worker is running more than one daemon pod Apr 1 13:32:30.911: INFO: Number of nodes with available pods: 0 Apr 1 13:32:30.911: INFO: Node iruya-worker is running more than one daemon pod Apr 1 13:32:31.904: INFO: Number of nodes with available pods: 0 Apr 1 13:32:31.904: INFO: Node iruya-worker is running more than one daemon pod Apr 1 13:32:32.905: INFO: Number of nodes with available pods: 0 Apr 1 13:32:32.905: INFO: Node iruya-worker is running more than one daemon pod Apr 1 13:32:33.905: INFO: Number of nodes with available pods: 1 Apr 1 
13:32:33.905: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6913, will wait for the garbage collector to delete the pods Apr 1 13:32:33.971: INFO: Deleting DaemonSet.extensions daemon-set took: 7.229015ms Apr 1 13:32:34.271: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.276762ms Apr 1 13:32:42.276: INFO: Number of nodes with available pods: 0 Apr 1 13:32:42.276: INFO: Number of running nodes: 0, number of available pods: 0 Apr 1 13:32:42.279: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6913/daemonsets","resourceVersion":"3036814"},"items":null} Apr 1 13:32:42.282: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6913/pods","resourceVersion":"3036814"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 13:32:42.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6913" for this suite. 
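The daemonset flow above (no pods until a node is labeled blue, unscheduled when relabeled green, rescheduled after the selector is updated) relies on a DaemonSet with a node selector. A sketch under stated assumptions: the label key `color` and the image are assumptions, as the e2e test generates its own label key and the log does not name the image.

```yaml
# Illustrative DaemonSet with a node selector and RollingUpdate strategy,
# mirroring the test flow: pods schedule only on nodes carrying the selected
# label. The "color" key and the image are assumptions, not from the log.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
  namespace: daemonsets-6913
spec:
  selector:
    matchLabels:
      app: daemon-set
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      nodeSelector:
        color: blue        # relabeling a node to match schedules a daemon pod
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
```

With the assumed key, the "change node label to blue" step corresponds to something like `kubectl label node iruya-worker color=blue --overwrite`.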
Apr 1 13:32:48.331: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 13:32:48.411: INFO: namespace daemonsets-6913 deletion completed in 6.095612422s • [SLOW TEST:24.730 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 13:32:48.412: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's args Apr 1 13:32:48.473: INFO: Waiting up to 5m0s for pod "var-expansion-2f8034b2-e7fb-412b-8201-822593f15aaa" in namespace "var-expansion-1334" to be "success or failure" Apr 1 13:32:48.493: INFO: Pod "var-expansion-2f8034b2-e7fb-412b-8201-822593f15aaa": Phase="Pending", Reason="", readiness=false. Elapsed: 19.829408ms Apr 1 13:32:50.497: INFO: Pod "var-expansion-2f8034b2-e7fb-412b-8201-822593f15aaa": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.024339793s Apr 1 13:32:52.502: INFO: Pod "var-expansion-2f8034b2-e7fb-412b-8201-822593f15aaa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028610614s STEP: Saw pod success Apr 1 13:32:52.502: INFO: Pod "var-expansion-2f8034b2-e7fb-412b-8201-822593f15aaa" satisfied condition "success or failure" Apr 1 13:32:52.504: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-2f8034b2-e7fb-412b-8201-822593f15aaa container dapi-container: STEP: delete the pod Apr 1 13:32:52.521: INFO: Waiting for pod var-expansion-2f8034b2-e7fb-412b-8201-822593f15aaa to disappear Apr 1 13:32:52.525: INFO: Pod var-expansion-2f8034b2-e7fb-412b-8201-822593f15aaa no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 13:32:52.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1334" for this suite. Apr 1 13:32:58.541: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 13:32:58.618: INFO: namespace var-expansion-1334 deletion completed in 6.089187293s • [SLOW TEST:10.206 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a 
kubernetes client Apr 1 13:32:58.618: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 1 13:33:16.719: INFO: Container started at 2020-04-01 13:33:00 +0000 UTC, pod became ready at 2020-04-01 13:33:15 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 13:33:16.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1534" for this suite. Apr 1 13:33:38.736: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 13:33:38.818: INFO: namespace container-probe-1534 deletion completed in 22.095165872s • [SLOW TEST:40.200 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a 
kubernetes client Apr 1 13:33:38.818: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test use defaults Apr 1 13:33:38.899: INFO: Waiting up to 5m0s for pod "client-containers-b4aaa96d-c8b5-43cf-95d4-1a4f9c2ce222" in namespace "containers-7250" to be "success or failure" Apr 1 13:33:38.903: INFO: Pod "client-containers-b4aaa96d-c8b5-43cf-95d4-1a4f9c2ce222": Phase="Pending", Reason="", readiness=false. Elapsed: 3.791348ms Apr 1 13:33:40.907: INFO: Pod "client-containers-b4aaa96d-c8b5-43cf-95d4-1a4f9c2ce222": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007605794s Apr 1 13:33:42.911: INFO: Pod "client-containers-b4aaa96d-c8b5-43cf-95d4-1a4f9c2ce222": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011982807s STEP: Saw pod success Apr 1 13:33:42.911: INFO: Pod "client-containers-b4aaa96d-c8b5-43cf-95d4-1a4f9c2ce222" satisfied condition "success or failure" Apr 1 13:33:42.915: INFO: Trying to get logs from node iruya-worker2 pod client-containers-b4aaa96d-c8b5-43cf-95d4-1a4f9c2ce222 container test-container: STEP: delete the pod Apr 1 13:33:42.948: INFO: Waiting for pod client-containers-b4aaa96d-c8b5-43cf-95d4-1a4f9c2ce222 to disappear Apr 1 13:33:42.963: INFO: Pod client-containers-b4aaa96d-c8b5-43cf-95d4-1a4f9c2ce222 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 13:33:42.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-7250" for this suite. 
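The Docker Containers test above verifies that a container with neither `command` nor `args` falls back to the image's built-in ENTRYPOINT/CMD. A minimal sketch, with the image as a placeholder since the log does not name it:

```yaml
# Illustrative pod that sets neither command nor args, so the container runs
# the image's default ENTRYPOINT/CMD, which is what the test checks in the
# container logs. The image is a placeholder, not taken from the log.
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-example
  namespace: containers-7250
spec:
  containers:
  - name: test-container
    image: busybox:1.29     # placeholder; any image with a default CMD works
  restartPolicy: Never
```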
Apr 1 13:33:48.978: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 13:33:49.054: INFO: namespace containers-7250 deletion completed in 6.087713442s • [SLOW TEST:10.236 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 13:33:49.055: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Apr 1 13:33:57.214: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 1 13:33:57.235: INFO: Pod pod-with-poststart-http-hook still exists Apr 1 13:33:59.236: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 1 13:33:59.240: INFO: Pod pod-with-poststart-http-hook still exists Apr 1 13:34:01.236: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 1 13:34:01.240: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 13:34:01.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-146" for this suite. 
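The lifecycle-hook test above creates a handler pod (the "container to handle the HTTPGet hook request" in BeforeEach), then a pod whose postStart hook calls it over HTTP. A sketch of the hooked pod; the handler's IP, port, and path are placeholders, as the log does not show them:

```yaml
# Illustrative pod with a postStart HTTP lifecycle hook. The hook target
# (host/port/path) is a placeholder for the separately created handler pod.
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook
  namespace: container-lifecycle-hook-146
spec:
  containers:
  - name: pod-with-poststart-http-hook
    image: docker.io/library/nginx:1.14-alpine
    lifecycle:
      postStart:
        httpGet:
          path: /echo?msg=poststart   # placeholder path
          host: 10.244.1.10           # placeholder handler pod IP
          port: 8080                  # placeholder port
```

The kubelet runs the hook immediately after the container starts; the test then confirms the handler observed the request before deleting the pod.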
Apr 1 13:34:23.257: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 13:34:23.358: INFO: namespace container-lifecycle-hook-146 deletion completed in 22.114227624s • [SLOW TEST:34.303 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 13:34:23.359: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Apr 1 13:34:23.420: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-1342,SelfLink:/api/v1/namespaces/watch-1342/configmaps/e2e-watch-test-configmap-a,UID:34437377-f2ee-4700-ad33-66d095fe0390,ResourceVersion:3037148,Generation:0,CreationTimestamp:2020-04-01 13:34:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Apr 1 13:34:23.420: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-1342,SelfLink:/api/v1/namespaces/watch-1342/configmaps/e2e-watch-test-configmap-a,UID:34437377-f2ee-4700-ad33-66d095fe0390,ResourceVersion:3037148,Generation:0,CreationTimestamp:2020-04-01 13:34:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification Apr 1 13:34:33.429: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-1342,SelfLink:/api/v1/namespaces/watch-1342/configmaps/e2e-watch-test-configmap-a,UID:34437377-f2ee-4700-ad33-66d095fe0390,ResourceVersion:3037169,Generation:0,CreationTimestamp:2020-04-01 13:34:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Apr 1 13:34:33.429: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-1342,SelfLink:/api/v1/namespaces/watch-1342/configmaps/e2e-watch-test-configmap-a,UID:34437377-f2ee-4700-ad33-66d095fe0390,ResourceVersion:3037169,Generation:0,CreationTimestamp:2020-04-01 13:34:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Apr 1 13:34:43.440: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-1342,SelfLink:/api/v1/namespaces/watch-1342/configmaps/e2e-watch-test-configmap-a,UID:34437377-f2ee-4700-ad33-66d095fe0390,ResourceVersion:3037189,Generation:0,CreationTimestamp:2020-04-01 13:34:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Apr 1 13:34:43.440: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-1342,SelfLink:/api/v1/namespaces/watch-1342/configmaps/e2e-watch-test-configmap-a,UID:34437377-f2ee-4700-ad33-66d095fe0390,ResourceVersion:3037189,Generation:0,CreationTimestamp:2020-04-01 13:34:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification Apr 1 13:34:53.447: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-1342,SelfLink:/api/v1/namespaces/watch-1342/configmaps/e2e-watch-test-configmap-a,UID:34437377-f2ee-4700-ad33-66d095fe0390,ResourceVersion:3037210,Generation:0,CreationTimestamp:2020-04-01 13:34:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Apr 1 13:34:53.447: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-1342,SelfLink:/api/v1/namespaces/watch-1342/configmaps/e2e-watch-test-configmap-a,UID:34437377-f2ee-4700-ad33-66d095fe0390,ResourceVersion:3037210,Generation:0,CreationTimestamp:2020-04-01 13:34:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Apr 1 13:35:03.498: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-1342,SelfLink:/api/v1/namespaces/watch-1342/configmaps/e2e-watch-test-configmap-b,UID:814d7e7b-e234-4677-a32e-65e6c338f045,ResourceVersion:3037230,Generation:0,CreationTimestamp:2020-04-01 13:35:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Apr 1 13:35:03.498: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-1342,SelfLink:/api/v1/namespaces/watch-1342/configmaps/e2e-watch-test-configmap-b,UID:814d7e7b-e234-4677-a32e-65e6c338f045,ResourceVersion:3037230,Generation:0,CreationTimestamp:2020-04-01 13:35:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Apr 1 13:35:13.506: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-1342,SelfLink:/api/v1/namespaces/watch-1342/configmaps/e2e-watch-test-configmap-b,UID:814d7e7b-e234-4677-a32e-65e6c338f045,ResourceVersion:3037251,Generation:0,CreationTimestamp:2020-04-01 13:35:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Apr 1 13:35:13.506: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-1342,SelfLink:/api/v1/namespaces/watch-1342/configmaps/e2e-watch-test-configmap-b,UID:814d7e7b-e234-4677-a32e-65e6c338f045,ResourceVersion:3037251,Generation:0,CreationTimestamp:2020-04-01 13:35:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 13:35:23.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-1342" for this suite. 
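The multi-watcher test above opens three watches (label A, label B, A-or-B) and checks that each sees exactly the events for its selector. The two watched objects, reconstructed from the logged dumps:

```yaml
# Illustrative ConfigMaps matching the two label values the test watches.
# Watcher A sees events for the first, watcher B for the second, and the
# A-or-B watcher sees both, matching the duplicated events in the log.
apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-configmap-a
  namespace: watch-1342
  labels:
    watch-this-configmap: multiple-watchers-A
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-configmap-b
  namespace: watch-1342
  labels:
    watch-this-configmap: multiple-watchers-B
```

The A-or-B selector can be expressed as a set-based selector, e.g. `-l 'watch-this-configmap in (multiple-watchers-A,multiple-watchers-B)'`.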
Apr 1 13:35:29.523: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 13:35:29.635: INFO: namespace watch-1342 deletion completed in 6.123759501s • [SLOW TEST:66.276 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 13:35:29.635: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-6e1c573c-a183-4faf-b031-bfd31c571d75 STEP: Creating a pod to test consume secrets Apr 1 13:35:29.737: INFO: Waiting up to 5m0s for pod "pod-secrets-1ebcfeba-fa6e-4616-aa6e-359d5b314888" in namespace "secrets-3353" to be "success or failure" Apr 1 13:35:29.740: INFO: Pod "pod-secrets-1ebcfeba-fa6e-4616-aa6e-359d5b314888": Phase="Pending", Reason="", readiness=false. Elapsed: 3.193295ms Apr 1 13:35:31.747: INFO: Pod "pod-secrets-1ebcfeba-fa6e-4616-aa6e-359d5b314888": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.0102236s Apr 1 13:35:33.751: INFO: Pod "pod-secrets-1ebcfeba-fa6e-4616-aa6e-359d5b314888": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014179049s STEP: Saw pod success Apr 1 13:35:33.751: INFO: Pod "pod-secrets-1ebcfeba-fa6e-4616-aa6e-359d5b314888" satisfied condition "success or failure" Apr 1 13:35:33.754: INFO: Trying to get logs from node iruya-worker pod pod-secrets-1ebcfeba-fa6e-4616-aa6e-359d5b314888 container secret-volume-test: STEP: delete the pod Apr 1 13:35:33.790: INFO: Waiting for pod pod-secrets-1ebcfeba-fa6e-4616-aa6e-359d5b314888 to disappear Apr 1 13:35:33.809: INFO: Pod pod-secrets-1ebcfeba-fa6e-4616-aa6e-359d5b314888 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 13:35:33.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3353" for this suite. Apr 1 13:35:39.830: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 13:35:39.912: INFO: namespace secrets-3353 deletion completed in 6.098960621s • [SLOW TEST:10.277 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 13:35:39.912: INFO: >>> kubeConfig: /root/.kube/config 
STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Apr 1 13:35:39.983: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-3458' Apr 1 13:35:42.474: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Apr 1 13:35:42.474: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created Apr 1 13:35:42.481: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0 Apr 1 13:35:42.485: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller Apr 1 13:35:42.518: INFO: scanned /root for discovery docs: Apr 1 13:35:42.518: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-3458' Apr 1 13:35:58.309: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Apr 1 13:35:58.309: INFO: stdout: "Created e2e-test-nginx-rc-365a9123262daa03775b83e4d6b3c8bb\nScaling up 
e2e-test-nginx-rc-365a9123262daa03775b83e4d6b3c8bb from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-365a9123262daa03775b83e4d6b3c8bb up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-365a9123262daa03775b83e4d6b3c8bb to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. Apr 1 13:35:58.309: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-3458' Apr 1 13:35:58.396: INFO: stderr: "" Apr 1 13:35:58.396: INFO: stdout: "e2e-test-nginx-rc-365a9123262daa03775b83e4d6b3c8bb-6wb88 " Apr 1 13:35:58.396: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-365a9123262daa03775b83e4d6b3c8bb-6wb88 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists .
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3458' Apr 1 13:35:58.484: INFO: stderr: "" Apr 1 13:35:58.484: INFO: stdout: "true" Apr 1 13:35:58.484: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-365a9123262daa03775b83e4d6b3c8bb-6wb88 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3458' Apr 1 13:35:58.570: INFO: stderr: "" Apr 1 13:35:58.570: INFO: stdout: "docker.io/library/nginx:1.14-alpine" Apr 1 13:35:58.570: INFO: e2e-test-nginx-rc-365a9123262daa03775b83e4d6b3c8bb-6wb88 is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522 Apr 1 13:35:58.570: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-3458' Apr 1 13:35:58.678: INFO: stderr: "" Apr 1 13:35:58.678: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 13:35:58.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3458" for this suite. 
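The rolling-update stdout above describes the scaling plan kubectl reports: bring the new controller up and the old one down stepwise, keeping 1 pod available and never exceeding 2 pods. A simplified sketch of that arithmetic (a hypothetical model, not kubectl's actual implementation):

```python
# Simulate the one-replica rolling update reported in the log:
# scale new up / old down in steps, honoring min-available and max-total.
def rolling_update_steps(old=1, new=0, min_available=1, max_total=2):
    steps = []
    while old > 0 or new < 1:
        if new < 1 and old + new < max_total:
            new += 1
            steps.append(f"scale new up to {new}")
        elif old > 0 and old + new - 1 >= min_available:
            old -= 1
            steps.append(f"scale old down to {old}")
    return steps

# Mirrors the log: "Scaling ... up to 1" then "Scaling e2e-test-nginx-rc down to 0".
print(rolling_update_steps())
```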
Apr 1 13:36:20.720: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 13:36:20.799: INFO: namespace kubectl-3458 deletion completed in 22.101000414s • [SLOW TEST:40.888 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 13:36:20.800: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller Apr 1 13:36:20.848: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9914' Apr 1 13:36:21.112: INFO: stderr: "" Apr 1 13:36:21.112: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: 
waiting for all containers in name=update-demo pods to come up. Apr 1 13:36:21.112: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9914' Apr 1 13:36:21.212: INFO: stderr: "" Apr 1 13:36:21.212: INFO: stdout: "update-demo-nautilus-f9l7s update-demo-nautilus-p82xq " Apr 1 13:36:21.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f9l7s -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9914' Apr 1 13:36:21.307: INFO: stderr: "" Apr 1 13:36:21.307: INFO: stdout: "" Apr 1 13:36:21.307: INFO: update-demo-nautilus-f9l7s is created but not running Apr 1 13:36:26.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9914' Apr 1 13:36:26.402: INFO: stderr: "" Apr 1 13:36:26.402: INFO: stdout: "update-demo-nautilus-f9l7s update-demo-nautilus-p82xq " Apr 1 13:36:26.402: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f9l7s -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9914' Apr 1 13:36:26.490: INFO: stderr: "" Apr 1 13:36:26.490: INFO: stdout: "true" Apr 1 13:36:26.490: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f9l7s -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9914' Apr 1 13:36:26.585: INFO: stderr: "" Apr 1 13:36:26.585: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 1 13:36:26.585: INFO: validating pod update-demo-nautilus-f9l7s Apr 1 13:36:26.589: INFO: got data: { "image": "nautilus.jpg" } Apr 1 13:36:26.589: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 1 13:36:26.589: INFO: update-demo-nautilus-f9l7s is verified up and running Apr 1 13:36:26.589: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-p82xq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9914' Apr 1 13:36:26.689: INFO: stderr: "" Apr 1 13:36:26.689: INFO: stdout: "true" Apr 1 13:36:26.689: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-p82xq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9914' Apr 1 13:36:26.775: INFO: stderr: "" Apr 1 13:36:26.775: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 1 13:36:26.775: INFO: validating pod update-demo-nautilus-p82xq Apr 1 13:36:26.779: INFO: got data: { "image": "nautilus.jpg" } Apr 1 13:36:26.779: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Apr 1 13:36:26.779: INFO: update-demo-nautilus-p82xq is verified up and running STEP: using delete to clean up resources Apr 1 13:36:26.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9914' Apr 1 13:36:26.885: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 1 13:36:26.885: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Apr 1 13:36:26.885: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9914' Apr 1 13:36:26.987: INFO: stderr: "No resources found.\n" Apr 1 13:36:26.987: INFO: stdout: "" Apr 1 13:36:26.987: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9914 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 1 13:36:27.082: INFO: stderr: "" Apr 1 13:36:27.083: INFO: stdout: "update-demo-nautilus-f9l7s\nupdate-demo-nautilus-p82xq\n" Apr 1 13:36:27.583: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9914' Apr 1 13:36:27.669: INFO: stderr: "No resources found.\n" Apr 1 13:36:27.669: INFO: stdout: "" Apr 1 13:36:27.670: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9914 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 1 13:36:27.762: INFO: stderr: "" Apr 1 13:36:27.762: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 13:36:27.762: INFO: Waiting up 
to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9914" for this suite. Apr 1 13:36:33.782: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 13:36:33.900: INFO: namespace kubectl-9914 deletion completed in 6.134148846s • [SLOW TEST:13.101 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 13:36:33.902: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-projected-8qxm STEP: Creating a pod to test atomic-volume-subpath Apr 1 13:36:33.983: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-8qxm" in namespace "subpath-7933" to be "success or failure" Apr 1 
13:36:33.987: INFO: Pod "pod-subpath-test-projected-8qxm": Phase="Pending", Reason="", readiness=false. Elapsed: 3.789275ms Apr 1 13:36:35.991: INFO: Pod "pod-subpath-test-projected-8qxm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007424675s Apr 1 13:36:37.994: INFO: Pod "pod-subpath-test-projected-8qxm": Phase="Running", Reason="", readiness=true. Elapsed: 4.011105171s Apr 1 13:36:39.999: INFO: Pod "pod-subpath-test-projected-8qxm": Phase="Running", Reason="", readiness=true. Elapsed: 6.015354132s Apr 1 13:36:42.003: INFO: Pod "pod-subpath-test-projected-8qxm": Phase="Running", Reason="", readiness=true. Elapsed: 8.019933042s Apr 1 13:36:44.012: INFO: Pod "pod-subpath-test-projected-8qxm": Phase="Running", Reason="", readiness=true. Elapsed: 10.029019001s Apr 1 13:36:46.016: INFO: Pod "pod-subpath-test-projected-8qxm": Phase="Running", Reason="", readiness=true. Elapsed: 12.03271832s Apr 1 13:36:48.020: INFO: Pod "pod-subpath-test-projected-8qxm": Phase="Running", Reason="", readiness=true. Elapsed: 14.036964282s Apr 1 13:36:50.024: INFO: Pod "pod-subpath-test-projected-8qxm": Phase="Running", Reason="", readiness=true. Elapsed: 16.041228828s Apr 1 13:36:52.028: INFO: Pod "pod-subpath-test-projected-8qxm": Phase="Running", Reason="", readiness=true. Elapsed: 18.044988356s Apr 1 13:36:54.032: INFO: Pod "pod-subpath-test-projected-8qxm": Phase="Running", Reason="", readiness=true. Elapsed: 20.04914993s Apr 1 13:36:56.037: INFO: Pod "pod-subpath-test-projected-8qxm": Phase="Running", Reason="", readiness=true. Elapsed: 22.053937793s Apr 1 13:36:58.041: INFO: Pod "pod-subpath-test-projected-8qxm": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.057984339s STEP: Saw pod success Apr 1 13:36:58.041: INFO: Pod "pod-subpath-test-projected-8qxm" satisfied condition "success or failure" Apr 1 13:36:58.044: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-projected-8qxm container test-container-subpath-projected-8qxm: STEP: delete the pod Apr 1 13:36:58.079: INFO: Waiting for pod pod-subpath-test-projected-8qxm to disappear Apr 1 13:36:58.091: INFO: Pod pod-subpath-test-projected-8qxm no longer exists STEP: Deleting pod pod-subpath-test-projected-8qxm Apr 1 13:36:58.091: INFO: Deleting pod "pod-subpath-test-projected-8qxm" in namespace "subpath-7933" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 13:36:58.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7933" for this suite. Apr 1 13:37:04.113: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 13:37:04.192: INFO: namespace subpath-7933 deletion completed in 6.094828361s • [SLOW TEST:30.290 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: 
Creating a kubernetes client Apr 1 13:37:04.192: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 1 13:37:04.325: INFO: Waiting up to 5m0s for pod "downwardapi-volume-92c1abf7-0d0b-4bc8-85e8-31db7b8decf7" in namespace "downward-api-760" to be "success or failure" Apr 1 13:37:04.362: INFO: Pod "downwardapi-volume-92c1abf7-0d0b-4bc8-85e8-31db7b8decf7": Phase="Pending", Reason="", readiness=false. Elapsed: 37.239879ms Apr 1 13:37:06.367: INFO: Pod "downwardapi-volume-92c1abf7-0d0b-4bc8-85e8-31db7b8decf7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041752783s Apr 1 13:37:08.371: INFO: Pod "downwardapi-volume-92c1abf7-0d0b-4bc8-85e8-31db7b8decf7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.045764311s STEP: Saw pod success Apr 1 13:37:08.371: INFO: Pod "downwardapi-volume-92c1abf7-0d0b-4bc8-85e8-31db7b8decf7" satisfied condition "success or failure" Apr 1 13:37:08.373: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-92c1abf7-0d0b-4bc8-85e8-31db7b8decf7 container client-container: STEP: delete the pod Apr 1 13:37:08.386: INFO: Waiting for pod downwardapi-volume-92c1abf7-0d0b-4bc8-85e8-31db7b8decf7 to disappear Apr 1 13:37:08.402: INFO: Pod downwardapi-volume-92c1abf7-0d0b-4bc8-85e8-31db7b8decf7 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 13:37:08.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-760" for this suite. Apr 1 13:37:14.418: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 13:37:14.488: INFO: namespace downward-api-760 deletion completed in 6.082592289s • [SLOW TEST:10.296 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 13:37:14.489: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename 
containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override all Apr 1 13:37:14.573: INFO: Waiting up to 5m0s for pod "client-containers-59195e66-3246-4bcd-973e-1222e33c0730" in namespace "containers-5649" to be "success or failure" Apr 1 13:37:14.577: INFO: Pod "client-containers-59195e66-3246-4bcd-973e-1222e33c0730": Phase="Pending", Reason="", readiness=false. Elapsed: 3.298081ms Apr 1 13:37:16.581: INFO: Pod "client-containers-59195e66-3246-4bcd-973e-1222e33c0730": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007655945s Apr 1 13:37:18.585: INFO: Pod "client-containers-59195e66-3246-4bcd-973e-1222e33c0730": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011551671s STEP: Saw pod success Apr 1 13:37:18.585: INFO: Pod "client-containers-59195e66-3246-4bcd-973e-1222e33c0730" satisfied condition "success or failure" Apr 1 13:37:18.588: INFO: Trying to get logs from node iruya-worker2 pod client-containers-59195e66-3246-4bcd-973e-1222e33c0730 container test-container: STEP: delete the pod Apr 1 13:37:18.623: INFO: Waiting for pod client-containers-59195e66-3246-4bcd-973e-1222e33c0730 to disappear Apr 1 13:37:18.635: INFO: Pod client-containers-59195e66-3246-4bcd-973e-1222e33c0730 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 13:37:18.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-5649" for this suite. 
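Many of the tests above poll the pod phase every ~2s until it reaches the "success or failure" condition (Pending, then Running, then Succeeded or Failed). A simplified sketch of that polling pattern, assuming an in-memory sequence of observed phases (not the e2e framework's WaitForPodCondition):

```python
# Walk a sequence of observed pod phases and stop at the first terminal
# one, returning the phase and how many polls elapsed (simplified model).
def wait_for_completion(phases):
    for elapsed, phase in enumerate(phases):
        if phase in ("Succeeded", "Failed"):
            return phase, elapsed
    raise TimeoutError("pod never completed")

# e.g. the log's Pending -> Pending -> Succeeded progression:
print(wait_for_completion(["Pending", "Pending", "Succeeded"]))
```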
Apr 1 13:37:24.650: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 13:37:24.727: INFO: namespace containers-5649 deletion completed in 6.089153966s • [SLOW TEST:10.238 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 13:37:24.728: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name secret-emptykey-test-1c2cf85b-b8ba-41d8-a65f-c1fc6304f3f1 [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 13:37:24.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2214" for this suite. 
Apr 1 13:37:30.824: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 13:37:30.953: INFO: namespace secrets-2214 deletion completed in 6.143615887s
• [SLOW TEST:6.225 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-node] Downward API
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 13:37:30.953: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Apr 1 13:37:31.008: INFO: Waiting up to 5m0s for pod "downward-api-bcfbefd0-a978-484d-b224-68ada0785047" in namespace "downward-api-156" to be "success or failure"
Apr 1 13:37:31.041: INFO: Pod "downward-api-bcfbefd0-a978-484d-b224-68ada0785047": Phase="Pending", Reason="", readiness=false. Elapsed: 33.20326ms
Apr 1 13:37:33.096: INFO: Pod "downward-api-bcfbefd0-a978-484d-b224-68ada0785047": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087235556s
Apr 1 13:37:35.099: INFO: Pod "downward-api-bcfbefd0-a978-484d-b224-68ada0785047": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.091068925s
STEP: Saw pod success
Apr 1 13:37:35.099: INFO: Pod "downward-api-bcfbefd0-a978-484d-b224-68ada0785047" satisfied condition "success or failure"
Apr 1 13:37:35.102: INFO: Trying to get logs from node iruya-worker pod downward-api-bcfbefd0-a978-484d-b224-68ada0785047 container dapi-container:
STEP: delete the pod
Apr 1 13:37:35.122: INFO: Waiting for pod downward-api-bcfbefd0-a978-484d-b224-68ada0785047 to disappear
Apr 1 13:37:35.156: INFO: Pod downward-api-bcfbefd0-a978-484d-b224-68ada0785047 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 13:37:35.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-156" for this suite.
Apr 1 13:37:41.171: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 13:37:41.240: INFO: namespace downward-api-156 deletion completed in 6.081093783s
• [SLOW TEST:10.287 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 13:37:41.241: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-dab2f440-0720-4c8d-9676-87ab0816c0e4
STEP: Creating a pod to test consume configMaps
Apr 1 13:37:41.316: INFO: Waiting up to 5m0s for pod "pod-configmaps-ee51a895-ccc6-4b93-80f7-231a9a299ca9" in namespace "configmap-3333" to be "success or failure"
Apr 1 13:37:41.320: INFO: Pod "pod-configmaps-ee51a895-ccc6-4b93-80f7-231a9a299ca9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.376271ms
Apr 1 13:37:43.324: INFO: Pod "pod-configmaps-ee51a895-ccc6-4b93-80f7-231a9a299ca9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007510034s
Apr 1 13:37:45.328: INFO: Pod "pod-configmaps-ee51a895-ccc6-4b93-80f7-231a9a299ca9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011650318s
STEP: Saw pod success
Apr 1 13:37:45.328: INFO: Pod "pod-configmaps-ee51a895-ccc6-4b93-80f7-231a9a299ca9" satisfied condition "success or failure"
Apr 1 13:37:45.331: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-ee51a895-ccc6-4b93-80f7-231a9a299ca9 container configmap-volume-test:
STEP: delete the pod
Apr 1 13:37:45.376: INFO: Waiting for pod pod-configmaps-ee51a895-ccc6-4b93-80f7-231a9a299ca9 to disappear
Apr 1 13:37:45.404: INFO: Pod pod-configmaps-ee51a895-ccc6-4b93-80f7-231a9a299ca9 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 13:37:45.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3333" for this suite.
Apr 1 13:37:51.419: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 13:37:51.498: INFO: namespace configmap-3333 deletion completed in 6.090585244s
• [SLOW TEST:10.257 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Downward API volume
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 13:37:51.498: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Apr 1 13:37:56.162: INFO: Successfully updated pod "labelsupdate0e044d95-17b7-4532-bb6f-48d41179911a"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 13:37:58.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4167" for this suite.
Apr 1 13:38:20.196: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 13:38:20.320: INFO: namespace downward-api-4167 deletion completed in 22.142266681s
• [SLOW TEST:28.822 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 13:38:20.321: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 1 13:38:20.373: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ec871d05-fe58-4734-9946-82b4ce9027be" in namespace "downward-api-4698" to be "success or failure"
Apr 1 13:38:20.385: INFO: Pod "downwardapi-volume-ec871d05-fe58-4734-9946-82b4ce9027be": Phase="Pending", Reason="", readiness=false. Elapsed: 12.36347ms
Apr 1 13:38:22.390: INFO: Pod "downwardapi-volume-ec871d05-fe58-4734-9946-82b4ce9027be": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017226548s
Apr 1 13:38:24.394: INFO: Pod "downwardapi-volume-ec871d05-fe58-4734-9946-82b4ce9027be": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02155917s
STEP: Saw pod success
Apr 1 13:38:24.394: INFO: Pod "downwardapi-volume-ec871d05-fe58-4734-9946-82b4ce9027be" satisfied condition "success or failure"
Apr 1 13:38:24.397: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-ec871d05-fe58-4734-9946-82b4ce9027be container client-container:
STEP: delete the pod
Apr 1 13:38:24.426: INFO: Waiting for pod downwardapi-volume-ec871d05-fe58-4734-9946-82b4ce9027be to disappear
Apr 1 13:38:24.439: INFO: Pod downwardapi-volume-ec871d05-fe58-4734-9946-82b4ce9027be no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 13:38:24.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4698" for this suite.
Apr 1 13:38:30.454: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 13:38:30.537: INFO: namespace downward-api-4698 deletion completed in 6.094131795s
• [SLOW TEST:10.216 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-network] Networking Granular Checks: Pods
  should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 13:38:30.537: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-5209
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 1 13:38:30.574: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Apr 1 13:38:54.682: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.205:8080/dial?request=hostName&protocol=udp&host=10.244.1.186&port=8081&tries=1'] Namespace:pod-network-test-5209 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 1 13:38:54.682: INFO: >>> kubeConfig: /root/.kube/config
I0401 13:38:54.716800 6 log.go:172] (0xc002120210) (0xc000320b40) Create stream
I0401 13:38:54.716830 6 log.go:172] (0xc002120210) (0xc000320b40) Stream added, broadcasting: 1
I0401 13:38:54.718921 6 log.go:172] (0xc002120210) Reply frame received for 1
I0401 13:38:54.718968 6 log.go:172] (0xc002120210) (0xc00110e0a0) Create stream
I0401 13:38:54.718985 6 log.go:172] (0xc002120210) (0xc00110e0a0) Stream added, broadcasting: 3
I0401 13:38:54.720280 6 log.go:172] (0xc002120210) Reply frame received for 3
I0401 13:38:54.720333 6 log.go:172] (0xc002120210) (0xc0028ee3c0) Create stream
I0401 13:38:54.720350 6 log.go:172] (0xc002120210) (0xc0028ee3c0) Stream added, broadcasting: 5
I0401 13:38:54.721585 6 log.go:172] (0xc002120210) Reply frame received for 5
I0401 13:38:54.833015 6 log.go:172] (0xc002120210) Data frame received for 3
I0401 13:38:54.833050 6 log.go:172] (0xc00110e0a0) (3) Data frame handling
I0401 13:38:54.833069 6 log.go:172] (0xc00110e0a0) (3) Data frame sent
I0401 13:38:54.834095 6 log.go:172] (0xc002120210) Data frame received for 5
I0401 13:38:54.834121 6 log.go:172] (0xc0028ee3c0) (5) Data frame handling
I0401 13:38:54.834148 6 log.go:172] (0xc002120210) Data frame received for 3
I0401 13:38:54.834160 6 log.go:172] (0xc00110e0a0) (3) Data frame handling
I0401 13:38:54.835839 6 log.go:172] (0xc002120210) Data frame received for 1
I0401 13:38:54.835874 6 log.go:172] (0xc000320b40) (1) Data frame handling
I0401 13:38:54.835905 6 log.go:172] (0xc000320b40) (1) Data frame sent
I0401 13:38:54.835930 6 log.go:172] (0xc002120210) (0xc000320b40) Stream removed, broadcasting: 1
I0401 13:38:54.835958 6 log.go:172] (0xc002120210) Go away received
I0401 13:38:54.836069 6 log.go:172] (0xc002120210) (0xc000320b40) Stream removed, broadcasting: 1
I0401 13:38:54.836084 6 log.go:172] (0xc002120210) (0xc00110e0a0) Stream removed, broadcasting: 3
I0401 13:38:54.836091 6 log.go:172] (0xc002120210) (0xc0028ee3c0) Stream removed, broadcasting: 5
Apr 1 13:38:54.836: INFO: Waiting for endpoints: map[]
Apr 1 13:38:54.839: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.205:8080/dial?request=hostName&protocol=udp&host=10.244.2.204&port=8081&tries=1'] Namespace:pod-network-test-5209 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 1 13:38:54.839: INFO: >>> kubeConfig: /root/.kube/config
I0401 13:38:54.873794 6 log.go:172] (0xc000746e70) (0xc00110e3c0) Create stream
I0401 13:38:54.873825 6 log.go:172] (0xc000746e70) (0xc00110e3c0) Stream added, broadcasting: 1
I0401 13:38:54.875623 6 log.go:172] (0xc000746e70) Reply frame received for 1
I0401 13:38:54.875701 6 log.go:172] (0xc000746e70) (0xc0022400a0) Create stream
I0401 13:38:54.875721 6 log.go:172] (0xc000746e70) (0xc0022400a0) Stream added, broadcasting: 3
I0401 13:38:54.876484 6 log.go:172] (0xc000746e70) Reply frame received for 3
I0401 13:38:54.876514 6 log.go:172] (0xc000746e70) (0xc00110e500) Create stream
I0401 13:38:54.876524 6 log.go:172] (0xc000746e70) (0xc00110e500) Stream added, broadcasting: 5
I0401 13:38:54.877367 6 log.go:172] (0xc000746e70) Reply frame received for 5
I0401 13:38:54.940727 6 log.go:172] (0xc000746e70) Data frame received for 3
I0401 13:38:54.940769 6 log.go:172] (0xc0022400a0) (3) Data frame handling
I0401 13:38:54.940795 6 log.go:172] (0xc0022400a0) (3) Data frame sent
I0401 13:38:54.941680 6 log.go:172] (0xc000746e70) Data frame received for 3
I0401 13:38:54.941713 6 log.go:172] (0xc0022400a0) (3) Data frame handling
I0401 13:38:54.941753 6 log.go:172] (0xc000746e70) Data frame received for 5
I0401 13:38:54.941789 6 log.go:172] (0xc00110e500) (5) Data frame handling
I0401 13:38:54.943113 6 log.go:172] (0xc000746e70) Data frame received for 1
I0401 13:38:54.943147 6 log.go:172] (0xc00110e3c0) (1) Data frame handling
I0401 13:38:54.943177 6 log.go:172] (0xc00110e3c0) (1) Data frame sent
I0401 13:38:54.943193 6 log.go:172] (0xc000746e70) (0xc00110e3c0) Stream removed, broadcasting: 1
I0401 13:38:54.943211 6 log.go:172] (0xc000746e70) Go away received
I0401 13:38:54.943321 6 log.go:172] (0xc000746e70) (0xc00110e3c0) Stream removed, broadcasting: 1
I0401 13:38:54.943338 6 log.go:172] (0xc000746e70) (0xc0022400a0) Stream removed, broadcasting: 3
I0401 13:38:54.943345 6 log.go:172] (0xc000746e70) (0xc00110e500) Stream removed, broadcasting: 5
Apr 1 13:38:54.943: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 13:38:54.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-5209" for this suite.
Apr 1 13:39:18.972: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 13:39:19.069: INFO: namespace pod-network-test-5209 deletion completed in 24.121893191s
• [SLOW TEST:48.532 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 13:39:19.071: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-648
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 1 13:39:19.131: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Apr 1 13:39:45.233: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.187 8081 | grep -v '^\s*$'] Namespace:pod-network-test-648 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 1 13:39:45.233: INFO: >>> kubeConfig: /root/.kube/config
I0401 13:39:45.262963 6 log.go:172] (0xc001832580) (0xc002097ae0) Create stream
I0401 13:39:45.262987 6 log.go:172] (0xc001832580) (0xc002097ae0) Stream added, broadcasting: 1
I0401 13:39:45.264529 6 log.go:172] (0xc001832580) Reply frame received for 1
I0401 13:39:45.264570 6 log.go:172] (0xc001832580) (0xc0031417c0) Create stream
I0401 13:39:45.264580 6 log.go:172] (0xc001832580) (0xc0031417c0) Stream added, broadcasting: 3
I0401 13:39:45.265518 6 log.go:172] (0xc001832580) Reply frame received for 3
I0401 13:39:45.265540 6 log.go:172] (0xc001832580) (0xc002097b80) Create stream
I0401 13:39:45.265554 6 log.go:172] (0xc001832580) (0xc002097b80) Stream added, broadcasting: 5
I0401 13:39:45.266384 6 log.go:172] (0xc001832580) Reply frame received for 5
I0401 13:39:46.326433 6 log.go:172] (0xc001832580) Data frame received for 3
I0401 13:39:46.326466 6 log.go:172] (0xc0031417c0) (3) Data frame handling
I0401 13:39:46.326483 6 log.go:172] (0xc0031417c0) (3) Data frame sent
I0401 13:39:46.326493 6 log.go:172] (0xc001832580) Data frame received for 3
I0401 13:39:46.326511 6 log.go:172] (0xc0031417c0) (3) Data frame handling
I0401 13:39:46.326545 6 log.go:172] (0xc001832580) Data frame received for 5
I0401 13:39:46.326575 6 log.go:172] (0xc002097b80) (5) Data frame handling
I0401 13:39:46.328657 6 log.go:172] (0xc001832580) Data frame received for 1
I0401 13:39:46.328684 6 log.go:172] (0xc002097ae0) (1) Data frame handling
I0401 13:39:46.328704 6 log.go:172] (0xc002097ae0) (1) Data frame sent
I0401 13:39:46.328743 6 log.go:172] (0xc001832580) (0xc002097ae0) Stream removed, broadcasting: 1
I0401 13:39:46.328929 6 log.go:172] (0xc001832580) (0xc002097ae0) Stream removed, broadcasting: 1
I0401 13:39:46.328973 6 log.go:172] (0xc001832580) (0xc0031417c0) Stream removed, broadcasting: 3
I0401 13:39:46.328994 6 log.go:172] (0xc001832580) (0xc002097b80) Stream removed, broadcasting: 5
Apr 1 13:39:46.329: INFO: Found all expected endpoints: [netserver-0]
I0401 13:39:46.329069 6 log.go:172] (0xc001832580) Go away received
Apr 1 13:39:46.332: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.206 8081 | grep -v '^\s*$'] Namespace:pod-network-test-648 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 1 13:39:46.332: INFO: >>> kubeConfig: /root/.kube/config
I0401 13:39:46.368297 6 log.go:172] (0xc001d0a160) (0xc00058e8c0) Create stream
I0401 13:39:46.368322 6 log.go:172] (0xc001d0a160) (0xc00058e8c0) Stream added, broadcasting: 1
I0401 13:39:46.370314 6 log.go:172] (0xc001d0a160) Reply frame received for 1
I0401 13:39:46.370350 6 log.go:172] (0xc001d0a160) (0xc003141900) Create stream
I0401 13:39:46.370365 6 log.go:172] (0xc001d0a160) (0xc003141900) Stream added, broadcasting: 3
I0401 13:39:46.371438 6 log.go:172] (0xc001d0a160) Reply frame received for 3
I0401 13:39:46.371473 6 log.go:172] (0xc001d0a160) (0xc001654000) Create stream
I0401 13:39:46.371482 6 log.go:172] (0xc001d0a160) (0xc001654000) Stream added, broadcasting: 5
I0401 13:39:46.372552 6 log.go:172] (0xc001d0a160) Reply frame received for 5
I0401 13:39:47.452984 6 log.go:172] (0xc001d0a160) Data frame received for 5
I0401 13:39:47.453052 6 log.go:172] (0xc001654000) (5) Data frame handling
I0401 13:39:47.453106 6 log.go:172] (0xc001d0a160) Data frame received for 3
I0401 13:39:47.453226 6 log.go:172] (0xc003141900) (3) Data frame handling
I0401 13:39:47.453484 6 log.go:172] (0xc003141900) (3) Data frame sent
I0401 13:39:47.453518 6 log.go:172] (0xc001d0a160) Data frame received for 3
I0401 13:39:47.453537 6 log.go:172] (0xc003141900) (3) Data frame handling
I0401 13:39:47.454912 6 log.go:172] (0xc001d0a160) Data frame received for 1
I0401 13:39:47.454945 6 log.go:172] (0xc00058e8c0) (1) Data frame handling
I0401 13:39:47.454960 6 log.go:172] (0xc00058e8c0) (1) Data frame sent
I0401 13:39:47.454980 6 log.go:172] (0xc001d0a160) (0xc00058e8c0) Stream removed, broadcasting: 1
I0401 13:39:47.454998 6 log.go:172] (0xc001d0a160) Go away received
I0401 13:39:47.455065 6 log.go:172] (0xc001d0a160) (0xc00058e8c0) Stream removed, broadcasting: 1
I0401 13:39:47.455082 6 log.go:172] (0xc001d0a160) (0xc003141900) Stream removed, broadcasting: 3
I0401 13:39:47.455101 6 log.go:172] (0xc001d0a160) (0xc001654000) Stream removed, broadcasting: 5
Apr 1 13:39:47.455: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 13:39:47.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-648" for this suite.
Apr 1 13:40:09.484: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 13:40:09.581: INFO: namespace pod-network-test-648 deletion completed in 22.122657452s
• [SLOW TEST:50.510 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 13:40:09.581: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 13:40:43.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8897" for this suite.
Apr 1 13:40:49.231: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 13:40:49.311: INFO: namespace container-runtime-8897 deletion completed in 6.094857519s
• [SLOW TEST:39.730 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 13:40:49.311: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 1 13:40:49.397: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8ba7d3fc-46ee-4f4a-9337-37f5b9f968c6" in namespace "downward-api-154" to be "success or failure"
Apr 1 13:40:49.400: INFO: Pod "downwardapi-volume-8ba7d3fc-46ee-4f4a-9337-37f5b9f968c6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.563557ms
Apr 1 13:40:51.404: INFO: Pod "downwardapi-volume-8ba7d3fc-46ee-4f4a-9337-37f5b9f968c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007283563s
Apr 1 13:40:53.408: INFO: Pod "downwardapi-volume-8ba7d3fc-46ee-4f4a-9337-37f5b9f968c6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011863258s
STEP: Saw pod success
Apr 1 13:40:53.409: INFO: Pod "downwardapi-volume-8ba7d3fc-46ee-4f4a-9337-37f5b9f968c6" satisfied condition "success or failure"
Apr 1 13:40:53.411: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-8ba7d3fc-46ee-4f4a-9337-37f5b9f968c6 container client-container:
STEP: delete the pod
Apr 1 13:40:53.437: INFO: Waiting for pod downwardapi-volume-8ba7d3fc-46ee-4f4a-9337-37f5b9f968c6 to disappear
Apr 1 13:40:53.448: INFO: Pod downwardapi-volume-8ba7d3fc-46ee-4f4a-9337-37f5b9f968c6 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 13:40:53.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-154" for this suite.
Apr 1 13:40:59.475: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 13:40:59.557: INFO: namespace downward-api-154 deletion completed in 6.105183858s
• [SLOW TEST:10.245 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-api-machinery] Garbage collector
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 13:40:59.557: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0401 13:41:00.707028 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 1 13:41:00.707: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 13:41:00.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9313" for this suite.
Apr 1 13:41:06.803: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 13:41:06.882: INFO: namespace gc-9313 deletion completed in 6.172101328s

• [SLOW TEST:7.325 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 13:41:06.883: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 1 13:41:07.006: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"ad7d6ac6-4c2a-4113-9988-6617dc87c7b1", Controller:(*bool)(0xc00313f33a), BlockOwnerDeletion:(*bool)(0xc00313f33b)}}
Apr 1 13:41:07.035: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"c16afe3b-1cba-4600-9ab0-0f40266f3272", Controller:(*bool)(0xc002a2c742), BlockOwnerDeletion:(*bool)(0xc002a2c743)}}
Apr 1 13:41:07.062: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"9d27dcfc-d05d-49bb-b653-bccfa869b8db", Controller:(*bool)(0xc002a2c90a), BlockOwnerDeletion:(*bool)(0xc002a2c90b)}}
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 13:41:12.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1205" for this suite.
Apr 1 13:41:18.170: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 13:41:18.250: INFO: namespace gc-1205 deletion completed in 6.111717238s

• [SLOW TEST:11.367 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 13:41:18.250: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create a job from an image, then delete the job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: executing a command with run --rm and attach with stdin
Apr 1 13:41:18.293: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2693 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Apr 1 13:41:21.186: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0401 13:41:21.106187 2300 log.go:172] (0xc00098a160) (0xc0009b6140) Create stream\nI0401 13:41:21.106234 2300 log.go:172] (0xc00098a160) (0xc0009b6140) Stream added, broadcasting: 1\nI0401 13:41:21.110582 2300 log.go:172] (0xc00098a160) Reply frame received for 1\nI0401 13:41:21.110628 2300 log.go:172] (0xc00098a160) (0xc0007aac80) Create stream\nI0401 13:41:21.110641 2300 log.go:172] (0xc00098a160) (0xc0007aac80) Stream added, broadcasting: 3\nI0401 13:41:21.111512 2300 log.go:172] (0xc00098a160) Reply frame received for 3\nI0401 13:41:21.111580 2300 log.go:172] (0xc00098a160) (0xc0009b6000) Create stream\nI0401 13:41:21.111613 2300 log.go:172] (0xc00098a160) (0xc0009b6000) Stream added, broadcasting: 5\nI0401 13:41:21.112494 2300 log.go:172] (0xc00098a160) Reply frame received for 5\nI0401 13:41:21.112531 2300 log.go:172] (0xc00098a160) (0xc000918000) Create stream\nI0401 13:41:21.112541 2300 log.go:172] (0xc00098a160) (0xc000918000) Stream added, broadcasting: 7\nI0401 13:41:21.113645 2300 log.go:172] (0xc00098a160) Reply frame received for 7\nI0401 13:41:21.113764 2300 log.go:172] (0xc0007aac80) (3) Writing data frame\nI0401 13:41:21.113908 2300 log.go:172] (0xc0007aac80) (3) Writing data frame\nI0401 13:41:21.114808 2300 log.go:172] (0xc00098a160) Data frame received for 5\nI0401 13:41:21.114835 2300 log.go:172] (0xc0009b6000) (5) Data frame handling\nI0401 13:41:21.114854 2300 log.go:172] (0xc0009b6000) (5) Data frame sent\nI0401 13:41:21.115439 2300 log.go:172] (0xc00098a160) Data frame received for 5\nI0401 13:41:21.115458 2300 log.go:172] (0xc0009b6000) (5) Data frame handling\nI0401 13:41:21.115475 2300 log.go:172] (0xc0009b6000) (5) Data frame sent\nI0401 13:41:21.164466 2300 log.go:172] (0xc00098a160) Data frame received for 5\nI0401 13:41:21.164513 2300 log.go:172] (0xc0009b6000) (5) Data frame handling\nI0401 13:41:21.164825 2300 log.go:172] (0xc00098a160) Data frame received for 7\nI0401 13:41:21.164874 2300 log.go:172] (0xc000918000) (7) Data frame handling\nI0401 13:41:21.165269 2300 log.go:172] (0xc00098a160) Data frame received for 1\nI0401 13:41:21.165441 2300 log.go:172] (0xc0009b6140) (1) Data frame handling\nI0401 13:41:21.165473 2300 log.go:172] (0xc0009b6140) (1) Data frame sent\nI0401 13:41:21.165593 2300 log.go:172] (0xc00098a160) (0xc0009b6140) Stream removed, broadcasting: 1\nI0401 13:41:21.165717 2300 log.go:172] (0xc00098a160) (0xc0009b6140) Stream removed, broadcasting: 1\nI0401 13:41:21.165747 2300 log.go:172] (0xc00098a160) (0xc0007aac80) Stream removed, broadcasting: 3\nI0401 13:41:21.165766 2300 log.go:172] (0xc00098a160) (0xc0009b6000) Stream removed, broadcasting: 5\nI0401 13:41:21.165891 2300 log.go:172] (0xc00098a160) (0xc000918000) Stream removed, broadcasting: 7\nI0401 13:41:21.165980 2300 log.go:172] (0xc00098a160) Go away received\n"
Apr 1 13:41:21.186: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 13:41:23.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2693" for this suite.
Apr 1 13:41:33.213: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 13:41:33.290: INFO: namespace kubectl-2693 deletion completed in 10.093677046s

• [SLOW TEST:15.040 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image, then delete the job [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 13:41:33.291: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 13:41:37.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-3008" for this suite.
Apr 1 13:42:27.425: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 13:42:27.500: INFO: namespace kubelet-test-3008 deletion completed in 50.104736338s

• [SLOW TEST:54.209 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 13:42:27.500: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with configMap that has name projected-configmap-test-upd-69106000-fcb8-489c-be71-54048d1a463e
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-69106000-fcb8-489c-be71-54048d1a463e
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 13:43:53.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2210" for this suite.
Apr 1 13:44:15.969: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 13:44:16.044: INFO: namespace projected-2210 deletion completed in 22.09081582s

• [SLOW TEST:108.543 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job should delete a job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 13:44:16.044: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-540, will wait for the garbage collector to delete the pods
Apr 1 13:44:20.207: INFO: Deleting Job.batch foo took: 6.373495ms
Apr 1 13:44:20.507: INFO: Terminating Job.batch foo pods took: 300.234743ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 13:45:02.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-540" for this suite.
Apr 1 13:45:08.272: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 13:45:08.352: INFO: namespace job-540 deletion completed in 6.137753822s

• [SLOW TEST:52.308 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 13:45:08.352: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 1 13:45:08.408: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7675021f-2f8a-4542-9e3e-eb62388b4dcb" in namespace "projected-843" to be "success or failure"
Apr 1 13:45:08.411: INFO: Pod "downwardapi-volume-7675021f-2f8a-4542-9e3e-eb62388b4dcb": Phase="Pending", Reason="", readiness=false. Elapsed: 3.037295ms
Apr 1 13:45:10.415: INFO: Pod "downwardapi-volume-7675021f-2f8a-4542-9e3e-eb62388b4dcb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00738677s
Apr 1 13:45:12.419: INFO: Pod "downwardapi-volume-7675021f-2f8a-4542-9e3e-eb62388b4dcb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011583679s
STEP: Saw pod success
Apr 1 13:45:12.419: INFO: Pod "downwardapi-volume-7675021f-2f8a-4542-9e3e-eb62388b4dcb" satisfied condition "success or failure"
Apr 1 13:45:12.423: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-7675021f-2f8a-4542-9e3e-eb62388b4dcb container client-container:
STEP: delete the pod
Apr 1 13:45:12.443: INFO: Waiting for pod downwardapi-volume-7675021f-2f8a-4542-9e3e-eb62388b4dcb to disappear
Apr 1 13:45:12.447: INFO: Pod downwardapi-volume-7675021f-2f8a-4542-9e3e-eb62388b4dcb no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 13:45:12.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-843" for this suite.
Apr 1 13:45:18.476: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 13:45:18.603: INFO: namespace projected-843 deletion completed in 6.152757565s

• [SLOW TEST:10.251 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 13:45:18.604: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-1682
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 1 13:45:18.654: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Apr 1 13:45:40.809: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.216:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-1682 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 1 13:45:40.809: INFO: >>> kubeConfig: /root/.kube/config
I0401 13:45:40.847757 6 log.go:172] (0xc000d526e0) (0xc001e69e00) Create stream
I0401 13:45:40.847793 6 log.go:172] (0xc000d526e0) (0xc001e69e00) Stream added, broadcasting: 1
I0401 13:45:40.850517 6 log.go:172] (0xc000d526e0) Reply frame received for 1
I0401 13:45:40.850558 6 log.go:172] (0xc000d526e0) (0xc000237220) Create stream
I0401 13:45:40.850573 6 log.go:172] (0xc000d526e0) (0xc000237220) Stream added, broadcasting: 3
I0401 13:45:40.851508 6 log.go:172] (0xc000d526e0) Reply frame received for 3
I0401 13:45:40.851550 6 log.go:172] (0xc000d526e0) (0xc0016541e0) Create stream
I0401 13:45:40.851568 6 log.go:172] (0xc000d526e0) (0xc0016541e0) Stream added, broadcasting: 5
I0401 13:45:40.852654 6 log.go:172] (0xc000d526e0) Reply frame received for 5
I0401 13:45:40.959649 6 log.go:172] (0xc000d526e0) Data frame received for 3
I0401 13:45:40.959672 6 log.go:172] (0xc000237220) (3) Data frame handling
I0401 13:45:40.959691 6 log.go:172] (0xc000237220) (3) Data frame sent
I0401 13:45:40.959711 6 log.go:172] (0xc000d526e0) Data frame received for 3
I0401 13:45:40.959724 6 log.go:172] (0xc000237220) (3) Data frame handling
I0401 13:45:40.960113 6 log.go:172] (0xc000d526e0) Data frame received for 5
I0401 13:45:40.960136 6 log.go:172] (0xc0016541e0) (5) Data frame handling
I0401 13:45:40.961754 6 log.go:172] (0xc000d526e0) Data frame received for 1
I0401 13:45:40.961780 6 log.go:172] (0xc001e69e00) (1) Data frame handling
I0401 13:45:40.961801 6 log.go:172] (0xc001e69e00) (1) Data frame sent
I0401 13:45:40.961817 6 log.go:172] (0xc000d526e0) (0xc001e69e00) Stream removed, broadcasting: 1
I0401 13:45:40.961928 6 log.go:172] (0xc000d526e0) Go away received
I0401 13:45:40.961979 6 log.go:172] (0xc000d526e0) (0xc001e69e00) Stream removed, broadcasting: 1
I0401 13:45:40.962011 6 log.go:172] (0xc000d526e0) (0xc000237220) Stream removed, broadcasting: 3
I0401 13:45:40.962223 6 log.go:172] (0xc000d526e0) (0xc0016541e0) Stream removed, broadcasting: 5
Apr 1 13:45:40.962: INFO: Found all expected endpoints: [netserver-0]
Apr 1 13:45:40.965: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.196:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-1682 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 1 13:45:40.965: INFO: >>> kubeConfig: /root/.kube/config
I0401 13:45:41.006939 6 log.go:172] (0xc0019146e0) (0xc001654500) Create stream
I0401 13:45:41.006959 6 log.go:172] (0xc0019146e0) (0xc001654500) Stream added, broadcasting: 1
I0401 13:45:41.008677 6 log.go:172] (0xc0019146e0) Reply frame received for 1
I0401 13:45:41.008719 6 log.go:172] (0xc0019146e0) (0xc00222ebe0) Create stream
I0401 13:45:41.008726 6 log.go:172] (0xc0019146e0) (0xc00222ebe0) Stream added, broadcasting: 3
I0401 13:45:41.009792 6 log.go:172] (0xc0019146e0) Reply frame received for 3
I0401 13:45:41.009825 6 log.go:172] (0xc0019146e0) (0xc000237720) Create stream
I0401 13:45:41.009838 6 log.go:172] (0xc0019146e0) (0xc000237720) Stream added, broadcasting: 5
I0401 13:45:41.010568 6 log.go:172] (0xc0019146e0) Reply frame received for 5
I0401 13:45:41.076409 6 log.go:172] (0xc0019146e0) Data frame received for 3
I0401 13:45:41.076446 6 log.go:172] (0xc00222ebe0) (3) Data frame handling
I0401 13:45:41.076456 6 log.go:172] (0xc00222ebe0) (3) Data frame sent
I0401 13:45:41.076466 6 log.go:172] (0xc0019146e0) Data frame received for 3
I0401 13:45:41.076472 6 log.go:172] (0xc00222ebe0) (3) Data frame handling
I0401 13:45:41.076496 6 log.go:172] (0xc0019146e0) Data frame received for 5
I0401 13:45:41.076503 6 log.go:172] (0xc000237720) (5) Data frame handling
I0401 13:45:41.078390 6 log.go:172] (0xc0019146e0) Data frame received for 1
I0401 13:45:41.078412 6 log.go:172] (0xc001654500) (1) Data frame handling
I0401 13:45:41.078429 6 log.go:172] (0xc001654500) (1) Data frame sent
I0401 13:45:41.078445 6 log.go:172] (0xc0019146e0) (0xc001654500) Stream removed, broadcasting: 1
I0401 13:45:41.078523 6 log.go:172] (0xc0019146e0) (0xc001654500) Stream removed, broadcasting: 1
I0401 13:45:41.078536 6 log.go:172] (0xc0019146e0) (0xc00222ebe0) Stream removed, broadcasting: 3
I0401 13:45:41.078611 6 log.go:172] (0xc0019146e0) Go away received
I0401 13:45:41.078709 6 log.go:172] (0xc0019146e0) (0xc000237720) Stream removed, broadcasting: 5
Apr 1 13:45:41.078: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 13:45:41.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-1682" for this suite.
Apr 1 13:46:03.144: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 13:46:03.216: INFO: namespace pod-network-test-1682 deletion completed in 22.133107517s

• [SLOW TEST:44.612 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 13:46:03.217: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-downwardapi-k4cj
STEP: Creating a pod to test atomic-volume-subpath
Apr 1 13:46:03.294: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-k4cj" in namespace "subpath-2544" to be "success or failure"
Apr 1 13:46:03.298: INFO: Pod "pod-subpath-test-downwardapi-k4cj": Phase="Pending", Reason="", readiness=false. Elapsed: 3.785896ms
Apr 1 13:46:05.324: INFO: Pod "pod-subpath-test-downwardapi-k4cj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030006542s
Apr 1 13:46:07.330: INFO: Pod "pod-subpath-test-downwardapi-k4cj": Phase="Running", Reason="", readiness=true. Elapsed: 4.035202888s
Apr 1 13:46:09.334: INFO: Pod "pod-subpath-test-downwardapi-k4cj": Phase="Running", Reason="", readiness=true. Elapsed: 6.039500216s
Apr 1 13:46:11.338: INFO: Pod "pod-subpath-test-downwardapi-k4cj": Phase="Running", Reason="", readiness=true. Elapsed: 8.043215174s
Apr 1 13:46:13.341: INFO: Pod "pod-subpath-test-downwardapi-k4cj": Phase="Running", Reason="", readiness=true. Elapsed: 10.047008379s
Apr 1 13:46:15.344: INFO: Pod "pod-subpath-test-downwardapi-k4cj": Phase="Running", Reason="", readiness=true. Elapsed: 12.049884124s
Apr 1 13:46:17.348: INFO: Pod "pod-subpath-test-downwardapi-k4cj": Phase="Running", Reason="", readiness=true. Elapsed: 14.053824282s
Apr 1 13:46:19.443: INFO: Pod "pod-subpath-test-downwardapi-k4cj": Phase="Running", Reason="", readiness=true. Elapsed: 16.1486232s
Apr 1 13:46:21.448: INFO: Pod "pod-subpath-test-downwardapi-k4cj": Phase="Running", Reason="", readiness=true. Elapsed: 18.153217427s
Apr 1 13:46:23.451: INFO: Pod "pod-subpath-test-downwardapi-k4cj": Phase="Running", Reason="", readiness=true. Elapsed: 20.156643917s
Apr 1 13:46:25.454: INFO: Pod "pod-subpath-test-downwardapi-k4cj": Phase="Running", Reason="", readiness=true. Elapsed: 22.159896029s
Apr 1 13:46:27.465: INFO: Pod "pod-subpath-test-downwardapi-k4cj": Phase="Running", Reason="", readiness=true. Elapsed: 24.17065985s
Apr 1 13:46:29.469: INFO: Pod "pod-subpath-test-downwardapi-k4cj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.174655814s
STEP: Saw pod success
Apr 1 13:46:29.469: INFO: Pod "pod-subpath-test-downwardapi-k4cj" satisfied condition "success or failure"
Apr 1 13:46:29.472: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-downwardapi-k4cj container test-container-subpath-downwardapi-k4cj:
STEP: delete the pod
Apr 1 13:46:29.793: INFO: Waiting for pod pod-subpath-test-downwardapi-k4cj to disappear
Apr 1 13:46:29.830: INFO: Pod pod-subpath-test-downwardapi-k4cj no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-k4cj
Apr 1 13:46:29.830: INFO: Deleting pod "pod-subpath-test-downwardapi-k4cj" in namespace "subpath-2544"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 13:46:29.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2544" for this suite.
Apr 1 13:46:35.866: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 13:46:36.017: INFO: namespace subpath-2544 deletion completed in 6.179882696s • [SLOW TEST:32.801 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 13:46:36.018: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-4882 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet Apr 1 13:46:36.294: INFO: Found 0 stateful pods, waiting for 3 Apr 1 
13:46:46.300: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 1 13:46:46.300: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 1 13:46:46.300: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Apr 1 13:46:56.299: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 1 13:46:56.299: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 1 13:46:56.299: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Apr 1 13:46:56.310: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4882 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 1 13:46:58.931: INFO: stderr: "I0401 13:46:58.804305 2322 log.go:172] (0xc000ac4420) (0xc0005b2a00) Create stream\nI0401 13:46:58.804337 2322 log.go:172] (0xc000ac4420) (0xc0005b2a00) Stream added, broadcasting: 1\nI0401 13:46:58.806860 2322 log.go:172] (0xc000ac4420) Reply frame received for 1\nI0401 13:46:58.806927 2322 log.go:172] (0xc000ac4420) (0xc00098e000) Create stream\nI0401 13:46:58.806956 2322 log.go:172] (0xc000ac4420) (0xc00098e000) Stream added, broadcasting: 3\nI0401 13:46:58.807957 2322 log.go:172] (0xc000ac4420) Reply frame received for 3\nI0401 13:46:58.807986 2322 log.go:172] (0xc000ac4420) (0xc00098e0a0) Create stream\nI0401 13:46:58.807996 2322 log.go:172] (0xc000ac4420) (0xc00098e0a0) Stream added, broadcasting: 5\nI0401 13:46:58.808956 2322 log.go:172] (0xc000ac4420) Reply frame received for 5\nI0401 13:46:58.896683 2322 log.go:172] (0xc000ac4420) Data frame received for 5\nI0401 13:46:58.896717 2322 log.go:172] (0xc00098e0a0) (5) Data frame handling\nI0401 13:46:58.896739 2322 log.go:172] (0xc00098e0a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0401 13:46:58.926565 2322 
log.go:172] (0xc000ac4420) Data frame received for 3\nI0401 13:46:58.926583 2322 log.go:172] (0xc00098e000) (3) Data frame handling\nI0401 13:46:58.926591 2322 log.go:172] (0xc00098e000) (3) Data frame sent\nI0401 13:46:58.926598 2322 log.go:172] (0xc000ac4420) Data frame received for 3\nI0401 13:46:58.926622 2322 log.go:172] (0xc000ac4420) Data frame received for 5\nI0401 13:46:58.926649 2322 log.go:172] (0xc00098e0a0) (5) Data frame handling\nI0401 13:46:58.926692 2322 log.go:172] (0xc00098e000) (3) Data frame handling\nI0401 13:46:58.928139 2322 log.go:172] (0xc000ac4420) Data frame received for 1\nI0401 13:46:58.928151 2322 log.go:172] (0xc0005b2a00) (1) Data frame handling\nI0401 13:46:58.928157 2322 log.go:172] (0xc0005b2a00) (1) Data frame sent\nI0401 13:46:58.928180 2322 log.go:172] (0xc000ac4420) (0xc0005b2a00) Stream removed, broadcasting: 1\nI0401 13:46:58.928219 2322 log.go:172] (0xc000ac4420) Go away received\nI0401 13:46:58.928414 2322 log.go:172] (0xc000ac4420) (0xc0005b2a00) Stream removed, broadcasting: 1\nI0401 13:46:58.928426 2322 log.go:172] (0xc000ac4420) (0xc00098e000) Stream removed, broadcasting: 3\nI0401 13:46:58.928431 2322 log.go:172] (0xc000ac4420) (0xc00098e0a0) Stream removed, broadcasting: 5\n" Apr 1 13:46:58.931: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 1 13:46:58.931: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Apr 1 13:47:08.963: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Apr 1 13:47:19.000: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4882 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 1 13:47:19.219: INFO: stderr: "I0401 
13:47:19.125576 2356 log.go:172] (0xc000a526e0) (0xc0003fe820) Create stream\nI0401 13:47:19.125616 2356 log.go:172] (0xc000a526e0) (0xc0003fe820) Stream added, broadcasting: 1\nI0401 13:47:19.127862 2356 log.go:172] (0xc000a526e0) Reply frame received for 1\nI0401 13:47:19.127918 2356 log.go:172] (0xc000a526e0) (0xc000a22000) Create stream\nI0401 13:47:19.128007 2356 log.go:172] (0xc000a526e0) (0xc000a22000) Stream added, broadcasting: 3\nI0401 13:47:19.129952 2356 log.go:172] (0xc000a526e0) Reply frame received for 3\nI0401 13:47:19.129981 2356 log.go:172] (0xc000a526e0) (0xc0003fe000) Create stream\nI0401 13:47:19.129995 2356 log.go:172] (0xc000a526e0) (0xc0003fe000) Stream added, broadcasting: 5\nI0401 13:47:19.130916 2356 log.go:172] (0xc000a526e0) Reply frame received for 5\nI0401 13:47:19.213060 2356 log.go:172] (0xc000a526e0) Data frame received for 5\nI0401 13:47:19.213324 2356 log.go:172] (0xc0003fe000) (5) Data frame handling\nI0401 13:47:19.213381 2356 log.go:172] (0xc0003fe000) (5) Data frame sent\nI0401 13:47:19.213404 2356 log.go:172] (0xc000a526e0) Data frame received for 5\nI0401 13:47:19.213420 2356 log.go:172] (0xc0003fe000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0401 13:47:19.213439 2356 log.go:172] (0xc000a526e0) Data frame received for 3\nI0401 13:47:19.213501 2356 log.go:172] (0xc000a22000) (3) Data frame handling\nI0401 13:47:19.213524 2356 log.go:172] (0xc000a22000) (3) Data frame sent\nI0401 13:47:19.213535 2356 log.go:172] (0xc000a526e0) Data frame received for 3\nI0401 13:47:19.213543 2356 log.go:172] (0xc000a22000) (3) Data frame handling\nI0401 13:47:19.214942 2356 log.go:172] (0xc000a526e0) Data frame received for 1\nI0401 13:47:19.214971 2356 log.go:172] (0xc0003fe820) (1) Data frame handling\nI0401 13:47:19.214985 2356 log.go:172] (0xc0003fe820) (1) Data frame sent\nI0401 13:47:19.215013 2356 log.go:172] (0xc000a526e0) (0xc0003fe820) Stream removed, broadcasting: 1\nI0401 13:47:19.215028 2356 
log.go:172] (0xc000a526e0) Go away received\nI0401 13:47:19.215492 2356 log.go:172] (0xc000a526e0) (0xc0003fe820) Stream removed, broadcasting: 1\nI0401 13:47:19.215514 2356 log.go:172] (0xc000a526e0) (0xc000a22000) Stream removed, broadcasting: 3\nI0401 13:47:19.215525 2356 log.go:172] (0xc000a526e0) (0xc0003fe000) Stream removed, broadcasting: 5\n"
Apr 1 13:47:19.219: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Apr 1 13:47:19.219: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Apr 1 13:47:29.238: INFO: Waiting for StatefulSet statefulset-4882/ss2 to complete update
Apr 1 13:47:29.239: INFO: Waiting for Pod statefulset-4882/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Apr 1 13:47:29.239: INFO: Waiting for Pod statefulset-4882/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Apr 1 13:47:29.239: INFO: Waiting for Pod statefulset-4882/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Apr 1 13:47:39.247: INFO: Waiting for StatefulSet statefulset-4882/ss2 to complete update
Apr 1 13:47:39.247: INFO: Waiting for Pod statefulset-4882/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Apr 1 13:47:39.247: INFO: Waiting for Pod statefulset-4882/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Apr 1 13:47:49.247: INFO: Waiting for StatefulSet statefulset-4882/ss2 to complete update
Apr 1 13:47:49.247: INFO: Waiting for Pod statefulset-4882/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Rolling back to a previous revision
Apr 1 13:47:59.246: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4882 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Apr 1 13:47:59.499: INFO: stderr: "I0401 13:47:59.381804 2376 log.go:172] (0xc000116d10) (0xc0002b8820) Create 
stream\nI0401 13:47:59.381872 2376 log.go:172] (0xc000116d10) (0xc0002b8820) Stream added, broadcasting: 1\nI0401 13:47:59.384240 2376 log.go:172] (0xc000116d10) Reply frame received for 1\nI0401 13:47:59.384282 2376 log.go:172] (0xc000116d10) (0xc0002b88c0) Create stream\nI0401 13:47:59.384292 2376 log.go:172] (0xc000116d10) (0xc0002b88c0) Stream added, broadcasting: 3\nI0401 13:47:59.385292 2376 log.go:172] (0xc000116d10) Reply frame received for 3\nI0401 13:47:59.385327 2376 log.go:172] (0xc000116d10) (0xc000384320) Create stream\nI0401 13:47:59.385341 2376 log.go:172] (0xc000116d10) (0xc000384320) Stream added, broadcasting: 5\nI0401 13:47:59.386210 2376 log.go:172] (0xc000116d10) Reply frame received for 5\nI0401 13:47:59.460792 2376 log.go:172] (0xc000116d10) Data frame received for 5\nI0401 13:47:59.460837 2376 log.go:172] (0xc000384320) (5) Data frame handling\nI0401 13:47:59.460871 2376 log.go:172] (0xc000384320) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0401 13:47:59.492284 2376 log.go:172] (0xc000116d10) Data frame received for 3\nI0401 13:47:59.492327 2376 log.go:172] (0xc0002b88c0) (3) Data frame handling\nI0401 13:47:59.492369 2376 log.go:172] (0xc0002b88c0) (3) Data frame sent\nI0401 13:47:59.492605 2376 log.go:172] (0xc000116d10) Data frame received for 3\nI0401 13:47:59.492635 2376 log.go:172] (0xc0002b88c0) (3) Data frame handling\nI0401 13:47:59.492874 2376 log.go:172] (0xc000116d10) Data frame received for 5\nI0401 13:47:59.492903 2376 log.go:172] (0xc000384320) (5) Data frame handling\nI0401 13:47:59.494904 2376 log.go:172] (0xc000116d10) Data frame received for 1\nI0401 13:47:59.494931 2376 log.go:172] (0xc0002b8820) (1) Data frame handling\nI0401 13:47:59.494959 2376 log.go:172] (0xc0002b8820) (1) Data frame sent\nI0401 13:47:59.494983 2376 log.go:172] (0xc000116d10) (0xc0002b8820) Stream removed, broadcasting: 1\nI0401 13:47:59.495023 2376 log.go:172] (0xc000116d10) Go away received\nI0401 13:47:59.495336 2376 
log.go:172] (0xc000116d10) (0xc0002b8820) Stream removed, broadcasting: 1\nI0401 13:47:59.495358 2376 log.go:172] (0xc000116d10) (0xc0002b88c0) Stream removed, broadcasting: 3\nI0401 13:47:59.495370 2376 log.go:172] (0xc000116d10) (0xc000384320) Stream removed, broadcasting: 5\n"
Apr 1 13:47:59.500: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Apr 1 13:47:59.500: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Apr 1 13:48:09.532: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Apr 1 13:48:19.603: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4882 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Apr 1 13:48:19.851: INFO: stderr: "I0401 13:48:19.754367 2397 log.go:172] (0xc00013cdc0) (0xc00054a820) Create stream\nI0401 13:48:19.754460 2397 log.go:172] (0xc00013cdc0) (0xc00054a820) Stream added, broadcasting: 1\nI0401 13:48:19.758494 2397 log.go:172] (0xc00013cdc0) Reply frame received for 1\nI0401 13:48:19.758529 2397 log.go:172] (0xc00013cdc0) (0xc00054a000) Create stream\nI0401 13:48:19.758540 2397 log.go:172] (0xc00013cdc0) (0xc00054a000) Stream added, broadcasting: 3\nI0401 13:48:19.759436 2397 log.go:172] (0xc00013cdc0) Reply frame received for 3\nI0401 13:48:19.759465 2397 log.go:172] (0xc00013cdc0) (0xc0005bc280) Create stream\nI0401 13:48:19.759473 2397 log.go:172] (0xc00013cdc0) (0xc0005bc280) Stream added, broadcasting: 5\nI0401 13:48:19.760377 2397 log.go:172] (0xc00013cdc0) Reply frame received for 5\nI0401 13:48:19.845013 2397 log.go:172] (0xc00013cdc0) Data frame received for 5\nI0401 13:48:19.845053 2397 log.go:172] (0xc0005bc280) (5) Data frame handling\nI0401 13:48:19.845066 2397 log.go:172] (0xc0005bc280) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0401 13:48:19.845077 2397 log.go:172] (0xc00013cdc0) Data frame received for 5\nI0401 13:48:19.845313 2397 log.go:172] (0xc0005bc280) (5) Data frame handling\nI0401 13:48:19.845344 2397 log.go:172] (0xc00013cdc0) Data frame received for 3\nI0401 13:48:19.845358 2397 log.go:172] (0xc00054a000) (3) Data frame handling\nI0401 13:48:19.845377 2397 log.go:172] (0xc00054a000) (3) Data frame sent\nI0401 13:48:19.845391 2397 log.go:172] (0xc00013cdc0) Data frame received for 3\nI0401 13:48:19.845401 2397 log.go:172] (0xc00054a000) (3) Data frame handling\nI0401 13:48:19.846853 2397 log.go:172] (0xc00013cdc0) Data frame received for 1\nI0401 13:48:19.846890 2397 log.go:172] (0xc00054a820) (1) Data frame handling\nI0401 13:48:19.846921 2397 log.go:172] (0xc00054a820) (1) Data frame sent\nI0401 13:48:19.846945 2397 log.go:172] (0xc00013cdc0) (0xc00054a820) Stream removed, broadcasting: 1\nI0401 13:48:19.846976 2397 log.go:172] (0xc00013cdc0) Go away received\nI0401 13:48:19.847254 2397 log.go:172] (0xc00013cdc0) (0xc00054a820) Stream removed, broadcasting: 1\nI0401 13:48:19.847282 2397 log.go:172] (0xc00013cdc0) (0xc00054a000) Stream removed, broadcasting: 3\nI0401 13:48:19.847289 2397 log.go:172] (0xc00013cdc0) (0xc0005bc280) Stream removed, broadcasting: 5\n"
Apr 1 13:48:19.851: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Apr 1 13:48:19.851: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Apr 1 13:48:29.872: INFO: Waiting for StatefulSet statefulset-4882/ss2 to complete update
Apr 1 13:48:29.872: INFO: Waiting for Pod statefulset-4882/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Apr 1 13:48:29.872: INFO: Waiting for Pod statefulset-4882/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Apr 1 13:48:29.872: INFO: Waiting for Pod statefulset-4882/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Apr 1 13:48:39.880: INFO: Waiting for 
StatefulSet statefulset-4882/ss2 to complete update
Apr 1 13:48:39.880: INFO: Waiting for Pod statefulset-4882/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Apr 1 13:48:39.880: INFO: Waiting for Pod statefulset-4882/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Apr 1 13:48:49.879: INFO: Waiting for StatefulSet statefulset-4882/ss2 to complete update
Apr 1 13:48:49.880: INFO: Waiting for Pod statefulset-4882/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Apr 1 13:48:59.880: INFO: Deleting all statefulset in ns statefulset-4882
Apr 1 13:48:59.883: INFO: Scaling statefulset ss2 to 0
Apr 1 13:49:39.900: INFO: Waiting for statefulset status.replicas updated to 0
Apr 1 13:49:39.904: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 13:49:39.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-4882" for this suite.
Apr 1 13:49:45.943: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 13:49:46.028: INFO: namespace statefulset-4882 deletion completed in 6.106186834s
• [SLOW TEST:190.011 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should perform rolling updates and roll backs of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-apps] ReplicationController should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 13:49:46.029: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Apr 1 13:49:46.134: INFO: Pod name pod-release: Found 0 pods out of 1
Apr 1 13:49:51.139: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 13:49:52.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying 
namespace "replication-controller-3896" for this suite.
Apr 1 13:49:58.283: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 13:49:58.353: INFO: namespace replication-controller-3896 deletion completed in 6.191665537s
• [SLOW TEST:12.324 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS should provide DNS for ExternalName services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 13:49:58.353: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7806.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-7806.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7806.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-7806.svc.cluster.local; sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 1 13:50:04.508: INFO: DNS probes using dns-test-be5d9b04-314a-4e83-ae43-d1846ebaea22 succeeded
STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7806.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-7806.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7806.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-7806.svc.cluster.local; sleep 1; done
STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 1 13:50:10.599: INFO: File wheezy_udp@dns-test-service-3.dns-7806.svc.cluster.local from pod dns-7806/dns-test-9d337d42-ddd6-48f0-a9e0-a95abb6c104d contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 1 13:50:10.601: INFO: File jessie_udp@dns-test-service-3.dns-7806.svc.cluster.local from pod dns-7806/dns-test-9d337d42-ddd6-48f0-a9e0-a95abb6c104d contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 1 13:50:10.601: INFO: Lookups using dns-7806/dns-test-9d337d42-ddd6-48f0-a9e0-a95abb6c104d failed for: [wheezy_udp@dns-test-service-3.dns-7806.svc.cluster.local jessie_udp@dns-test-service-3.dns-7806.svc.cluster.local]
Apr 1 13:50:15.606: INFO: File wheezy_udp@dns-test-service-3.dns-7806.svc.cluster.local from pod dns-7806/dns-test-9d337d42-ddd6-48f0-a9e0-a95abb6c104d contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 1 13:50:15.609: INFO: File jessie_udp@dns-test-service-3.dns-7806.svc.cluster.local from pod dns-7806/dns-test-9d337d42-ddd6-48f0-a9e0-a95abb6c104d contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 1 13:50:15.609: INFO: Lookups using dns-7806/dns-test-9d337d42-ddd6-48f0-a9e0-a95abb6c104d failed for: [wheezy_udp@dns-test-service-3.dns-7806.svc.cluster.local jessie_udp@dns-test-service-3.dns-7806.svc.cluster.local]
Apr 1 13:50:20.607: INFO: File wheezy_udp@dns-test-service-3.dns-7806.svc.cluster.local from pod dns-7806/dns-test-9d337d42-ddd6-48f0-a9e0-a95abb6c104d contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 1 13:50:20.611: INFO: File jessie_udp@dns-test-service-3.dns-7806.svc.cluster.local from pod dns-7806/dns-test-9d337d42-ddd6-48f0-a9e0-a95abb6c104d contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 1 13:50:20.611: INFO: Lookups using dns-7806/dns-test-9d337d42-ddd6-48f0-a9e0-a95abb6c104d failed for: [wheezy_udp@dns-test-service-3.dns-7806.svc.cluster.local jessie_udp@dns-test-service-3.dns-7806.svc.cluster.local]
Apr 1 13:50:25.606: INFO: File wheezy_udp@dns-test-service-3.dns-7806.svc.cluster.local from pod dns-7806/dns-test-9d337d42-ddd6-48f0-a9e0-a95abb6c104d contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 1 13:50:25.610: INFO: File jessie_udp@dns-test-service-3.dns-7806.svc.cluster.local from pod dns-7806/dns-test-9d337d42-ddd6-48f0-a9e0-a95abb6c104d contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 1 13:50:25.610: INFO: Lookups using dns-7806/dns-test-9d337d42-ddd6-48f0-a9e0-a95abb6c104d failed for: [wheezy_udp@dns-test-service-3.dns-7806.svc.cluster.local jessie_udp@dns-test-service-3.dns-7806.svc.cluster.local]
Apr 1 13:50:30.606: INFO: File wheezy_udp@dns-test-service-3.dns-7806.svc.cluster.local from pod dns-7806/dns-test-9d337d42-ddd6-48f0-a9e0-a95abb6c104d contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 1 13:50:30.609: INFO: File jessie_udp@dns-test-service-3.dns-7806.svc.cluster.local from pod dns-7806/dns-test-9d337d42-ddd6-48f0-a9e0-a95abb6c104d contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 1 13:50:30.609: INFO: Lookups using dns-7806/dns-test-9d337d42-ddd6-48f0-a9e0-a95abb6c104d failed for: [wheezy_udp@dns-test-service-3.dns-7806.svc.cluster.local jessie_udp@dns-test-service-3.dns-7806.svc.cluster.local]
Apr 1 13:50:35.610: INFO: DNS probes using dns-test-9d337d42-ddd6-48f0-a9e0-a95abb6c104d succeeded
STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7806.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-7806.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7806.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-7806.svc.cluster.local; sleep 1; done
STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 1 13:50:42.128: INFO: DNS probes using dns-test-53df1c07-f244-43a5-9173-194088a3b1d9 succeeded
STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 13:50:42.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7806" for this suite.
Apr 1 13:50:48.596: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 13:50:48.670: INFO: namespace dns-7806 deletion completed in 6.239723049s
• [SLOW TEST:50.317 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide DNS for ExternalName services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 13:50:48.670: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Apr 1 13:50:48.754: INFO: Waiting up to 5m0s for pod "pod-225bcab6-ae2a-48aa-8b04-e11dbf117f4f" in namespace "emptydir-2437" to be "success or failure"
Apr 1 13:50:48.757: INFO: Pod "pod-225bcab6-ae2a-48aa-8b04-e11dbf117f4f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.701657ms
Apr 1 13:50:50.981: INFO: Pod "pod-225bcab6-ae2a-48aa-8b04-e11dbf117f4f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.22672038s
Apr 1 13:50:52.985: INFO: Pod "pod-225bcab6-ae2a-48aa-8b04-e11dbf117f4f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.230713826s
STEP: Saw pod success
Apr 1 13:50:52.985: INFO: Pod "pod-225bcab6-ae2a-48aa-8b04-e11dbf117f4f" satisfied condition "success or failure"
Apr 1 13:50:52.988: INFO: Trying to get logs from node iruya-worker2 pod pod-225bcab6-ae2a-48aa-8b04-e11dbf117f4f container test-container: 
STEP: delete the pod
Apr 1 13:50:53.055: INFO: Waiting for pod pod-225bcab6-ae2a-48aa-8b04-e11dbf117f4f to disappear
Apr 1 13:50:53.071: INFO: Pod pod-225bcab6-ae2a-48aa-8b04-e11dbf117f4f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 13:50:53.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2437" for this suite.
Apr 1 13:50:59.095: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 13:50:59.188: INFO: namespace emptydir-2437 deletion completed in 6.111513329s
• [SLOW TEST:10.518 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 13:50:59.188: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 1 13:50:59.300: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ca47d212-4a59-421c-b95a-e42c59283de0" in namespace "downward-api-113" to be "success or failure"
Apr 1 13:50:59.305: INFO: Pod "downwardapi-volume-ca47d212-4a59-421c-b95a-e42c59283de0": Phase="Pending", Reason="", readiness=false. Elapsed: 5.130792ms
Apr 1 13:51:01.310: INFO: Pod "downwardapi-volume-ca47d212-4a59-421c-b95a-e42c59283de0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009563076s
Apr 1 13:51:03.313: INFO: Pod "downwardapi-volume-ca47d212-4a59-421c-b95a-e42c59283de0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013320468s
STEP: Saw pod success
Apr 1 13:51:03.313: INFO: Pod "downwardapi-volume-ca47d212-4a59-421c-b95a-e42c59283de0" satisfied condition "success or failure"
Apr 1 13:51:03.316: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-ca47d212-4a59-421c-b95a-e42c59283de0 container client-container: 
STEP: delete the pod
Apr 1 13:51:03.330: INFO: Waiting for pod downwardapi-volume-ca47d212-4a59-421c-b95a-e42c59283de0 to disappear
Apr 1 13:51:03.340: INFO: Pod downwardapi-volume-ca47d212-4a59-421c-b95a-e42c59283de0 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 13:51:03.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-113" for this suite.
Apr 1 13:51:09.357: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 13:51:09.434: INFO: namespace downward-api-113 deletion completed in 6.090109309s
• [SLOW TEST:10.246 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 13:51:09.435: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Apr 1 13:51:14.046: INFO: Successfully updated pod "pod-update-5b98a611-fb75-4b99-b1d0-a6033fb12a53"
STEP: verifying the updated pod is in kubernetes
Apr 1 13:51:14.053: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 13:51:14.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9040" for this suite.
Apr 1 13:51:36.066: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 13:51:36.140: INFO: namespace pods-9040 deletion completed in 22.083797344s
• [SLOW TEST:26.706 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 13:51:36.141: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-39dfcdc2-a1cb-4ada-aba5-c49774aa0c2c
STEP: Creating a pod to test consume secrets
Apr 1 13:51:36.220: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4848ea68-fd38-4242-b444-522932def23b" in namespace "projected-6133" to be "success or failure"
Apr 1 13:51:36.238: INFO: Pod "pod-projected-secrets-4848ea68-fd38-4242-b444-522932def23b": Phase="Pending", Reason="", readiness=false. Elapsed: 18.362254ms
Apr 1 13:51:38.242: INFO: Pod "pod-projected-secrets-4848ea68-fd38-4242-b444-522932def23b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022100548s
Apr 1 13:51:40.247: INFO: Pod "pod-projected-secrets-4848ea68-fd38-4242-b444-522932def23b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026789327s
STEP: Saw pod success
Apr 1 13:51:40.247: INFO: Pod "pod-projected-secrets-4848ea68-fd38-4242-b444-522932def23b" satisfied condition "success or failure"
Apr 1 13:51:40.250: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-4848ea68-fd38-4242-b444-522932def23b container projected-secret-volume-test: 
STEP: delete the pod
Apr 1 13:51:40.288: INFO: Waiting for pod pod-projected-secrets-4848ea68-fd38-4242-b444-522932def23b to disappear
Apr 1 13:51:40.316: INFO: Pod pod-projected-secrets-4848ea68-fd38-4242-b444-522932def23b no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 13:51:40.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6133" for this suite.
Apr 1 13:51:46.334: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 13:51:46.414: INFO: namespace projected-6133 deletion completed in 6.094633364s
• [SLOW TEST:10.273 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 13:51:46.414: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Apr 1 13:51:46.487: INFO: PodSpec: initContainers in spec.initContainers
Apr 1 13:52:35.343: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-0365520b-d8d3-447b-a645-8d44e554b3a9", GenerateName:"", Namespace:"init-container-387", 
SelfLink:"/api/v1/namespaces/init-container-387/pods/pod-init-0365520b-d8d3-447b-a645-8d44e554b3a9", UID:"2a645cde-324a-41dd-ad79-184c6bd4ea7d", ResourceVersion:"3040836", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63721345906, loc:(*time.Location)(0x7ea78c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"487653611"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-rx7zz", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0022d2680), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, 
InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-rx7zz", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-rx7zz", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), 
Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-rx7zz", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc003301ec8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00212b200), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", 
TolerationSeconds:(*int64)(0xc003301f50)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003301f70)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc003301f78), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc003301f7c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721345906, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721345906, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721345906, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721345906, loc:(*time.Location)(0x7ea78c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.5", PodIP:"10.244.1.206", StartTime:(*v1.Time)(0xc002428a00), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", 
State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001bbc850)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001bbc8c0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://e2f5877adb6de9d9388801616fe8a5d2f8ba3c86ad552cd2f0416cbcf301224b"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002428a40), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002428a20), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 13:52:35.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-387" for this suite. 
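The PodSpec dumped above can be condensed into a manifest. Everything here is taken from the dumped spec (images, commands, resource limits, labels); only the pod name is shortened, since the run uses a generated UID suffix.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-example   # log shows pod-init-0365520b-... (generated)
  labels:
    name: foo
spec:
  restartPolicy: Always    # RestartAlways: the failing init container is retried forever
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ["/bin/false"]   # always exits non-zero, so init2 and run1 never start
  - name: init2
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.1
    resources:
      limits:
        cpu: 100m
        memory: "52428800"
      requests:
        cpu: 100m
        memory: "52428800"
```

This is exactly the state the status dump records: init1 Terminated with RestartCount 3, init2 and run1 still Waiting, and the pod Pending with `ContainersNotInitialized`.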
Apr 1 13:52:57.488: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 13:52:57.565: INFO: namespace init-container-387 deletion completed in 22.160630687s • [SLOW TEST:71.151 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 13:52:57.565: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test hostPath mode Apr 1 13:52:57.639: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-457" to be "success or failure" Apr 1 13:52:57.643: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 3.301134ms Apr 1 13:52:59.647: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.008077148s Apr 1 13:53:01.651: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 4.012040525s Apr 1 13:53:03.656: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016397774s STEP: Saw pod success Apr 1 13:53:03.656: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Apr 1 13:53:03.659: INFO: Trying to get logs from node iruya-worker2 pod pod-host-path-test container test-container-1: STEP: delete the pod Apr 1 13:53:03.690: INFO: Waiting for pod pod-host-path-test to disappear Apr 1 13:53:03.704: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 13:53:03.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-457" for this suite. Apr 1 13:53:09.744: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 13:53:09.822: INFO: namespace hostpath-457 deletion completed in 6.113635004s • [SLOW TEST:12.257 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 13:53:09.823: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0401 13:53:19.905870 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Apr 1 13:53:19.905: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 13:53:19.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3229" for this suite. 
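The "create the rc / delete the rc" steps above are driven through the API, but the equivalent object looks roughly like this. The name, replica count, and image are illustrative, not taken from the log:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: simpletest-rc        # illustrative name
spec:
  replicas: 2
  selector:
    name: simpletest
  template:
    metadata:
      labels:
        name: simpletest
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
```

Deleting the rc with the default (non-orphaning) propagation lets the garbage collector remove the pods via their `ownerReferences`, which is what the "wait for all pods to be garbage collected" step observes.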
Apr 1 13:53:25.922: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 13:53:25.996: INFO: namespace gc-3229 deletion completed in 6.087154254s • [SLOW TEST:16.173 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 13:53:25.996: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-4291.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4291.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 1 13:53:32.123: INFO: DNS probes using dns-4291/dns-test-c53a6744-aff4-4657-8613-1778b07e4f4d succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 13:53:32.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4291" for this suite. 
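The probe scripts above derive each pod's DNS A-record name by replacing the dots in its IP with dashes. That transformation can be checked locally; the IP below is an example (format as reported elsewhere in this log) and the namespace is the one this test used:

```shell
# Build the pod A-record name that the dig probes query:
# dots in the pod IP become dashes, suffixed with <namespace>.pod.cluster.local.
pod_ip="10.244.1.206"   # example pod IP
ns="dns-4291"           # test namespace from this log
pod_a_rec="$(echo "$pod_ip" | awk -F. -v ns="$ns" '{print $1"-"$2"-"$3"-"$4"."ns".pod.cluster.local"}')"
echo "$pod_a_rec"       # 10-244-1-206.dns-4291.pod.cluster.local
```

The probes then resolve this name over both UDP (`dig +notcp`) and TCP (`dig +tcp`) and write an OK marker per lookup, which is what "looking for the results for each expected name" collects.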
Apr 1 13:53:38.219: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 13:53:38.290: INFO: namespace dns-4291 deletion completed in 6.106655017s • [SLOW TEST:12.294 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 13:53:38.290: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-upd-d8a07f9a-7c2c-4a0c-81b5-8928770f1206 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 13:53:44.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8007" for this suite. 
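The mechanism under test here is the ConfigMap `binaryData` field, which holds base64-encoded raw bytes alongside the usual UTF-8 `data` keys. A minimal sketch (the key names and payload are illustrative; the log's ConfigMap name carries a generated UID):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-upd-example   # log shows configmap-test-upd-d8a07f9a-... (generated)
data:
  data-1: "value-1"                  # illustrative text key
binaryData:
  dump.bin: AQID                     # base64 for the raw bytes 0x01 0x02 0x03
```

When mounted as a volume, the pod sees both the text key and the decoded binary file, which is what the "Waiting for pod with text data" and "Waiting for pod with binary data" steps verify.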
Apr 1 13:54:06.463: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 13:54:06.543: INFO: namespace configmap-8007 deletion completed in 22.113393932s • [SLOW TEST:28.253 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 13:54:06.543: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-1143 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-1143 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-1143 Apr 1 13:54:06.633: 
INFO: Found 0 stateful pods, waiting for 1 Apr 1 13:54:16.637: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Apr 1 13:54:16.641: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1143 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 1 13:54:16.917: INFO: stderr: "I0401 13:54:16.764767 2418 log.go:172] (0xc0009ae630) (0xc0007debe0) Create stream\nI0401 13:54:16.764825 2418 log.go:172] (0xc0009ae630) (0xc0007debe0) Stream added, broadcasting: 1\nI0401 13:54:16.767568 2418 log.go:172] (0xc0009ae630) Reply frame received for 1\nI0401 13:54:16.767601 2418 log.go:172] (0xc0009ae630) (0xc000780000) Create stream\nI0401 13:54:16.767610 2418 log.go:172] (0xc0009ae630) (0xc000780000) Stream added, broadcasting: 3\nI0401 13:54:16.768320 2418 log.go:172] (0xc0009ae630) Reply frame received for 3\nI0401 13:54:16.768365 2418 log.go:172] (0xc0009ae630) (0xc0007de460) Create stream\nI0401 13:54:16.768386 2418 log.go:172] (0xc0009ae630) (0xc0007de460) Stream added, broadcasting: 5\nI0401 13:54:16.769260 2418 log.go:172] (0xc0009ae630) Reply frame received for 5\nI0401 13:54:16.870372 2418 log.go:172] (0xc0009ae630) Data frame received for 5\nI0401 13:54:16.870417 2418 log.go:172] (0xc0007de460) (5) Data frame handling\nI0401 13:54:16.870449 2418 log.go:172] (0xc0007de460) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0401 13:54:16.910851 2418 log.go:172] (0xc0009ae630) Data frame received for 3\nI0401 13:54:16.910896 2418 log.go:172] (0xc000780000) (3) Data frame handling\nI0401 13:54:16.910925 2418 log.go:172] (0xc000780000) (3) Data frame sent\nI0401 13:54:16.910945 2418 log.go:172] (0xc0009ae630) Data frame received for 3\nI0401 13:54:16.910970 2418 log.go:172] (0xc000780000) (3) Data frame handling\nI0401 13:54:16.911180 2418 log.go:172] (0xc0009ae630) 
Data frame received for 5\nI0401 13:54:16.911205 2418 log.go:172] (0xc0007de460) (5) Data frame handling\nI0401 13:54:16.912861 2418 log.go:172] (0xc0009ae630) Data frame received for 1\nI0401 13:54:16.912881 2418 log.go:172] (0xc0007debe0) (1) Data frame handling\nI0401 13:54:16.912903 2418 log.go:172] (0xc0007debe0) (1) Data frame sent\nI0401 13:54:16.912952 2418 log.go:172] (0xc0009ae630) (0xc0007debe0) Stream removed, broadcasting: 1\nI0401 13:54:16.913076 2418 log.go:172] (0xc0009ae630) Go away received\nI0401 13:54:16.913601 2418 log.go:172] (0xc0009ae630) (0xc0007debe0) Stream removed, broadcasting: 1\nI0401 13:54:16.913624 2418 log.go:172] (0xc0009ae630) (0xc000780000) Stream removed, broadcasting: 3\nI0401 13:54:16.913635 2418 log.go:172] (0xc0009ae630) (0xc0007de460) Stream removed, broadcasting: 5\n" Apr 1 13:54:16.918: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 1 13:54:16.918: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 1 13:54:16.922: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Apr 1 13:54:26.927: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 1 13:54:26.927: INFO: Waiting for statefulset status.replicas updated to 0 Apr 1 13:54:26.951: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999472s Apr 1 13:54:27.955: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.986879434s Apr 1 13:54:28.960: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.982377378s Apr 1 13:54:29.964: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.977579457s Apr 1 13:54:30.969: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.973677534s Apr 1 13:54:31.973: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.968889671s Apr 1 13:54:32.978: INFO: Verifying 
statefulset ss doesn't scale past 1 for another 3.964330825s Apr 1 13:54:33.983: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.95922252s Apr 1 13:54:34.988: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.954278415s Apr 1 13:54:35.992: INFO: Verifying statefulset ss doesn't scale past 1 for another 949.30633ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-1143 Apr 1 13:54:36.998: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1143 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 1 13:54:37.241: INFO: stderr: "I0401 13:54:37.143514 2438 log.go:172] (0xc0006f0a50) (0xc0006ea6e0) Create stream\nI0401 13:54:37.143572 2438 log.go:172] (0xc0006f0a50) (0xc0006ea6e0) Stream added, broadcasting: 1\nI0401 13:54:37.147532 2438 log.go:172] (0xc0006f0a50) Reply frame received for 1\nI0401 13:54:37.147594 2438 log.go:172] (0xc0006f0a50) (0xc0001101e0) Create stream\nI0401 13:54:37.147610 2438 log.go:172] (0xc0006f0a50) (0xc0001101e0) Stream added, broadcasting: 3\nI0401 13:54:37.148955 2438 log.go:172] (0xc0006f0a50) Reply frame received for 3\nI0401 13:54:37.148993 2438 log.go:172] (0xc0006f0a50) (0xc000644000) Create stream\nI0401 13:54:37.149004 2438 log.go:172] (0xc0006f0a50) (0xc000644000) Stream added, broadcasting: 5\nI0401 13:54:37.150340 2438 log.go:172] (0xc0006f0a50) Reply frame received for 5\nI0401 13:54:37.234846 2438 log.go:172] (0xc0006f0a50) Data frame received for 5\nI0401 13:54:37.234898 2438 log.go:172] (0xc000644000) (5) Data frame handling\nI0401 13:54:37.234923 2438 log.go:172] (0xc000644000) (5) Data frame sent\nI0401 13:54:37.234939 2438 log.go:172] (0xc0006f0a50) Data frame received for 5\nI0401 13:54:37.234954 2438 log.go:172] (0xc000644000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0401 13:54:37.234987 2438 log.go:172] 
(0xc0006f0a50) Data frame received for 3\nI0401 13:54:37.235012 2438 log.go:172] (0xc0001101e0) (3) Data frame handling\nI0401 13:54:37.235037 2438 log.go:172] (0xc0001101e0) (3) Data frame sent\nI0401 13:54:37.235058 2438 log.go:172] (0xc0006f0a50) Data frame received for 3\nI0401 13:54:37.235072 2438 log.go:172] (0xc0001101e0) (3) Data frame handling\nI0401 13:54:37.236819 2438 log.go:172] (0xc0006f0a50) Data frame received for 1\nI0401 13:54:37.236834 2438 log.go:172] (0xc0006ea6e0) (1) Data frame handling\nI0401 13:54:37.236851 2438 log.go:172] (0xc0006ea6e0) (1) Data frame sent\nI0401 13:54:37.236860 2438 log.go:172] (0xc0006f0a50) (0xc0006ea6e0) Stream removed, broadcasting: 1\nI0401 13:54:37.237028 2438 log.go:172] (0xc0006f0a50) Go away received\nI0401 13:54:37.237225 2438 log.go:172] (0xc0006f0a50) (0xc0006ea6e0) Stream removed, broadcasting: 1\nI0401 13:54:37.237242 2438 log.go:172] (0xc0006f0a50) (0xc0001101e0) Stream removed, broadcasting: 3\nI0401 13:54:37.237251 2438 log.go:172] (0xc0006f0a50) (0xc000644000) Stream removed, broadcasting: 5\n" Apr 1 13:54:37.241: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 1 13:54:37.241: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 1 13:54:37.245: INFO: Found 1 stateful pods, waiting for 3 Apr 1 13:54:47.250: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Apr 1 13:54:47.251: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Apr 1 13:54:47.251: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Apr 1 13:54:47.257: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1143 ss-0 -- /bin/sh -x -c mv -v 
/usr/share/nginx/html/index.html /tmp/ || true' Apr 1 13:54:47.444: INFO: stderr: "I0401 13:54:47.383441 2462 log.go:172] (0xc000a3e630) (0xc0006fe960) Create stream\nI0401 13:54:47.383483 2462 log.go:172] (0xc000a3e630) (0xc0006fe960) Stream added, broadcasting: 1\nI0401 13:54:47.390754 2462 log.go:172] (0xc000a3e630) Reply frame received for 1\nI0401 13:54:47.390800 2462 log.go:172] (0xc000a3e630) (0xc0006fea00) Create stream\nI0401 13:54:47.390815 2462 log.go:172] (0xc000a3e630) (0xc0006fea00) Stream added, broadcasting: 3\nI0401 13:54:47.391514 2462 log.go:172] (0xc000a3e630) Reply frame received for 3\nI0401 13:54:47.391553 2462 log.go:172] (0xc000a3e630) (0xc000a3c000) Create stream\nI0401 13:54:47.391573 2462 log.go:172] (0xc000a3e630) (0xc000a3c000) Stream added, broadcasting: 5\nI0401 13:54:47.392468 2462 log.go:172] (0xc000a3e630) Reply frame received for 5\nI0401 13:54:47.439061 2462 log.go:172] (0xc000a3e630) Data frame received for 5\nI0401 13:54:47.439107 2462 log.go:172] (0xc000a3e630) Data frame received for 3\nI0401 13:54:47.439155 2462 log.go:172] (0xc0006fea00) (3) Data frame handling\nI0401 13:54:47.439193 2462 log.go:172] (0xc0006fea00) (3) Data frame sent\nI0401 13:54:47.439205 2462 log.go:172] (0xc000a3e630) Data frame received for 3\nI0401 13:54:47.439218 2462 log.go:172] (0xc0006fea00) (3) Data frame handling\nI0401 13:54:47.439263 2462 log.go:172] (0xc000a3c000) (5) Data frame handling\nI0401 13:54:47.439277 2462 log.go:172] (0xc000a3c000) (5) Data frame sent\nI0401 13:54:47.439301 2462 log.go:172] (0xc000a3e630) Data frame received for 5\nI0401 13:54:47.439319 2462 log.go:172] (0xc000a3c000) (5) Data frame handling\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0401 13:54:47.440635 2462 log.go:172] (0xc000a3e630) Data frame received for 1\nI0401 13:54:47.440652 2462 log.go:172] (0xc0006fe960) (1) Data frame handling\nI0401 13:54:47.440661 2462 log.go:172] (0xc0006fe960) (1) Data frame sent\nI0401 13:54:47.440681 2462 log.go:172] 
(0xc000a3e630) (0xc0006fe960) Stream removed, broadcasting: 1\nI0401 13:54:47.440836 2462 log.go:172] (0xc000a3e630) Go away received\nI0401 13:54:47.440984 2462 log.go:172] (0xc000a3e630) (0xc0006fe960) Stream removed, broadcasting: 1\nI0401 13:54:47.441000 2462 log.go:172] (0xc000a3e630) (0xc0006fea00) Stream removed, broadcasting: 3\nI0401 13:54:47.441024 2462 log.go:172] (0xc000a3e630) (0xc000a3c000) Stream removed, broadcasting: 5\n" Apr 1 13:54:47.444: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 1 13:54:47.444: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 1 13:54:47.444: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1143 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 1 13:54:47.694: INFO: stderr: "I0401 13:54:47.577281 2483 log.go:172] (0xc0009c6630) (0xc000410aa0) Create stream\nI0401 13:54:47.577341 2483 log.go:172] (0xc0009c6630) (0xc000410aa0) Stream added, broadcasting: 1\nI0401 13:54:47.585398 2483 log.go:172] (0xc0009c6630) Reply frame received for 1\nI0401 13:54:47.585548 2483 log.go:172] (0xc0009c6630) (0xc000976000) Create stream\nI0401 13:54:47.585631 2483 log.go:172] (0xc0009c6630) (0xc000976000) Stream added, broadcasting: 3\nI0401 13:54:47.587204 2483 log.go:172] (0xc0009c6630) Reply frame received for 3\nI0401 13:54:47.587239 2483 log.go:172] (0xc0009c6630) (0xc0009760a0) Create stream\nI0401 13:54:47.587255 2483 log.go:172] (0xc0009c6630) (0xc0009760a0) Stream added, broadcasting: 5\nI0401 13:54:47.587978 2483 log.go:172] (0xc0009c6630) Reply frame received for 5\nI0401 13:54:47.643972 2483 log.go:172] (0xc0009c6630) Data frame received for 5\nI0401 13:54:47.643998 2483 log.go:172] (0xc0009760a0) (5) Data frame handling\nI0401 13:54:47.644021 2483 log.go:172] (0xc0009760a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html 
/tmp/\nI0401 13:54:47.687641 2483 log.go:172] (0xc0009c6630) Data frame received for 5\nI0401 13:54:47.687697 2483 log.go:172] (0xc0009760a0) (5) Data frame handling\nI0401 13:54:47.687728 2483 log.go:172] (0xc0009c6630) Data frame received for 3\nI0401 13:54:47.687746 2483 log.go:172] (0xc000976000) (3) Data frame handling\nI0401 13:54:47.687764 2483 log.go:172] (0xc000976000) (3) Data frame sent\nI0401 13:54:47.687786 2483 log.go:172] (0xc0009c6630) Data frame received for 3\nI0401 13:54:47.687801 2483 log.go:172] (0xc000976000) (3) Data frame handling\nI0401 13:54:47.690367 2483 log.go:172] (0xc0009c6630) Data frame received for 1\nI0401 13:54:47.690382 2483 log.go:172] (0xc000410aa0) (1) Data frame handling\nI0401 13:54:47.690392 2483 log.go:172] (0xc000410aa0) (1) Data frame sent\nI0401 13:54:47.690408 2483 log.go:172] (0xc0009c6630) (0xc000410aa0) Stream removed, broadcasting: 1\nI0401 13:54:47.690651 2483 log.go:172] (0xc0009c6630) Go away received\nI0401 13:54:47.690718 2483 log.go:172] (0xc0009c6630) (0xc000410aa0) Stream removed, broadcasting: 1\nI0401 13:54:47.690752 2483 log.go:172] (0xc0009c6630) (0xc000976000) Stream removed, broadcasting: 3\nI0401 13:54:47.690776 2483 log.go:172] (0xc0009c6630) (0xc0009760a0) Stream removed, broadcasting: 5\n" Apr 1 13:54:47.694: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 1 13:54:47.694: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 1 13:54:47.694: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1143 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 1 13:54:47.945: INFO: stderr: "I0401 13:54:47.819038 2503 log.go:172] (0xc000368420) (0xc0006068c0) Create stream\nI0401 13:54:47.819093 2503 log.go:172] (0xc000368420) (0xc0006068c0) Stream added, broadcasting: 1\nI0401 13:54:47.822102 2503 log.go:172] (0xc000368420) 
Reply frame received for 1\nI0401 13:54:47.822145 2503 log.go:172] (0xc000368420) (0xc000914000) Create stream\nI0401 13:54:47.822161 2503 log.go:172] (0xc000368420) (0xc000914000) Stream added, broadcasting: 3\nI0401 13:54:47.823311 2503 log.go:172] (0xc000368420) Reply frame received for 3\nI0401 13:54:47.823346 2503 log.go:172] (0xc000368420) (0xc000606960) Create stream\nI0401 13:54:47.823370 2503 log.go:172] (0xc000368420) (0xc000606960) Stream added, broadcasting: 5\nI0401 13:54:47.824476 2503 log.go:172] (0xc000368420) Reply frame received for 5\nI0401 13:54:47.892394 2503 log.go:172] (0xc000368420) Data frame received for 5\nI0401 13:54:47.892442 2503 log.go:172] (0xc000606960) (5) Data frame handling\nI0401 13:54:47.892479 2503 log.go:172] (0xc000606960) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0401 13:54:47.934569 2503 log.go:172] (0xc000368420) Data frame received for 5\nI0401 13:54:47.934619 2503 log.go:172] (0xc000606960) (5) Data frame handling\nI0401 13:54:47.934653 2503 log.go:172] (0xc000368420) Data frame received for 3\nI0401 13:54:47.934686 2503 log.go:172] (0xc000914000) (3) Data frame handling\nI0401 13:54:47.934705 2503 log.go:172] (0xc000914000) (3) Data frame sent\nI0401 13:54:47.934718 2503 log.go:172] (0xc000368420) Data frame received for 3\nI0401 13:54:47.934726 2503 log.go:172] (0xc000914000) (3) Data frame handling\nI0401 13:54:47.940999 2503 log.go:172] (0xc000368420) Data frame received for 1\nI0401 13:54:47.941030 2503 log.go:172] (0xc0006068c0) (1) Data frame handling\nI0401 13:54:47.941048 2503 log.go:172] (0xc0006068c0) (1) Data frame sent\nI0401 13:54:47.941067 2503 log.go:172] (0xc000368420) (0xc0006068c0) Stream removed, broadcasting: 1\nI0401 13:54:47.941261 2503 log.go:172] (0xc000368420) Go away received\nI0401 13:54:47.941606 2503 log.go:172] (0xc000368420) (0xc0006068c0) Stream removed, broadcasting: 1\nI0401 13:54:47.941626 2503 log.go:172] (0xc000368420) (0xc000914000) Stream removed, 
broadcasting: 3\nI0401 13:54:47.941636 2503 log.go:172] (0xc000368420) (0xc000606960) Stream removed, broadcasting: 5\n" Apr 1 13:54:47.945: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 1 13:54:47.945: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 1 13:54:47.945: INFO: Waiting for statefulset status.replicas updated to 0 Apr 1 13:54:47.948: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Apr 1 13:54:57.957: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 1 13:54:57.957: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Apr 1 13:54:57.957: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Apr 1 13:54:57.969: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999585s Apr 1 13:54:58.975: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.994612244s Apr 1 13:54:59.980: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.989082051s Apr 1 13:55:00.985: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.984030209s Apr 1 13:55:01.991: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.978694422s Apr 1 13:55:02.996: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.973213082s Apr 1 13:55:04.000: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.967650354s Apr 1 13:55:05.005: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.963513534s Apr 1 13:55:06.011: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.958695949s Apr 1 13:55:07.016: INFO: Verifying statefulset ss doesn't scale past 3 for another 953.162621ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-1143 Apr 1 13:55:08.021: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1143 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 1 13:55:08.230: INFO: stderr: "I0401 13:55:08.168744 2523 log.go:172] (0xc00093c420) (0xc0003206e0) Create stream\nI0401 13:55:08.168802 2523 log.go:172] (0xc00093c420) (0xc0003206e0) Stream added, broadcasting: 1\nI0401 13:55:08.171189 2523 log.go:172] (0xc00093c420) Reply frame received for 1\nI0401 13:55:08.171240 2523 log.go:172] (0xc00093c420) (0xc00088c000) Create stream\nI0401 13:55:08.171256 2523 log.go:172] (0xc00093c420) (0xc00088c000) Stream added, broadcasting: 3\nI0401 13:55:08.172306 2523 log.go:172] (0xc00093c420) Reply frame received for 3\nI0401 13:55:08.172350 2523 log.go:172] (0xc00093c420) (0xc00060a460) Create stream\nI0401 13:55:08.172368 2523 log.go:172] (0xc00093c420) (0xc00060a460) Stream added, broadcasting: 5\nI0401 13:55:08.173717 2523 log.go:172] (0xc00093c420) Reply frame received for 5\nI0401 13:55:08.225374 2523 log.go:172] (0xc00093c420) Data frame received for 5\nI0401 13:55:08.225411 2523 log.go:172] (0xc00060a460) (5) Data frame handling\nI0401 13:55:08.225426 2523 log.go:172] (0xc00060a460) (5) Data frame sent\nI0401 13:55:08.225436 2523 log.go:172] (0xc00093c420) Data frame received for 5\nI0401 13:55:08.225445 2523 log.go:172] (0xc00060a460) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0401 13:55:08.225473 2523 log.go:172] (0xc00093c420) Data frame received for 3\nI0401 13:55:08.225483 2523 log.go:172] (0xc00088c000) (3) Data frame handling\nI0401 13:55:08.225500 2523 log.go:172] (0xc00088c000) (3) Data frame sent\nI0401 13:55:08.225511 2523 log.go:172] (0xc00093c420) Data frame received for 3\nI0401 13:55:08.225521 2523 log.go:172] (0xc00088c000) (3) Data frame handling\nI0401 13:55:08.226980 2523 log.go:172] (0xc00093c420) Data frame received for 1\nI0401 13:55:08.227021 2523 log.go:172] (0xc0003206e0) (1) Data frame 
handling\nI0401 13:55:08.227038 2523 log.go:172] (0xc0003206e0) (1) Data frame sent\nI0401 13:55:08.227074 2523 log.go:172] (0xc00093c420) (0xc0003206e0) Stream removed, broadcasting: 1\nI0401 13:55:08.227116 2523 log.go:172] (0xc00093c420) Go away received\nI0401 13:55:08.227451 2523 log.go:172] (0xc00093c420) (0xc0003206e0) Stream removed, broadcasting: 1\nI0401 13:55:08.227745 2523 log.go:172] (0xc00093c420) (0xc00088c000) Stream removed, broadcasting: 3\nI0401 13:55:08.227772 2523 log.go:172] (0xc00093c420) (0xc00060a460) Stream removed, broadcasting: 5\n" Apr 1 13:55:08.231: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 1 13:55:08.231: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 1 13:55:08.231: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1143 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 1 13:55:08.413: INFO: stderr: "I0401 13:55:08.348123 2544 log.go:172] (0xc00098a420) (0xc00096e780) Create stream\nI0401 13:55:08.348180 2544 log.go:172] (0xc00098a420) (0xc00096e780) Stream added, broadcasting: 1\nI0401 13:55:08.351303 2544 log.go:172] (0xc00098a420) Reply frame received for 1\nI0401 13:55:08.351334 2544 log.go:172] (0xc00098a420) (0xc000316140) Create stream\nI0401 13:55:08.351340 2544 log.go:172] (0xc00098a420) (0xc000316140) Stream added, broadcasting: 3\nI0401 13:55:08.352504 2544 log.go:172] (0xc00098a420) Reply frame received for 3\nI0401 13:55:08.352567 2544 log.go:172] (0xc00098a420) (0xc00096e820) Create stream\nI0401 13:55:08.352589 2544 log.go:172] (0xc00098a420) (0xc00096e820) Stream added, broadcasting: 5\nI0401 13:55:08.355046 2544 log.go:172] (0xc00098a420) Reply frame received for 5\nI0401 13:55:08.406779 2544 log.go:172] (0xc00098a420) Data frame received for 3\nI0401 13:55:08.406833 2544 log.go:172] (0xc000316140) (3) Data frame 
handling\nI0401 13:55:08.406854 2544 log.go:172] (0xc000316140) (3) Data frame sent\nI0401 13:55:08.406869 2544 log.go:172] (0xc00098a420) Data frame received for 3\nI0401 13:55:08.406880 2544 log.go:172] (0xc000316140) (3) Data frame handling\nI0401 13:55:08.406911 2544 log.go:172] (0xc00098a420) Data frame received for 5\nI0401 13:55:08.406920 2544 log.go:172] (0xc00096e820) (5) Data frame handling\nI0401 13:55:08.406929 2544 log.go:172] (0xc00096e820) (5) Data frame sent\nI0401 13:55:08.406938 2544 log.go:172] (0xc00098a420) Data frame received for 5\nI0401 13:55:08.406946 2544 log.go:172] (0xc00096e820) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0401 13:55:08.408392 2544 log.go:172] (0xc00098a420) Data frame received for 1\nI0401 13:55:08.408411 2544 log.go:172] (0xc00096e780) (1) Data frame handling\nI0401 13:55:08.408421 2544 log.go:172] (0xc00096e780) (1) Data frame sent\nI0401 13:55:08.408435 2544 log.go:172] (0xc00098a420) (0xc00096e780) Stream removed, broadcasting: 1\nI0401 13:55:08.408457 2544 log.go:172] (0xc00098a420) Go away received\nI0401 13:55:08.408849 2544 log.go:172] (0xc00098a420) (0xc00096e780) Stream removed, broadcasting: 1\nI0401 13:55:08.408872 2544 log.go:172] (0xc00098a420) (0xc000316140) Stream removed, broadcasting: 3\nI0401 13:55:08.408880 2544 log.go:172] (0xc00098a420) (0xc00096e820) Stream removed, broadcasting: 5\n" Apr 1 13:55:08.413: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 1 13:55:08.413: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 1 13:55:08.413: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1143 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 1 13:55:08.626: INFO: stderr: "I0401 13:55:08.545053 2564 log.go:172] (0xc0005b2420) (0xc000382500) Create stream\nI0401 13:55:08.545107 2564 
log.go:172] (0xc0005b2420) (0xc000382500) Stream added, broadcasting: 1\nI0401 13:55:08.548463 2564 log.go:172] (0xc0005b2420) Reply frame received for 1\nI0401 13:55:08.548530 2564 log.go:172] (0xc0005b2420) (0xc00080e500) Create stream\nI0401 13:55:08.548571 2564 log.go:172] (0xc0005b2420) (0xc00080e500) Stream added, broadcasting: 3\nI0401 13:55:08.549836 2564 log.go:172] (0xc0005b2420) Reply frame received for 3\nI0401 13:55:08.549889 2564 log.go:172] (0xc0005b2420) (0xc00087c000) Create stream\nI0401 13:55:08.549907 2564 log.go:172] (0xc0005b2420) (0xc00087c000) Stream added, broadcasting: 5\nI0401 13:55:08.550810 2564 log.go:172] (0xc0005b2420) Reply frame received for 5\nI0401 13:55:08.612686 2564 log.go:172] (0xc0005b2420) Data frame received for 5\nI0401 13:55:08.612708 2564 log.go:172] (0xc00087c000) (5) Data frame handling\nI0401 13:55:08.612730 2564 log.go:172] (0xc00087c000) (5) Data frame sent\nI0401 13:55:08.612735 2564 log.go:172] (0xc0005b2420) Data frame received for 5\nI0401 13:55:08.612741 2564 log.go:172] (0xc00087c000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0401 13:55:08.612779 2564 log.go:172] (0xc0005b2420) Data frame received for 3\nI0401 13:55:08.612816 2564 log.go:172] (0xc00080e500) (3) Data frame handling\nI0401 13:55:08.612846 2564 log.go:172] (0xc00080e500) (3) Data frame sent\nI0401 13:55:08.613850 2564 log.go:172] (0xc0005b2420) Data frame received for 3\nI0401 13:55:08.613882 2564 log.go:172] (0xc00080e500) (3) Data frame handling\nI0401 13:55:08.615223 2564 log.go:172] (0xc0005b2420) Data frame received for 1\nI0401 13:55:08.615240 2564 log.go:172] (0xc000382500) (1) Data frame handling\nI0401 13:55:08.615247 2564 log.go:172] (0xc000382500) (1) Data frame sent\nI0401 13:55:08.615256 2564 log.go:172] (0xc0005b2420) (0xc000382500) Stream removed, broadcasting: 1\nI0401 13:55:08.615268 2564 log.go:172] (0xc0005b2420) Go away received\nI0401 13:55:08.615563 2564 log.go:172] (0xc0005b2420) 
(0xc000382500) Stream removed, broadcasting: 1\nI0401 13:55:08.615577 2564 log.go:172] (0xc0005b2420) (0xc00080e500) Stream removed, broadcasting: 3\nI0401 13:55:08.615583 2564 log.go:172] (0xc0005b2420) (0xc00087c000) Stream removed, broadcasting: 5\n" Apr 1 13:55:08.626: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 1 13:55:08.626: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 1 13:55:08.626: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Apr 1 13:55:38.653: INFO: Deleting all statefulset in ns statefulset-1143 Apr 1 13:55:38.657: INFO: Scaling statefulset ss to 0 Apr 1 13:55:38.666: INFO: Waiting for statefulset status.replicas updated to 0 Apr 1 13:55:38.668: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 13:55:38.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1143" for this suite. 
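[Editor's note] The readiness toggle this test drives via `kubectl exec` is just a pair of `mv` commands inside each pod: moving nginx's index.html out of the web root makes the readiness probe fail, and moving it back heals it. A minimal local sketch of the same pattern, with temp directories standing in for the pod's `/usr/share/nginx/html` and `/tmp` (paths and messages are illustrative, no cluster required):

```shell
#!/bin/sh
# Simulate the test's readiness toggle with plain files.
set -eu
webroot=$(mktemp -d)   # stand-in for /usr/share/nginx/html
scratch=$(mktemp -d)   # stand-in for /tmp inside the pod

echo 'hello' > "$webroot/index.html"

# Break readiness. '|| true' mirrors the test's command: the exec must
# not fail even if the file was already moved on a previous attempt.
mv -v "$webroot/index.html" "$scratch/" || true
if [ ! -e "$webroot/index.html" ]; then
    echo "probe would now fail"
fi

# Restore readiness the same way the test does before scaling down.
mv -v "$scratch/index.html" "$webroot/" || true
if [ -e "$webroot/index.html" ]; then
    echo "probe would now succeed"
fi

rm -rf "$webroot" "$scratch"
```

The `|| true` is what makes the toggle idempotent across all three replicas (ss-0, ss-1, ss-2) in the log above.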
Apr 1 13:55:44.718: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 13:55:44.792: INFO: namespace statefulset-1143 deletion completed in 6.104113762s • [SLOW TEST:98.248 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 13:55:44.793: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs Apr 1 13:55:44.852: INFO: Waiting up to 5m0s for pod "pod-831f845a-1ea6-4205-b758-6b5e0dfddb1f" in namespace "emptydir-6886" to be "success or failure" Apr 1 13:55:44.856: INFO: Pod "pod-831f845a-1ea6-4205-b758-6b5e0dfddb1f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.515221ms Apr 1 13:55:46.861: INFO: Pod "pod-831f845a-1ea6-4205-b758-6b5e0dfddb1f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008600003s Apr 1 13:55:48.864: INFO: Pod "pod-831f845a-1ea6-4205-b758-6b5e0dfddb1f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01237887s STEP: Saw pod success Apr 1 13:55:48.864: INFO: Pod "pod-831f845a-1ea6-4205-b758-6b5e0dfddb1f" satisfied condition "success or failure" Apr 1 13:55:48.867: INFO: Trying to get logs from node iruya-worker2 pod pod-831f845a-1ea6-4205-b758-6b5e0dfddb1f container test-container: STEP: delete the pod Apr 1 13:55:48.886: INFO: Waiting for pod pod-831f845a-1ea6-4205-b758-6b5e0dfddb1f to disappear Apr 1 13:55:48.888: INFO: Pod pod-831f845a-1ea6-4205-b758-6b5e0dfddb1f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 13:55:48.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6886" for this suite. 
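[Editor's note] The (root,0777,tmpfs) check above boils down to reading a file's mode bits back in octal and comparing them to the expected value. A local sketch of that verification step (`stat -c` assumes GNU coreutils; the filename is illustrative):

```shell
#!/bin/sh
# Recreate the permission check the emptydir test performs: set the
# expected mode on a file, then read it back in octal and compare.
set -eu
f=$(mktemp)
chmod 0777 "$f"
mode=$(stat -c '%a' "$f")   # GNU stat; on BSD/macOS use: stat -f '%Lp'
echo "mode=$mode"
if [ "$mode" = "777" ]; then
    echo "matches expected 0777"
fi
rm -f "$f"
```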
Apr 1 13:55:54.992: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 13:55:55.098: INFO: namespace emptydir-6886 deletion completed in 6.206998795s • [SLOW TEST:10.305 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 13:55:55.099: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 1 13:55:59.235: INFO: Waiting up to 5m0s for pod "client-envvars-b53fe442-1586-4abf-ac12-2a7a78c6adc9" in namespace "pods-6820" to be "success or failure" Apr 1 13:55:59.247: INFO: Pod "client-envvars-b53fe442-1586-4abf-ac12-2a7a78c6adc9": Phase="Pending", Reason="", readiness=false. Elapsed: 11.894289ms Apr 1 13:56:01.251: INFO: Pod "client-envvars-b53fe442-1586-4abf-ac12-2a7a78c6adc9": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.01627575s Apr 1 13:56:03.255: INFO: Pod "client-envvars-b53fe442-1586-4abf-ac12-2a7a78c6adc9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020760357s STEP: Saw pod success Apr 1 13:56:03.255: INFO: Pod "client-envvars-b53fe442-1586-4abf-ac12-2a7a78c6adc9" satisfied condition "success or failure" Apr 1 13:56:03.258: INFO: Trying to get logs from node iruya-worker2 pod client-envvars-b53fe442-1586-4abf-ac12-2a7a78c6adc9 container env3cont: STEP: delete the pod Apr 1 13:56:03.310: INFO: Waiting for pod client-envvars-b53fe442-1586-4abf-ac12-2a7a78c6adc9 to disappear Apr 1 13:56:03.318: INFO: Pod client-envvars-b53fe442-1586-4abf-ac12-2a7a78c6adc9 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 13:56:03.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6820" for this suite. Apr 1 13:56:41.334: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 13:56:41.414: INFO: namespace pods-6820 deletion completed in 38.092885361s • [SLOW TEST:46.315 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: 
Creating a kubernetes client Apr 1 13:56:41.414: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 1 13:56:45.493: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 13:56:45.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7425" for this suite. 
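[Editor's note] The FallbackToLogsOnError behavior exercised above ("Expected: &{DONE} to match Container's Termination Message: DONE") is driven by a single field on the container spec: when the container exits without writing its termination-message file, the kubelet falls back to the tail of the container log. A hedged manifest sketch (names and image are illustrative, not the test's actual pod):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: termination-demo            # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    # Exits non-zero without writing /dev/termination-log, so the
    # kubelet uses the log tail ("DONE") as the termination message.
    command: ["/bin/sh", "-c", "echo DONE; exit 1"]
    terminationMessagePolicy: FallbackToLogsOnError
```

After the pod fails, the message surfaces under `status.containerStatuses[].state.terminated.message`.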
Apr 1 13:56:51.550: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 13:56:51.624: INFO: namespace container-runtime-7425 deletion completed in 6.098275083s • [SLOW TEST:10.210 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 13:56:51.624: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 1 13:56:51.687: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 13:56:55.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2597" for this suite. Apr 1 13:57:45.743: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 13:57:45.819: INFO: namespace pods-2597 deletion completed in 50.086624553s • [SLOW TEST:54.195 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 13:57:45.820: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-1514ed87-c970-4647-8ddd-e99c4a833210 STEP: Creating a pod to test consume configMaps Apr 1 13:57:45.919: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-617a75c5-facd-400d-945f-468a6480c646" in namespace "projected-9468" to be "success or failure" Apr 1 13:57:45.936: 
INFO: Pod "pod-projected-configmaps-617a75c5-facd-400d-945f-468a6480c646": Phase="Pending", Reason="", readiness=false. Elapsed: 16.657211ms Apr 1 13:57:47.940: INFO: Pod "pod-projected-configmaps-617a75c5-facd-400d-945f-468a6480c646": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021008129s Apr 1 13:57:49.945: INFO: Pod "pod-projected-configmaps-617a75c5-facd-400d-945f-468a6480c646": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025661967s STEP: Saw pod success Apr 1 13:57:49.945: INFO: Pod "pod-projected-configmaps-617a75c5-facd-400d-945f-468a6480c646" satisfied condition "success or failure" Apr 1 13:57:49.948: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-617a75c5-facd-400d-945f-468a6480c646 container projected-configmap-volume-test: STEP: delete the pod Apr 1 13:57:49.980: INFO: Waiting for pod pod-projected-configmaps-617a75c5-facd-400d-945f-468a6480c646 to disappear Apr 1 13:57:49.990: INFO: Pod pod-projected-configmaps-617a75c5-facd-400d-945f-468a6480c646 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 13:57:49.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9468" for this suite. 
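[Editor's note] The "defaultMode set" variant above projects a ConfigMap through a `projected` volume and asserts the permission bits on each projected key. A sketch of the spec shape involved (pod, volume, and ConfigMap names are illustrative; the ConfigMap is assumed to already exist):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo           # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "ls -l /etc/cfg"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/cfg
  volumes:
  - name: cfg
    projected:
      defaultMode: 0400             # applied to every projected file
      sources:
      - configMap:
          name: my-config           # assumed to exist in the namespace
```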
Apr 1 13:57:56.034: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 13:57:56.113: INFO: namespace projected-9468 deletion completed in 6.118529381s
• [SLOW TEST:10.293 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 13:57:56.113: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on tmpfs
Apr 1 13:57:56.196: INFO: Waiting up to 5m0s for pod "pod-c3833bec-88a0-4f94-a9f7-790362ff54f2" in namespace "emptydir-9672" to be "success or failure"
Apr 1 13:57:56.200: INFO: Pod "pod-c3833bec-88a0-4f94-a9f7-790362ff54f2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.650029ms
Apr 1 13:57:58.204: INFO: Pod "pod-c3833bec-88a0-4f94-a9f7-790362ff54f2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008055579s
Apr 1 13:58:00.209: INFO: Pod "pod-c3833bec-88a0-4f94-a9f7-790362ff54f2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01268203s
STEP: Saw pod success
Apr 1 13:58:00.209: INFO: Pod "pod-c3833bec-88a0-4f94-a9f7-790362ff54f2" satisfied condition "success or failure"
Apr 1 13:58:00.212: INFO: Trying to get logs from node iruya-worker2 pod pod-c3833bec-88a0-4f94-a9f7-790362ff54f2 container test-container:
STEP: delete the pod
Apr 1 13:58:00.245: INFO: Waiting for pod pod-c3833bec-88a0-4f94-a9f7-790362ff54f2 to disappear
Apr 1 13:58:00.260: INFO: Pod pod-c3833bec-88a0-4f94-a9f7-790362ff54f2 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 13:58:00.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9672" for this suite.
Apr 1 13:58:06.294: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 13:58:06.370: INFO: namespace emptydir-9672 deletion completed in 6.089999862s
• [SLOW TEST:10.257 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 13:58:06.370: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name projected-secret-test-b2244f0a-f21a-43d8-8154-b9e962d6615a
STEP: Creating a pod to test consume secrets
Apr 1 13:58:06.472: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-288d7395-28f7-452a-bcd4-b7988abdd010" in namespace "projected-1929" to be "success or failure"
Apr 1 13:58:06.514: INFO: Pod "pod-projected-secrets-288d7395-28f7-452a-bcd4-b7988abdd010": Phase="Pending", Reason="", readiness=false. Elapsed: 41.929676ms
Apr 1 13:58:08.573: INFO: Pod "pod-projected-secrets-288d7395-28f7-452a-bcd4-b7988abdd010": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101330807s
Apr 1 13:58:10.577: INFO: Pod "pod-projected-secrets-288d7395-28f7-452a-bcd4-b7988abdd010": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.105316267s
STEP: Saw pod success
Apr 1 13:58:10.577: INFO: Pod "pod-projected-secrets-288d7395-28f7-452a-bcd4-b7988abdd010" satisfied condition "success or failure"
Apr 1 13:58:10.582: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-288d7395-28f7-452a-bcd4-b7988abdd010 container secret-volume-test:
STEP: delete the pod
Apr 1 13:58:10.625: INFO: Waiting for pod pod-projected-secrets-288d7395-28f7-452a-bcd4-b7988abdd010 to disappear
Apr 1 13:58:10.632: INFO: Pod pod-projected-secrets-288d7395-28f7-452a-bcd4-b7988abdd010 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 13:58:10.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1929" for this suite.
Apr 1 13:58:16.647: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 13:58:16.724: INFO: namespace projected-1929 deletion completed in 6.088416397s
• [SLOW TEST:10.354 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 13:58:16.725: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-secret-5h8m
STEP: Creating a pod to test atomic-volume-subpath
Apr 1 13:58:16.814: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-5h8m" in namespace "subpath-5092" to be "success or failure"
Apr 1 13:58:16.818: INFO: Pod "pod-subpath-test-secret-5h8m": Phase="Pending", Reason="", readiness=false. Elapsed: 3.707221ms
Apr 1 13:58:18.822: INFO: Pod "pod-subpath-test-secret-5h8m": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008032483s
Apr 1 13:58:20.827: INFO: Pod "pod-subpath-test-secret-5h8m": Phase="Running", Reason="", readiness=true. Elapsed: 4.012509413s
Apr 1 13:58:22.831: INFO: Pod "pod-subpath-test-secret-5h8m": Phase="Running", Reason="", readiness=true. Elapsed: 6.016677084s
Apr 1 13:58:24.834: INFO: Pod "pod-subpath-test-secret-5h8m": Phase="Running", Reason="", readiness=true. Elapsed: 8.019853248s
Apr 1 13:58:26.839: INFO: Pod "pod-subpath-test-secret-5h8m": Phase="Running", Reason="", readiness=true. Elapsed: 10.02424211s
Apr 1 13:58:28.843: INFO: Pod "pod-subpath-test-secret-5h8m": Phase="Running", Reason="", readiness=true. Elapsed: 12.02841735s
Apr 1 13:58:30.847: INFO: Pod "pod-subpath-test-secret-5h8m": Phase="Running", Reason="", readiness=true. Elapsed: 14.032898728s
Apr 1 13:58:32.852: INFO: Pod "pod-subpath-test-secret-5h8m": Phase="Running", Reason="", readiness=true. Elapsed: 16.037234028s
Apr 1 13:58:34.856: INFO: Pod "pod-subpath-test-secret-5h8m": Phase="Running", Reason="", readiness=true. Elapsed: 18.041237839s
Apr 1 13:58:36.860: INFO: Pod "pod-subpath-test-secret-5h8m": Phase="Running", Reason="", readiness=true. Elapsed: 20.045532978s
Apr 1 13:58:38.864: INFO: Pod "pod-subpath-test-secret-5h8m": Phase="Running", Reason="", readiness=true. Elapsed: 22.049651243s
Apr 1 13:58:40.868: INFO: Pod "pod-subpath-test-secret-5h8m": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.054101061s
STEP: Saw pod success
Apr 1 13:58:40.868: INFO: Pod "pod-subpath-test-secret-5h8m" satisfied condition "success or failure"
Apr 1 13:58:40.872: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-secret-5h8m container test-container-subpath-secret-5h8m:
STEP: delete the pod
Apr 1 13:58:40.901: INFO: Waiting for pod pod-subpath-test-secret-5h8m to disappear
Apr 1 13:58:40.913: INFO: Pod pod-subpath-test-secret-5h8m no longer exists
STEP: Deleting pod pod-subpath-test-secret-5h8m
Apr 1 13:58:40.913: INFO: Deleting pod "pod-subpath-test-secret-5h8m" in namespace "subpath-5092"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 13:58:40.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-5092" for this suite.
Apr 1 13:58:46.929: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 13:58:47.007: INFO: namespace subpath-5092 deletion completed in 6.088741501s
• [SLOW TEST:30.283 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
should support subpaths with secret pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 13:58:47.008: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Apr 1 13:58:47.106: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-6212,SelfLink:/api/v1/namespaces/watch-6212/configmaps/e2e-watch-test-resource-version,UID:75ab10a1-2413-44af-9531-ee03149169e7,ResourceVersion:3042153,Generation:0,CreationTimestamp:2020-04-01 13:58:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Apr 1 13:58:47.106: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-6212,SelfLink:/api/v1/namespaces/watch-6212/configmaps/e2e-watch-test-resource-version,UID:75ab10a1-2413-44af-9531-ee03149169e7,ResourceVersion:3042154,Generation:0,CreationTimestamp:2020-04-01 13:58:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 13:58:47.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-6212" for this suite.
Apr 1 13:58:53.120: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 13:58:53.200: INFO: namespace watch-6212 deletion completed in 6.089257446s
• [SLOW TEST:6.193 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should be able to start watching from a specific resource version [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 13:58:53.201: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0401 13:59:33.288976 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 1 13:59:33.289: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 13:59:33.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4218" for this suite.
Apr 1 13:59:41.308: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 13:59:41.381: INFO: namespace gc-4218 deletion completed in 8.088970101s
• [SLOW TEST:48.181 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should orphan pods created by rc if delete options say so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 13:59:41.382: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Apr 1 13:59:41.540: INFO: Waiting up to 5m0s for pod "downward-api-56d53e08-c154-4c8a-afcf-af42ec871897" in namespace "downward-api-4140" to be "success or failure"
Apr 1 13:59:41.549: INFO: Pod "downward-api-56d53e08-c154-4c8a-afcf-af42ec871897": Phase="Pending", Reason="", readiness=false. Elapsed: 8.511692ms
Apr 1 13:59:43.552: INFO: Pod "downward-api-56d53e08-c154-4c8a-afcf-af42ec871897": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01211513s
Apr 1 13:59:45.556: INFO: Pod "downward-api-56d53e08-c154-4c8a-afcf-af42ec871897": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015923276s
STEP: Saw pod success
Apr 1 13:59:45.556: INFO: Pod "downward-api-56d53e08-c154-4c8a-afcf-af42ec871897" satisfied condition "success or failure"
Apr 1 13:59:45.559: INFO: Trying to get logs from node iruya-worker pod downward-api-56d53e08-c154-4c8a-afcf-af42ec871897 container dapi-container:
STEP: delete the pod
Apr 1 13:59:45.600: INFO: Waiting for pod downward-api-56d53e08-c154-4c8a-afcf-af42ec871897 to disappear
Apr 1 13:59:45.609: INFO: Pod downward-api-56d53e08-c154-4c8a-afcf-af42ec871897 no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 13:59:45.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4140" for this suite.
Apr 1 13:59:51.624: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 13:59:51.705: INFO: namespace downward-api-4140 deletion completed in 6.093248375s
• [SLOW TEST:10.323 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
should provide host IP as an env var [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 13:59:51.705: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Apr 1 13:59:55.856: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 13:59:55.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5599" for this suite.
Apr 1 14:00:01.906: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 14:00:01.979: INFO: namespace container-runtime-5599 deletion completed in 6.087237749s
• [SLOW TEST:10.274 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
blackbox test
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
on terminated container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 14:00:01.980: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Apr 1 14:00:06.606: INFO: Successfully updated pod "annotationupdate3503cb80-0315-43c8-a974-e61b84835dd1"
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 14:00:08.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4866" for this suite.
Apr 1 14:00:30.638: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 14:00:30.718: INFO: namespace projected-4866 deletion completed in 22.092007676s
• [SLOW TEST:28.738 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 14:00:30.718: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179
[It] should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 14:00:30.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3116" for this suite.
Apr 1 14:00:52.864: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 14:00:52.941: INFO: namespace pods-3116 deletion completed in 22.0902875s
• [SLOW TEST:22.222 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
[k8s.io] Pods Set QOS Class
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 14:00:52.941: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 1 14:00:53.015: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5f1e50e1-f9b2-4654-b62b-3a0948f2e05a" in namespace "projected-9274" to be "success or failure"
Apr 1 14:00:53.019: INFO: Pod "downwardapi-volume-5f1e50e1-f9b2-4654-b62b-3a0948f2e05a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.343889ms
Apr 1 14:00:55.028: INFO: Pod "downwardapi-volume-5f1e50e1-f9b2-4654-b62b-3a0948f2e05a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012669385s
Apr 1 14:00:57.033: INFO: Pod "downwardapi-volume-5f1e50e1-f9b2-4654-b62b-3a0948f2e05a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017588427s
STEP: Saw pod success
Apr 1 14:00:57.033: INFO: Pod "downwardapi-volume-5f1e50e1-f9b2-4654-b62b-3a0948f2e05a" satisfied condition "success or failure"
Apr 1 14:00:57.036: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-5f1e50e1-f9b2-4654-b62b-3a0948f2e05a container client-container:
STEP: delete the pod
Apr 1 14:00:57.092: INFO: Waiting for pod downwardapi-volume-5f1e50e1-f9b2-4654-b62b-3a0948f2e05a to disappear
Apr 1 14:00:57.097: INFO: Pod downwardapi-volume-5f1e50e1-f9b2-4654-b62b-3a0948f2e05a no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 14:00:57.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9274" for this suite.
Apr 1 14:01:03.112: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 14:01:03.185: INFO: namespace projected-9274 deletion completed in 6.08457422s
• [SLOW TEST:10.244 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 14:01:03.185: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Apr 1 14:01:07.784: INFO: Successfully updated pod "labelsupdate3ed025c2-bf8f-42e3-84c4-7d2f5c57ca46"
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 14:01:09.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5912" for this suite.
Apr 1 14:01:31.845: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 14:01:31.919: INFO: namespace projected-5912 deletion completed in 22.099174425s
• [SLOW TEST:28.734 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-apps] ReplicationController should adopt matching pods on creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 14:01:31.919: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 14:01:37.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-7360" for this suite.
Apr 1 14:01:59.105: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 14:01:59.215: INFO: namespace replication-controller-7360 deletion completed in 22.128111367s
• [SLOW TEST:27.296 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should adopt matching pods on creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 14:01:59.216: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 1 14:01:59.290: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Apr 1 14:01:59.297: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 1 14:01:59.342: INFO: Number of nodes with available pods: 0
Apr 1 14:01:59.342: INFO: Node iruya-worker is running more than one daemon pod
Apr 1 14:02:00.347: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 1 14:02:00.350: INFO: Number of nodes with available pods: 0
Apr 1 14:02:00.350: INFO: Node iruya-worker is running more than one daemon pod
Apr 1 14:02:01.348: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 1 14:02:01.351: INFO: Number of nodes with available pods: 0
Apr 1 14:02:01.351: INFO: Node iruya-worker is running more than one daemon pod
Apr 1 14:02:02.347: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 1 14:02:02.351: INFO: Number of nodes with available pods: 0
Apr 1 14:02:02.351: INFO: Node iruya-worker is running more than one daemon pod
Apr 1 14:02:03.348: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 1 14:02:03.351: INFO: Number of nodes with available pods: 1
Apr 1 14:02:03.351: INFO: Node iruya-worker is running more than one daemon pod
Apr 1 14:02:04.348: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 1 14:02:04.351: INFO: Number of nodes with available pods: 2
Apr 1 14:02:04.351: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Apr 1 14:02:04.405: INFO: Wrong image for pod: daemon-set-qvxn4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 1 14:02:04.405: INFO: Wrong image for pod: daemon-set-xx6vr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 1 14:02:04.420: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 1 14:02:05.423: INFO: Wrong image for pod: daemon-set-qvxn4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 1 14:02:05.423: INFO: Wrong image for pod: daemon-set-xx6vr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 1 14:02:05.427: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 1 14:02:06.423: INFO: Wrong image for pod: daemon-set-qvxn4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 1 14:02:06.423: INFO: Wrong image for pod: daemon-set-xx6vr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 1 14:02:06.423: INFO: Pod daemon-set-xx6vr is not available
Apr 1 14:02:06.426: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 1 14:02:07.423: INFO: Wrong image for pod: daemon-set-qvxn4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 1 14:02:07.423: INFO: Wrong image for pod: daemon-set-xx6vr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 1 14:02:07.423: INFO: Pod daemon-set-xx6vr is not available
Apr 1 14:02:07.427: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 1 14:02:08.424: INFO: Wrong image for pod: daemon-set-qvxn4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 1 14:02:08.424: INFO: Wrong image for pod: daemon-set-xx6vr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 1 14:02:08.424: INFO: Pod daemon-set-xx6vr is not available
Apr 1 14:02:08.429: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 1 14:02:09.424: INFO: Wrong image for pod: daemon-set-qvxn4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 1 14:02:09.424: INFO: Wrong image for pod: daemon-set-xx6vr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 1 14:02:09.424: INFO: Pod daemon-set-xx6vr is not available
Apr 1 14:02:09.427: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 1 14:02:10.424: INFO: Wrong image for pod: daemon-set-qvxn4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 1 14:02:10.424: INFO: Wrong image for pod: daemon-set-xx6vr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 1 14:02:10.424: INFO: Pod daemon-set-xx6vr is not available
Apr 1 14:02:10.428: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 1 14:02:11.424: INFO: Wrong image for pod: daemon-set-qvxn4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 1 14:02:11.424: INFO: Wrong image for pod: daemon-set-xx6vr. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 1 14:02:11.424: INFO: Pod daemon-set-xx6vr is not available
Apr 1 14:02:11.428: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 1 14:02:12.424: INFO: Pod daemon-set-5c8wz is not available
Apr 1 14:02:12.424: INFO: Wrong image for pod: daemon-set-qvxn4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 1 14:02:12.427: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 1 14:02:13.424: INFO: Pod daemon-set-5c8wz is not available
Apr 1 14:02:13.424: INFO: Wrong image for pod: daemon-set-qvxn4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 1 14:02:13.428: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 1 14:02:14.423: INFO: Pod daemon-set-5c8wz is not available
Apr 1 14:02:14.423: INFO: Wrong image for pod: daemon-set-qvxn4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 1 14:02:14.427: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 1 14:02:15.427: INFO: Wrong image for pod: daemon-set-qvxn4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 1 14:02:15.465: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 1 14:02:16.424: INFO: Wrong image for pod: daemon-set-qvxn4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 1 14:02:16.427: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 1 14:02:17.424: INFO: Wrong image for pod: daemon-set-qvxn4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 1 14:02:17.424: INFO: Pod daemon-set-qvxn4 is not available
Apr 1 14:02:17.427: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 1 14:02:18.425: INFO: Wrong image for pod: daemon-set-qvxn4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 1 14:02:18.425: INFO: Pod daemon-set-qvxn4 is not available
Apr 1 14:02:18.429: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 1 14:02:19.425: INFO: Wrong image for pod: daemon-set-qvxn4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 1 14:02:19.425: INFO: Pod daemon-set-qvxn4 is not available
Apr 1 14:02:19.428: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 1 14:02:20.424: INFO: Wrong image for pod: daemon-set-qvxn4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 1 14:02:20.424: INFO: Pod daemon-set-qvxn4 is not available
Apr 1 14:02:20.429: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 1 14:02:21.425: INFO: Wrong image for pod: daemon-set-qvxn4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Apr 1 14:02:21.425: INFO: Pod daemon-set-qvxn4 is not available
Apr 1 14:02:21.429: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 1 14:02:22.424: INFO: Pod daemon-set-tz62l is not available
Apr 1 14:02:22.431: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
Apr 1 14:02:22.435: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 1 14:02:22.438: INFO: Number of nodes with available pods: 1
Apr 1 14:02:22.438: INFO: Node iruya-worker2 is running more than one daemon pod
Apr 1 14:02:23.458: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 1 14:02:23.461: INFO: Number of nodes with available pods: 1
Apr 1 14:02:23.461: INFO: Node iruya-worker2 is running more than one daemon pod
Apr 1 14:02:24.443: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 1 14:02:24.447: INFO: Number of nodes with available pods: 1
Apr 1 14:02:24.447: INFO: Node iruya-worker2 is running more than one daemon pod
Apr 1 14:02:25.443: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 1 14:02:25.447: INFO: Number of nodes with available pods: 2
Apr 1 14:02:25.447: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-700, will wait for the garbage collector to delete the pods
Apr 1 14:02:25.524: INFO: Deleting DaemonSet.extensions daemon-set took: 6.722351ms
Apr 1 14:02:25.824: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.27771ms
Apr 1 14:02:28.540: INFO: Number of nodes with available pods: 0
Apr 1 14:02:28.540: INFO: Number of running nodes: 0, number of available pods: 0
Apr 1 14:02:28.543: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-700/daemonsets","resourceVersion":"3043020"},"items":null}
Apr 1 14:02:28.545: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-700/pods","resourceVersion":"3043020"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 14:02:28.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-700" for this suite.
Apr 1 14:02:34.587: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 14:02:34.676: INFO: namespace daemonsets-700 deletion completed in 6.120705291s
• [SLOW TEST:35.461 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 14:02:34.677: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Apr 1 14:02:34.760: INFO: Waiting up to 5m0s for pod "pod-f83a727c-3c75-4d7e-b945-a18ba8b6943a" in namespace "emptydir-3969" to be "success or failure"
Apr 1 14:02:34.781: INFO: Pod "pod-f83a727c-3c75-4d7e-b945-a18ba8b6943a": Phase="Pending", Reason="", readiness=false. Elapsed: 21.627837ms
Apr 1 14:02:36.829: INFO: Pod "pod-f83a727c-3c75-4d7e-b945-a18ba8b6943a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069045775s
Apr 1 14:02:38.833: INFO: Pod "pod-f83a727c-3c75-4d7e-b945-a18ba8b6943a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.073706434s
STEP: Saw pod success
Apr 1 14:02:38.833: INFO: Pod "pod-f83a727c-3c75-4d7e-b945-a18ba8b6943a" satisfied condition "success or failure"
Apr 1 14:02:38.836: INFO: Trying to get logs from node iruya-worker2 pod pod-f83a727c-3c75-4d7e-b945-a18ba8b6943a container test-container:
STEP: delete the pod
Apr 1 14:02:38.949: INFO: Waiting for pod pod-f83a727c-3c75-4d7e-b945-a18ba8b6943a to disappear
Apr 1 14:02:38.952: INFO: Pod pod-f83a727c-3c75-4d7e-b945-a18ba8b6943a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 14:02:38.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3969" for this suite.
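The emptyDir permission tests above can be reproduced with a pod along these lines; the pod name, image, and command are assumptions, since the log only shows the container name `test-container` and the step "emptydir 0777 on node default medium":

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-perms          # hypothetical name; the test generates a UUID
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                  # image choice is an assumption
    # print the mount's mode bits, then write a file and print its mode
    command: ["sh", "-c", "ls -ld /test-volume && echo -n test > /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                    # default medium = node-local disk; set medium: Memory for tmpfs
```

The test then reads the container's logs (as the "Trying to get logs" entry shows) and checks that the printed mode matches the expected 0777/0666 permissions for the mounted directory.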
Apr 1 14:02:45.032: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 14:02:45.198: INFO: namespace emptydir-3969 deletion completed in 6.232691437s
• [SLOW TEST:10.521 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 14:02:45.198: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-f28e9dc6-19f5-40bc-a939-d909433587d4
STEP: Creating secret with name s-test-opt-upd-7de881a9-688f-4492-b3d3-a2111ea399a8
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-f28e9dc6-19f5-40bc-a939-d909433587d4
STEP: Updating secret s-test-opt-upd-7de881a9-688f-4492-b3d3-a2111ea399a8
STEP: Creating secret with name s-test-opt-create-c55eafca-a086-49c1-bc13-58b10ad4b5c2
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
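The projected-secret steps above hinge on `optional: true` sources: the volume tolerates a referenced secret being deleted or not yet existing. A sketch of such a volume, with shortened secret names standing in for the UUID-suffixed names in the log (pod name, image, and mount path are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets       # hypothetical name
spec:
  containers:
  - name: projected-secret-test
    image: busybox                  # image choice is an assumption
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: s-test-opt-del      # deleting this secret must not break the pod
          optional: true
      - secret:
          name: s-test-opt-upd      # updates to this secret should appear in the volume
          optional: true
      - secret:
          name: s-test-opt-create   # may not exist yet; picked up once created
          optional: true
```

The kubelet refreshes projected volume contents periodically, which is why the test spends most of its 104 seconds in "waiting to observe update in volume".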
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 14:04:07.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5321" for this suite.
Apr 1 14:04:29.748: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 14:04:29.825: INFO: namespace projected-5321 deletion completed in 22.091756863s
• [SLOW TEST:104.627 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 14:04:29.825: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-0dfaa39d-6d97-4313-a21a-0cac8cd9b84b
STEP: Creating a pod to test consume configMaps
Apr 1 14:04:29.930: INFO: Waiting up to 5m0s for pod "pod-configmaps-55decfc2-4fa9-42e4-aeed-87f7881be6ac" in namespace "configmap-8532" to be "success or failure"
Apr 1 14:04:29.933: INFO: Pod "pod-configmaps-55decfc2-4fa9-42e4-aeed-87f7881be6ac": Phase="Pending", Reason="", readiness=false. Elapsed: 3.752862ms
Apr 1 14:04:32.014: INFO: Pod "pod-configmaps-55decfc2-4fa9-42e4-aeed-87f7881be6ac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.084726995s
Apr 1 14:04:34.018: INFO: Pod "pod-configmaps-55decfc2-4fa9-42e4-aeed-87f7881be6ac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.088438477s
STEP: Saw pod success
Apr 1 14:04:34.018: INFO: Pod "pod-configmaps-55decfc2-4fa9-42e4-aeed-87f7881be6ac" satisfied condition "success or failure"
Apr 1 14:04:34.021: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-55decfc2-4fa9-42e4-aeed-87f7881be6ac container configmap-volume-test:
STEP: delete the pod
Apr 1 14:04:34.037: INFO: Waiting for pod pod-configmaps-55decfc2-4fa9-42e4-aeed-87f7881be6ac to disappear
Apr 1 14:04:34.042: INFO: Pod pod-configmaps-55decfc2-4fa9-42e4-aeed-87f7881be6ac no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 14:04:34.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8532" for this suite.
Apr 1 14:04:40.091: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 14:04:40.172: INFO: namespace configmap-8532 deletion completed in 6.126520584s
• [SLOW TEST:10.346 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 14:04:40.172: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 14:04:44.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2728" for this suite.
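The "busybox command that always fails" scenario above boils down to running a pod whose container exits non-zero and asserting that its status carries a terminated state with a reason. A sketch, with the pod name and command as assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: bin-false                   # hypothetical name; the test generates one
spec:
  restartPolicy: Never
  containers:
  - name: bin-false
    image: busybox
    command: ["/bin/false"]         # always exits with status 1
```

After the container exits, something like `kubectl get pod bin-false -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}'` should print a non-empty reason (typically `Error`), which is the condition the test asserts.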
Apr 1 14:04:50.271: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 14:04:50.349: INFO: namespace kubelet-test-2728 deletion completed in 6.094324745s
• [SLOW TEST:10.177 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
when scheduling a busybox command that always fails in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
should have an terminated reason [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 14:04:50.350: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check is all data is printed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 1 14:04:50.393: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Apr 1 14:04:50.557: INFO: stderr: ""
Apr 1 14:04:50.557: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.10\", GitCommit:\"1bea6c00a7055edef03f1d4bb58b773fa8917f11\", GitTreeState:\"clean\", BuildDate:\"2020-03-18T15:12:55Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T00:28:37Z\", GoVersion:\"go1.12.12\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 14:04:50.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7564" for this suite.
Apr 1 14:04:56.583: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 14:04:56.674: INFO: namespace kubectl-7564 deletion completed in 6.106994227s
• [SLOW TEST:6.325 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl version
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should check is all data is printed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 14:04:56.675: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on node default medium Apr 1 14:04:56.742: INFO: Waiting up to 5m0s for pod "pod-2f732bfd-c5f3-4100-ad8a-a561384f3c2e" in namespace "emptydir-4159" to be "success or failure" Apr 1 14:04:56.764: INFO: Pod "pod-2f732bfd-c5f3-4100-ad8a-a561384f3c2e": Phase="Pending", Reason="", readiness=false. Elapsed: 22.260624ms Apr 1 14:04:58.768: INFO: Pod "pod-2f732bfd-c5f3-4100-ad8a-a561384f3c2e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026317453s Apr 1 14:05:00.773: INFO: Pod "pod-2f732bfd-c5f3-4100-ad8a-a561384f3c2e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031133652s STEP: Saw pod success Apr 1 14:05:00.773: INFO: Pod "pod-2f732bfd-c5f3-4100-ad8a-a561384f3c2e" satisfied condition "success or failure" Apr 1 14:05:00.776: INFO: Trying to get logs from node iruya-worker2 pod pod-2f732bfd-c5f3-4100-ad8a-a561384f3c2e container test-container: STEP: delete the pod Apr 1 14:05:00.798: INFO: Waiting for pod pod-2f732bfd-c5f3-4100-ad8a-a561384f3c2e to disappear Apr 1 14:05:00.812: INFO: Pod pod-2f732bfd-c5f3-4100-ad8a-a561384f3c2e no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 14:05:00.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4159" for this suite. 
Apr 1 14:05:06.830: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 14:05:06.912: INFO: namespace emptydir-4159 deletion completed in 6.097177622s
• [SLOW TEST:10.237 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 14:05:06.912: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on node default medium
Apr 1 14:05:06.972: INFO: Waiting up to 5m0s for pod "pod-4fe80839-df54-429f-97a3-8678d888163a" in namespace "emptydir-2642" to be "success or failure"
Apr 1 14:05:06.976: INFO: Pod "pod-4fe80839-df54-429f-97a3-8678d888163a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.292045ms
Apr 1 14:05:08.980: INFO: Pod "pod-4fe80839-df54-429f-97a3-8678d888163a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007943641s
Apr 1 14:05:10.984: INFO: Pod "pod-4fe80839-df54-429f-97a3-8678d888163a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011977509s
STEP: Saw pod success
Apr 1 14:05:10.984: INFO: Pod "pod-4fe80839-df54-429f-97a3-8678d888163a" satisfied condition "success or failure"
Apr 1 14:05:10.987: INFO: Trying to get logs from node iruya-worker pod pod-4fe80839-df54-429f-97a3-8678d888163a container test-container:
STEP: delete the pod
Apr 1 14:05:11.023: INFO: Waiting for pod pod-4fe80839-df54-429f-97a3-8678d888163a to disappear
Apr 1 14:05:11.041: INFO: Pod pod-4fe80839-df54-429f-97a3-8678d888163a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 14:05:11.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2642" for this suite.
Apr 1 14:05:17.058: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 14:05:17.139: INFO: namespace emptydir-2642 deletion completed in 6.095411037s
• [SLOW TEST:10.227 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 14:05:17.140: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 1 14:05:17.188: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Apr 1 14:05:19.232: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 14:05:20.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-2533" for this suite.
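(Aside: the quota test above checks that exceeding the pod quota is surfaced as a failure condition on the rc's status. As a rough illustration only, not the e2e framework's Go code, the check amounts to scanning `status.conditions` for a `ReplicaFailure` entry; the status dict below is a hypothetical example.)

```python
# Sketch (hypothetical data, not from this run): find the ReplicaFailure
# condition that a quota-exceeded failure sets on an rc's status.
def replica_failure(rc_status):
    """Return the ReplicaFailure condition if its status is "True", else None."""
    for cond in rc_status.get("conditions", []):
        if cond.get("type") == "ReplicaFailure" and cond.get("status") == "True":
            return cond
    return None

status = {
    "replicas": 2,
    "conditions": [{
        "type": "ReplicaFailure",
        "status": "True",
        "reason": "FailedCreate",
        "message": 'pods "condition-test-xxxxx" is forbidden: exceeded quota',
    }],
}
print(replica_failure(status)["reason"])  # FailedCreate
```

After the rc is scaled down to fit within the quota, the controller clears this condition, which is what the final "has no failure condition set" step verifies.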
Apr 1 14:05:26.318: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 14:05:26.392: INFO: namespace replication-controller-2533 deletion completed in 6.128492584s
• [SLOW TEST:9.252 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should surface a failure condition on a common issue like exceeded quota [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 14:05:26.392: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's command
Apr 1 14:05:26.484: INFO: Waiting up to 5m0s for pod "var-expansion-70b36509-81f0-40d2-bdd1-ae523826510a" in namespace "var-expansion-1429" to be "success or failure"
Apr 1 14:05:26.528: INFO: Pod "var-expansion-70b36509-81f0-40d2-bdd1-ae523826510a": Phase="Pending", Reason="", readiness=false. Elapsed: 44.610667ms
Apr 1 14:05:28.573: INFO: Pod "var-expansion-70b36509-81f0-40d2-bdd1-ae523826510a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089270664s
Apr 1 14:05:30.577: INFO: Pod "var-expansion-70b36509-81f0-40d2-bdd1-ae523826510a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.093540292s
STEP: Saw pod success
Apr 1 14:05:30.577: INFO: Pod "var-expansion-70b36509-81f0-40d2-bdd1-ae523826510a" satisfied condition "success or failure"
Apr 1 14:05:30.580: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-70b36509-81f0-40d2-bdd1-ae523826510a container dapi-container:
STEP: delete the pod
Apr 1 14:05:30.614: INFO: Waiting for pod var-expansion-70b36509-81f0-40d2-bdd1-ae523826510a to disappear
Apr 1 14:05:30.623: INFO: Pod var-expansion-70b36509-81f0-40d2-bdd1-ae523826510a no longer exists
[AfterEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 14:05:30.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-1429" for this suite.
Apr 1 14:05:36.675: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 14:05:36.753: INFO: namespace var-expansion-1429 deletion completed in 6.126523099s
• [SLOW TEST:10.361 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should allow substituting values in a container's command [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 14:05:36.754: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-d0291f89-69e3-4e2f-afcf-238a384f35bb
STEP: Creating a pod to test consume secrets
Apr 1 14:05:36.842: INFO: Waiting up to 5m0s for pod "pod-secrets-ee79140a-f9be-4d42-a79b-4df0009c7d56" in namespace "secrets-5489" to be "success or failure"
Apr 1 14:05:36.845: INFO: Pod "pod-secrets-ee79140a-f9be-4d42-a79b-4df0009c7d56": Phase="Pending", Reason="", readiness=false. Elapsed: 3.855253ms
Apr 1 14:05:38.850: INFO: Pod "pod-secrets-ee79140a-f9be-4d42-a79b-4df0009c7d56": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008211004s
Apr 1 14:05:40.854: INFO: Pod "pod-secrets-ee79140a-f9be-4d42-a79b-4df0009c7d56": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012680936s
STEP: Saw pod success
Apr 1 14:05:40.854: INFO: Pod "pod-secrets-ee79140a-f9be-4d42-a79b-4df0009c7d56" satisfied condition "success or failure"
Apr 1 14:05:40.857: INFO: Trying to get logs from node iruya-worker pod pod-secrets-ee79140a-f9be-4d42-a79b-4df0009c7d56 container secret-volume-test:
STEP: delete the pod
Apr 1 14:05:40.890: INFO: Waiting for pod pod-secrets-ee79140a-f9be-4d42-a79b-4df0009c7d56 to disappear
Apr 1 14:05:40.893: INFO: Pod pod-secrets-ee79140a-f9be-4d42-a79b-4df0009c7d56 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 14:05:40.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5489" for this suite.
Apr 1 14:05:46.909: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 14:05:46.992: INFO: namespace secrets-5489 deletion completed in 6.095433779s
• [SLOW TEST:10.238 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 14:05:46.993: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be submitted and removed [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Apr 1 14:05:47.051: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Apr 1 14:05:56.124: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 14:05:56.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2730" for this suite.
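(Aside: the "submitted and removed" test above sets up a watch and then asserts that both the pod's creation and its graceful deletion were observed as events. A minimal sketch of that ordering check, on hypothetical event tuples rather than real watch objects:)

```python
# Sketch (hypothetical data): verify that, in a stream of watch events for
# one pod, an ADDED event was seen and a DELETED event was seen after it.
def observed_lifecycle(events):
    """events: list of (event_type, pod_name) tuples in arrival order."""
    added = deleted = None
    for i, (etype, _name) in enumerate(events):
        if etype == "ADDED" and added is None:
            added = i
        if etype == "DELETED":
            deleted = i
    return added is not None and deleted is not None and added < deleted

events = [("ADDED", "pod-x"), ("MODIFIED", "pod-x"), ("DELETED", "pod-x")]
print(observed_lifecycle(events))  # True
```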
Apr 1 14:06:02.144: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 14:06:02.251: INFO: namespace pods-2730 deletion completed in 6.11928167s
• [SLOW TEST:15.258 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be submitted and removed [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 14:06:02.251: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-c04afebe-9196-4b35-b4fe-cb16d3eb2522
STEP: Creating a pod to test consume secrets
Apr 1 14:06:02.310: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8323760c-6a7c-4b56-adc7-57a9d4d6a0f1" in namespace "projected-1034" to be "success or failure"
Apr 1 14:06:02.315: INFO: Pod "pod-projected-secrets-8323760c-6a7c-4b56-adc7-57a9d4d6a0f1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.295396ms
Apr 1 14:06:04.319: INFO: Pod "pod-projected-secrets-8323760c-6a7c-4b56-adc7-57a9d4d6a0f1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008025458s
Apr 1 14:06:06.328: INFO: Pod "pod-projected-secrets-8323760c-6a7c-4b56-adc7-57a9d4d6a0f1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017598502s
STEP: Saw pod success
Apr 1 14:06:06.328: INFO: Pod "pod-projected-secrets-8323760c-6a7c-4b56-adc7-57a9d4d6a0f1" satisfied condition "success or failure"
Apr 1 14:06:06.331: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-8323760c-6a7c-4b56-adc7-57a9d4d6a0f1 container projected-secret-volume-test:
STEP: delete the pod
Apr 1 14:06:06.370: INFO: Waiting for pod pod-projected-secrets-8323760c-6a7c-4b56-adc7-57a9d4d6a0f1 to disappear
Apr 1 14:06:06.381: INFO: Pod pod-projected-secrets-8323760c-6a7c-4b56-adc7-57a9d4d6a0f1 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 14:06:06.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1034" for this suite.
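(Aside: the projected-secret test above mounts keys with an explicit item mode. In the Kubernetes API, `defaultMode` and per-item `mode` are plain integers, so manifests written in JSON carry the decimal value of the familiar octal file mode. A tiny helper to illustrate the conversion:)

```python
# Octal file modes as they appear in manifests vs. the decimal integers the
# Kubernetes API stores for secret/configMap volume defaultMode and item mode.
def mode_to_decimal(octal_str):
    """Convert an octal mode string like '0400' to its decimal integer."""
    return int(octal_str, 8)

print(mode_to_decimal("0400"))  # 256
print(mode_to_decimal("0644"))  # 420
```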
Apr 1 14:06:12.397: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 14:06:12.477: INFO: namespace projected-1034 deletion completed in 6.092193761s
• [SLOW TEST:10.226 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 14:06:12.477: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-1cf8ed70-1e9f-463f-97f3-8a4f3f85b260
STEP: Creating a pod to test consume configMaps
Apr 1 14:06:12.549: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1713c830-d4a5-42e4-8c60-2c73cda7ee59" in namespace "projected-8487" to be "success or failure"
Apr 1 14:06:12.553: INFO: Pod "pod-projected-configmaps-1713c830-d4a5-42e4-8c60-2c73cda7ee59": Phase="Pending", Reason="", readiness=false. Elapsed: 3.430697ms
Apr 1 14:06:14.557: INFO: Pod "pod-projected-configmaps-1713c830-d4a5-42e4-8c60-2c73cda7ee59": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008256723s
Apr 1 14:06:16.562: INFO: Pod "pod-projected-configmaps-1713c830-d4a5-42e4-8c60-2c73cda7ee59": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012412668s
STEP: Saw pod success
Apr 1 14:06:16.562: INFO: Pod "pod-projected-configmaps-1713c830-d4a5-42e4-8c60-2c73cda7ee59" satisfied condition "success or failure"
Apr 1 14:06:16.565: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-1713c830-d4a5-42e4-8c60-2c73cda7ee59 container projected-configmap-volume-test:
STEP: delete the pod
Apr 1 14:06:16.590: INFO: Waiting for pod pod-projected-configmaps-1713c830-d4a5-42e4-8c60-2c73cda7ee59 to disappear
Apr 1 14:06:16.594: INFO: Pod pod-projected-configmaps-1713c830-d4a5-42e4-8c60-2c73cda7ee59 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 14:06:16.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8487" for this suite.
Apr 1 14:06:22.626: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 14:06:22.706: INFO: namespace projected-8487 deletion completed in 6.107719278s
• [SLOW TEST:10.229 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should serve multiport endpoints from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 14:06:22.707: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve multiport endpoints from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service multi-endpoint-test in namespace services-9297
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9297 to expose endpoints map[]
Apr 1 14:06:22.826: INFO: Get endpoints failed (10.761319ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Apr 1 14:06:23.828: INFO: successfully validated that service multi-endpoint-test in namespace services-9297 exposes endpoints map[] (1.013577635s elapsed)
STEP: Creating pod pod1 in namespace services-9297
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9297 to expose endpoints map[pod1:[100]]
Apr 1 14:06:26.912: INFO: successfully validated that service multi-endpoint-test in namespace services-9297 exposes endpoints map[pod1:[100]] (3.07703163s elapsed)
STEP: Creating pod pod2 in namespace services-9297
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9297 to expose endpoints map[pod1:[100] pod2:[101]]
Apr 1 14:06:30.041: INFO: successfully validated that service multi-endpoint-test in namespace services-9297 exposes endpoints map[pod1:[100] pod2:[101]] (3.126062171s elapsed)
STEP: Deleting pod pod1 in namespace services-9297
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9297 to expose endpoints map[pod2:[101]]
Apr 1 14:06:31.068: INFO: successfully validated that service multi-endpoint-test in namespace services-9297 exposes endpoints map[pod2:[101]] (1.022628571s elapsed)
STEP: Deleting pod pod2 in namespace services-9297
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9297 to expose endpoints map[]
Apr 1 14:06:32.161: INFO: successfully validated that service multi-endpoint-test in namespace services-9297 exposes endpoints map[] (1.087107311s elapsed)
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 14:06:32.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9297" for this suite.
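(Aside: the "waiting up to 3m0s ... to expose endpoints map[...]" lines above are a poll-until-match loop: the test repeatedly reads the service's Endpoints object and compares it against the expected pod-to-port map. A generic sketch of that loop, where `fetch` is a hypothetical stand-in for querying the Endpoints object:)

```python
import time

# Generic poll-until-match sketch (not the e2e framework's Go code).
def wait_for_endpoints(fetch, expected, timeout=180.0, interval=1.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll fetch() until it equals `expected` or `timeout` seconds elapse."""
    deadline = clock() + timeout
    while clock() < deadline:
        if fetch() == expected:
            return True
        sleep(interval)
    return False

# Usage with a fake fetcher that converges after a few polls:
states = iter([{}, {}, {"pod1": [100]}])
ok = wait_for_endpoints(lambda: next(states), {"pod1": [100]},
                        timeout=5, interval=0, sleep=lambda s: None)
print(ok)  # True
```

Injecting `clock` and `sleep` keeps the helper deterministic under test; the real framework uses a fixed poll interval against the API server instead.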
Apr 1 14:06:54.384: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 14:06:54.476: INFO: namespace services-9297 deletion completed in 22.110708468s
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92
• [SLOW TEST:31.769 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should serve multiport endpoints from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] Pods should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 14:06:54.476: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating pod
Apr 1 14:06:58.579: INFO: Pod pod-hostip-fd0bc1f7-1f87-470f-872d-d6e988afcc66 has hostIP: 172.17.0.6
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 14:06:58.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7235" for this suite.
Apr 1 14:07:20.594: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 14:07:20.666: INFO: namespace pods-7235 deletion completed in 22.082424844s
• [SLOW TEST:26.190 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 14:07:20.666: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-2937/configmap-test-c2135b9e-c349-40ce-9a26-159e0a226c02
STEP: Creating a pod to test consume configMaps
Apr 1 14:07:20.761: INFO: Waiting up to 5m0s for pod "pod-configmaps-10e8bcfd-e641-4216-8739-c1cbbb6e0b8d" in namespace "configmap-2937" to be "success or failure"
Apr 1 14:07:20.766: INFO: Pod "pod-configmaps-10e8bcfd-e641-4216-8739-c1cbbb6e0b8d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.332738ms
Apr 1 14:07:22.770: INFO: Pod "pod-configmaps-10e8bcfd-e641-4216-8739-c1cbbb6e0b8d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008640864s
Apr 1 14:07:24.774: INFO: Pod "pod-configmaps-10e8bcfd-e641-4216-8739-c1cbbb6e0b8d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012967664s
STEP: Saw pod success
Apr 1 14:07:24.774: INFO: Pod "pod-configmaps-10e8bcfd-e641-4216-8739-c1cbbb6e0b8d" satisfied condition "success or failure"
Apr 1 14:07:24.777: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-10e8bcfd-e641-4216-8739-c1cbbb6e0b8d container env-test:
STEP: delete the pod
Apr 1 14:07:24.798: INFO: Waiting for pod pod-configmaps-10e8bcfd-e641-4216-8739-c1cbbb6e0b8d to disappear
Apr 1 14:07:24.801: INFO: Pod pod-configmaps-10e8bcfd-e641-4216-8739-c1cbbb6e0b8d no longer exists
[AfterEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 14:07:24.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2937" for this suite.
Apr 1 14:07:30.836: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 14:07:30.920: INFO: namespace configmap-2937 deletion completed in 6.114976801s
• [SLOW TEST:10.254 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 14:07:30.921: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 14:07:36.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-9295" for this suite.
Apr 1 14:07:42.530: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 14:07:42.632: INFO: namespace watch-9295 deletion completed in 6.204243049s
• [SLOW TEST:11.711 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should receive events on concurrent watches in same order [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-network] Services should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 14:07:42.632: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service endpoint-test2 in namespace services-3238
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3238 to expose endpoints map[]
Apr 1 14:07:42.753: INFO: successfully validated that service endpoint-test2 in namespace services-3238 exposes endpoints map[] (28.026523ms elapsed)
STEP: Creating pod pod1 in namespace services-3238
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3238 to expose endpoints map[pod1:[80]]
Apr 1 14:07:46.819: INFO: successfully validated that service endpoint-test2 in namespace services-3238 exposes endpoints map[pod1:[80]] (4.060067839s elapsed)
STEP: Creating pod pod2 in namespace services-3238
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3238 to expose endpoints map[pod1:[80] pod2:[80]]
Apr 1 14:07:49.936: INFO: successfully validated that service endpoint-test2 in namespace services-3238 exposes endpoints map[pod1:[80] pod2:[80]] (3.111921221s elapsed)
STEP: Deleting pod pod1 in namespace services-3238
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3238 to expose endpoints map[pod2:[80]]
Apr 1 14:07:50.963: INFO: successfully validated that service endpoint-test2 in namespace services-3238 exposes endpoints map[pod2:[80]] (1.023025768s elapsed)
STEP: Deleting pod pod2 in namespace services-3238
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3238 to expose endpoints map[]
Apr 1 14:07:51.973: INFO: successfully validated that service endpoint-test2 in namespace services-3238 exposes endpoints map[] (1.006292266s elapsed)
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 14:07:52.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3238" for this suite.
Apr 1 14:08:14.027: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 14:08:14.105: INFO: namespace services-3238 deletion completed in 22.090329516s
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92
• [SLOW TEST:31.473 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 14:08:14.105: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-611
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Apr 1 14:08:14.185: INFO: Found 0 stateful pods, waiting for 3
Apr 1 14:08:24.191: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Apr 1 14:08:24.191: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Apr 1 14:08:24.191: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Apr 1 14:08:24.219: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Apr 1 14:08:34.260: INFO: Updating stateful set ss2
Apr 1 14:08:34.282: INFO: Waiting for Pod statefulset-611/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Apr 1 14:08:44.432: INFO: Found 2 stateful pods, waiting for 3
Apr 1 14:08:54.462: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Apr 1 14:08:54.462: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Apr 1 14:08:54.462: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Apr 1 14:08:54.484: INFO: Updating stateful set ss2
Apr 1 14:08:54.511: INFO: Waiting for Pod statefulset-611/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Apr 1 14:09:04.537: INFO: Updating stateful set ss2
Apr 1 14:09:04.568: INFO: Waiting for StatefulSet statefulset-611/ss2 to complete update
Apr 1 14:09:04.568: INFO: Waiting for Pod statefulset-611/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Apr 1 14:09:14.576: INFO: Waiting for StatefulSet statefulset-611/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Apr 1 14:09:24.577: INFO: Deleting all statefulset in ns statefulset-611
Apr 1 14:09:24.580: INFO: Scaling statefulset ss2 to 0
Apr 1 14:09:44.612: INFO: Waiting for statefulset status.replicas updated to 0
Apr 1 14:09:44.615: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 14:09:44.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-611" for this suite.
Apr 1 14:09:50.645: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 14:09:50.725: INFO: namespace statefulset-611 deletion completed in 6.089718143s
• [SLOW TEST:96.620 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should perform canary updates and phased rolling updates of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a
kubernetes client Apr 1 14:09:50.726: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Apr 1 14:09:58.863: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 1 14:09:58.882: INFO: Pod pod-with-prestop-exec-hook still exists Apr 1 14:10:00.882: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 1 14:10:00.887: INFO: Pod pod-with-prestop-exec-hook still exists Apr 1 14:10:02.882: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 1 14:10:02.887: INFO: Pod pod-with-prestop-exec-hook still exists Apr 1 14:10:04.882: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 1 14:10:04.894: INFO: Pod pod-with-prestop-exec-hook still exists Apr 1 14:10:06.882: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 1 14:10:06.886: INFO: Pod pod-with-prestop-exec-hook still exists Apr 1 14:10:08.882: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 1 14:10:08.887: INFO: Pod pod-with-prestop-exec-hook still exists Apr 1 14:10:10.882: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 1 14:10:10.887: INFO: Pod pod-with-prestop-exec-hook still exists Apr 1 14:10:12.882: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 1 14:10:12.886: INFO: Pod pod-with-prestop-exec-hook still exists Apr 1 14:10:14.882: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear 
Apr 1 14:10:14.895: INFO: Pod pod-with-prestop-exec-hook still exists Apr 1 14:10:16.882: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 1 14:10:16.886: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 14:10:16.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-2231" for this suite. Apr 1 14:10:38.910: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 14:10:38.996: INFO: namespace container-lifecycle-hook-2231 deletion completed in 22.098952064s • [SLOW TEST:48.270 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 14:10:38.996: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-b4293bfd-9754-4b5a-b027-7e8509388df8 in namespace container-probe-7744 Apr 1 14:10:43.091: INFO: Started pod busybox-b4293bfd-9754-4b5a-b027-7e8509388df8 in namespace container-probe-7744 STEP: checking the pod's current state and verifying that restartCount is present Apr 1 14:10:43.094: INFO: Initial restart count of pod busybox-b4293bfd-9754-4b5a-b027-7e8509388df8 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 14:14:44.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7744" for this suite. 
Apr 1 14:14:50.076: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 14:14:50.155: INFO: namespace container-probe-7744 deletion completed in 6.117881932s • [SLOW TEST:251.158 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 14:14:50.155: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 14:14:50.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3105" for this suite. 
Apr 1 14:14:56.264: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 14:14:56.335: INFO: namespace services-3105 deletion completed in 6.091359309s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:6.180 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 14:14:56.336: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 1 14:14:56.407: INFO: (0) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/ pods/ (200; 5.56315ms)
Apr 1 14:14:56.410: INFO: (1) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.217509ms)
Apr 1 14:14:56.414: INFO: (2) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.296948ms)
Apr 1 14:14:56.417: INFO: (3) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.341704ms)
Apr 1 14:14:56.420: INFO: (4) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.968885ms)
Apr 1 14:14:56.450: INFO: (5) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 30.208456ms)
Apr 1 14:14:56.473: INFO: (6) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 22.871631ms)
Apr 1 14:14:56.476: INFO: (7) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.253061ms)
Apr 1 14:14:56.480: INFO: (8) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.334065ms)
Apr 1 14:14:56.483: INFO: (9) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.295007ms)
Apr 1 14:14:56.486: INFO: (10) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.362508ms)
Apr 1 14:14:56.489: INFO: (11) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.791689ms)
Apr 1 14:14:56.492: INFO: (12) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.931805ms)
Apr 1 14:14:56.495: INFO: (13) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.921556ms)
Apr 1 14:14:56.498: INFO: (14) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.062358ms)
Apr 1 14:14:56.501: INFO: (15) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.971007ms)
Apr 1 14:14:56.504: INFO: (16) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.141137ms)
Apr 1 14:14:56.508: INFO: (17) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.102446ms)
Apr 1 14:14:56.511: INFO: (18) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.975468ms)
Apr 1 14:14:56.514: INFO: (19) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/
(200; 3.067378ms)
[AfterEach] version v1
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 14:14:56.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-8713" for this suite.
Apr 1 14:15:02.550: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 14:15:02.675: INFO: namespace proxy-8713 deletion completed in 6.158476334s
• [SLOW TEST:6.340 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
version v1
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
should proxy logs on node using proxy subresource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 14:15:02.676: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Apr 1 14:15:02.715: INFO: Waiting up to 5m0s for pod "pod-875ed92e-6334-4899-9194-2f06de6a702e" in namespace "emptydir-7607" to be "success or failure"
Apr 1 14:15:02.728: INFO: Pod "pod-875ed92e-6334-4899-9194-2f06de6a702e": Phase="Pending", Reason="", readiness=false. Elapsed: 13.491246ms
Apr 1 14:15:04.743: INFO: Pod "pod-875ed92e-6334-4899-9194-2f06de6a702e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027774748s
Apr 1 14:15:06.747: INFO: Pod "pod-875ed92e-6334-4899-9194-2f06de6a702e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031922558s
STEP: Saw pod success
Apr 1 14:15:06.747: INFO: Pod "pod-875ed92e-6334-4899-9194-2f06de6a702e" satisfied condition "success or failure"
Apr 1 14:15:06.750: INFO: Trying to get logs from node iruya-worker2 pod pod-875ed92e-6334-4899-9194-2f06de6a702e container test-container:
STEP: delete the pod
Apr 1 14:15:06.787: INFO: Waiting for pod pod-875ed92e-6334-4899-9194-2f06de6a702e to disappear
Apr 1 14:15:06.790: INFO: Pod pod-875ed92e-6334-4899-9194-2f06de6a702e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 14:15:06.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7607" for this suite.
Apr 1 14:15:12.805: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 14:15:12.884: INFO: namespace emptydir-7607 deletion completed in 6.090728564s
• [SLOW TEST:10.208 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 14:15:12.884: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 1 14:15:12.924: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0dce2aae-a5ae-4d54-ab4f-705df9ec30de" in namespace "downward-api-402" to be "success or failure"
Apr 1 14:15:12.940: INFO: Pod "downwardapi-volume-0dce2aae-a5ae-4d54-ab4f-705df9ec30de": Phase="Pending", Reason="", readiness=false. Elapsed: 16.342252ms
Apr 1 14:15:14.952: INFO: Pod "downwardapi-volume-0dce2aae-a5ae-4d54-ab4f-705df9ec30de": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028064972s
Apr 1 14:15:16.956: INFO: Pod "downwardapi-volume-0dce2aae-a5ae-4d54-ab4f-705df9ec30de": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032522986s
STEP: Saw pod success
Apr 1 14:15:16.956: INFO: Pod "downwardapi-volume-0dce2aae-a5ae-4d54-ab4f-705df9ec30de" satisfied condition "success or failure"
Apr 1 14:15:16.960: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-0dce2aae-a5ae-4d54-ab4f-705df9ec30de container client-container:
STEP: delete the pod
Apr 1 14:15:16.990: INFO: Waiting for pod downwardapi-volume-0dce2aae-a5ae-4d54-ab4f-705df9ec30de to disappear
Apr 1 14:15:16.995: INFO: Pod downwardapi-volume-0dce2aae-a5ae-4d54-ab4f-705df9ec30de no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 14:15:16.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-402" for this suite.
Apr 1 14:15:23.028: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 14:15:23.106: INFO: namespace downward-api-402 deletion completed in 6.107264364s
• [SLOW TEST:10.222 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 14:15:23.107: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Apr 1 14:15:23.170: INFO: Waiting up to 5m0s for pod "pod-afcbf513-1025-4f28-9192-a6cc65308998" in namespace "emptydir-8830" to be "success or failure"
Apr 1 14:15:23.216: INFO: Pod "pod-afcbf513-1025-4f28-9192-a6cc65308998": Phase="Pending", Reason="", readiness=false. Elapsed: 45.319563ms
Apr 1 14:15:25.220: INFO: Pod "pod-afcbf513-1025-4f28-9192-a6cc65308998": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04971536s
Apr 1 14:15:27.251: INFO: Pod "pod-afcbf513-1025-4f28-9192-a6cc65308998": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.080890237s
STEP: Saw pod success
Apr 1 14:15:27.251: INFO: Pod "pod-afcbf513-1025-4f28-9192-a6cc65308998" satisfied condition "success or failure"
Apr 1 14:15:27.254: INFO: Trying to get logs from node iruya-worker2 pod pod-afcbf513-1025-4f28-9192-a6cc65308998 container test-container:
STEP: delete the pod
Apr 1 14:15:27.305: INFO: Waiting for pod pod-afcbf513-1025-4f28-9192-a6cc65308998 to disappear
Apr 1 14:15:27.324: INFO: Pod pod-afcbf513-1025-4f28-9192-a6cc65308998 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 14:15:27.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8830" for this suite.
Apr 1 14:15:33.340: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 14:15:33.416: INFO: namespace emptydir-8830 deletion completed in 6.088837576s
• [SLOW TEST:10.309 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 14:15:33.416: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Apr 1 14:15:37.499: INFO: Expected: &{} to match Container's Termination Message: --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 14:15:37.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4698" for this suite.
Apr 1 14:15:43.542: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 14:15:43.623: INFO: namespace container-runtime-4698 deletion completed in 6.092788341s
• [SLOW TEST:10.206 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
blackbox test
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
on terminated container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 14:15:43.623: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 1 14:15:43.704: INFO: (0) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/ pods/ (200; 4.082405ms)
Apr 1 14:15:43.708: INFO: (1) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.183617ms)
Apr 1 14:15:43.710: INFO: (2) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.93749ms)
Apr 1 14:15:43.713: INFO: (3) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.726871ms)
Apr 1 14:15:43.716: INFO: (4) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.634904ms)
Apr 1 14:15:43.719: INFO: (5) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.952433ms)
Apr 1 14:15:43.722: INFO: (6) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.219237ms)
Apr 1 14:15:43.725: INFO: (7) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.859754ms)
Apr 1 14:15:43.728: INFO: (8) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.569082ms)
Apr 1 14:15:43.730: INFO: (9) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.867027ms)
Apr 1 14:15:43.733: INFO: (10) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.418961ms)
Apr 1 14:15:43.736: INFO: (11) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.794924ms)
Apr 1 14:15:43.739: INFO: (12) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.277372ms)
Apr 1 14:15:43.743: INFO: (13) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.476502ms)
Apr 1 14:15:43.746: INFO: (14) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.535709ms)
Apr 1 14:15:43.749: INFO: (15) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.107719ms)
Apr 1 14:15:43.752: INFO: (16) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.073676ms)
Apr 1 14:15:43.756: INFO: (17) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.197368ms)
Apr 1 14:15:43.759: INFO: (18) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.134099ms)
Apr 1 14:15:43.762: INFO: (19) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/
(200; 3.082219ms)
[AfterEach] version v1
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 14:15:43.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-6963" for this suite.
Apr 1 14:15:49.776: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 14:15:49.855: INFO: namespace proxy-6963 deletion completed in 6.0900424s
• [SLOW TEST:6.232 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
version v1
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 14:15:49.856: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Apr 1 14:15:56.779: INFO: 0 pods remaining
Apr 1 14:15:56.779: INFO: 0 pods has nil DeletionTimestamp
Apr 1 14:15:56.779: INFO:
Apr 1 14:15:57.402: INFO: 0 pods remaining
Apr 1 14:15:57.402: INFO: 0 pods has nil DeletionTimestamp
Apr 1 14:15:57.402: INFO:
STEP: Gathering metrics
W0401 14:15:58.281290 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 1 14:15:58.281: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 14:15:58.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4544" for this suite.
Apr 1 14:16:04.345: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 14:16:04.419: INFO: namespace gc-4544 deletion completed in 6.136108012s
• [SLOW TEST:14.563 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 14:16:04.419: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 14:16:30.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-9469" for this suite.
Apr 1 14:16:36.670: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 14:16:36.750: INFO: namespace namespaces-9469 deletion completed in 6.133067317s
STEP: Destroying namespace "nsdeletetest-5796" for this suite.
Apr 1 14:16:36.752: INFO: Namespace nsdeletetest-5796 was already deleted
STEP: Destroying namespace "nsdeletetest-395" for this suite.
Apr 1 14:16:42.767: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 14:16:42.841: INFO: namespace nsdeletetest-395 deletion completed in 6.088464348s
• [SLOW TEST:38.421 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 14:16:42.841: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Apr 1 14:16:42.939: INFO: Waiting up to 5m0s for pod "pod-90157a0f-32be-4af2-bb16-c3bd4b9641ff" in namespace "emptydir-6147" to be "success or failure"
Apr 1 14:16:42.943: INFO: Pod "pod-90157a0f-32be-4af2-bb16-c3bd4b9641ff": Phase="Pending", Reason="", readiness=false. Elapsed: 3.843097ms
Apr 1 14:16:44.965: INFO: Pod "pod-90157a0f-32be-4af2-bb16-c3bd4b9641ff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026441274s
Apr 1 14:16:46.969: INFO: Pod "pod-90157a0f-32be-4af2-bb16-c3bd4b9641ff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030466322s
STEP: Saw pod success
Apr 1 14:16:46.970: INFO: Pod "pod-90157a0f-32be-4af2-bb16-c3bd4b9641ff" satisfied condition "success or failure"
Apr 1 14:16:46.972: INFO: Trying to get logs from node iruya-worker pod pod-90157a0f-32be-4af2-bb16-c3bd4b9641ff container test-container:
STEP: delete the pod
Apr 1 14:16:46.990: INFO: Waiting for pod pod-90157a0f-32be-4af2-bb16-c3bd4b9641ff to disappear
Apr 1 14:16:47.019: INFO: Pod pod-90157a0f-32be-4af2-bb16-c3bd4b9641ff no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 14:16:47.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6147" for this suite.
Apr 1 14:16:53.039: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 14:16:53.167: INFO: namespace emptydir-6147 deletion completed in 6.14402023s • [SLOW TEST:10.326 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 14:16:53.167: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating api versions Apr 1 14:16:53.230: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Apr 1 14:16:53.425: INFO: stderr: "" Apr 1 14:16:53.425: INFO: stdout: 
"admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 14:16:53.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5648" for this suite. 
Apr 1 14:16:59.442: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 14:16:59.564: INFO: namespace kubectl-5648 deletion completed in 6.133300149s • [SLOW TEST:6.397 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl api-versions /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 14:16:59.564: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5150.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5150.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5150.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5150.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5150.svc.cluster.local SRV)" && test -n "$$check" && echo OK > 
/results/wheezy_udp@_http._tcp.dns-test-service.dns-5150.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5150.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-5150.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5150.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-5150.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5150.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-5150.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5150.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 106.1.98.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.98.1.106_udp@PTR;check="$$(dig +tcp +noall +answer +search 106.1.98.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.98.1.106_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5150.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5150.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5150.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5150.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5150.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-5150.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5150.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-5150.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5150.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-5150.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5150.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-5150.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5150.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 106.1.98.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.98.1.106_udp@PTR;check="$$(dig +tcp +noall +answer +search 106.1.98.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.98.1.106_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 1 14:17:05.823: INFO: Unable to read wheezy_udp@dns-test-service.dns-5150.svc.cluster.local from pod dns-5150/dns-test-54c80d98-10ce-4e9c-b15f-102aee1c9864: the server could not find the requested resource (get pods dns-test-54c80d98-10ce-4e9c-b15f-102aee1c9864) Apr 1 14:17:05.830: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5150.svc.cluster.local from pod dns-5150/dns-test-54c80d98-10ce-4e9c-b15f-102aee1c9864: the server could not find the requested resource (get pods dns-test-54c80d98-10ce-4e9c-b15f-102aee1c9864) Apr 1 14:17:05.833: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5150.svc.cluster.local from pod dns-5150/dns-test-54c80d98-10ce-4e9c-b15f-102aee1c9864: the server could not find the requested resource (get pods dns-test-54c80d98-10ce-4e9c-b15f-102aee1c9864) Apr 1 14:17:05.857: INFO: Unable to read jessie_tcp@dns-test-service.dns-5150.svc.cluster.local from pod dns-5150/dns-test-54c80d98-10ce-4e9c-b15f-102aee1c9864: the server could not find the requested resource (get pods dns-test-54c80d98-10ce-4e9c-b15f-102aee1c9864) Apr 1 14:17:05.860: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5150.svc.cluster.local from pod dns-5150/dns-test-54c80d98-10ce-4e9c-b15f-102aee1c9864: the server could not find the requested resource (get pods dns-test-54c80d98-10ce-4e9c-b15f-102aee1c9864) Apr 1 14:17:05.863: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5150.svc.cluster.local from pod dns-5150/dns-test-54c80d98-10ce-4e9c-b15f-102aee1c9864: the server could not find the requested resource (get pods dns-test-54c80d98-10ce-4e9c-b15f-102aee1c9864) Apr 1 14:17:05.882: INFO: Lookups using dns-5150/dns-test-54c80d98-10ce-4e9c-b15f-102aee1c9864 failed for: 
[wheezy_udp@dns-test-service.dns-5150.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5150.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5150.svc.cluster.local jessie_tcp@dns-test-service.dns-5150.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5150.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5150.svc.cluster.local] Apr 1 14:17:10.894: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5150.svc.cluster.local from pod dns-5150/dns-test-54c80d98-10ce-4e9c-b15f-102aee1c9864: the server could not find the requested resource (get pods dns-test-54c80d98-10ce-4e9c-b15f-102aee1c9864) Apr 1 14:17:10.897: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5150.svc.cluster.local from pod dns-5150/dns-test-54c80d98-10ce-4e9c-b15f-102aee1c9864: the server could not find the requested resource (get pods dns-test-54c80d98-10ce-4e9c-b15f-102aee1c9864) Apr 1 14:17:10.923: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5150.svc.cluster.local from pod dns-5150/dns-test-54c80d98-10ce-4e9c-b15f-102aee1c9864: the server could not find the requested resource (get pods dns-test-54c80d98-10ce-4e9c-b15f-102aee1c9864) Apr 1 14:17:10.926: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5150.svc.cluster.local from pod dns-5150/dns-test-54c80d98-10ce-4e9c-b15f-102aee1c9864: the server could not find the requested resource (get pods dns-test-54c80d98-10ce-4e9c-b15f-102aee1c9864) Apr 1 14:17:10.941: INFO: Lookups using dns-5150/dns-test-54c80d98-10ce-4e9c-b15f-102aee1c9864 failed for: [wheezy_udp@_http._tcp.dns-test-service.dns-5150.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5150.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5150.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5150.svc.cluster.local] Apr 1 14:17:15.895: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5150.svc.cluster.local from pod 
dns-5150/dns-test-54c80d98-10ce-4e9c-b15f-102aee1c9864: the server could not find the requested resource (get pods dns-test-54c80d98-10ce-4e9c-b15f-102aee1c9864) Apr 1 14:17:15.899: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5150.svc.cluster.local from pod dns-5150/dns-test-54c80d98-10ce-4e9c-b15f-102aee1c9864: the server could not find the requested resource (get pods dns-test-54c80d98-10ce-4e9c-b15f-102aee1c9864) Apr 1 14:17:15.929: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5150.svc.cluster.local from pod dns-5150/dns-test-54c80d98-10ce-4e9c-b15f-102aee1c9864: the server could not find the requested resource (get pods dns-test-54c80d98-10ce-4e9c-b15f-102aee1c9864) Apr 1 14:17:15.932: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5150.svc.cluster.local from pod dns-5150/dns-test-54c80d98-10ce-4e9c-b15f-102aee1c9864: the server could not find the requested resource (get pods dns-test-54c80d98-10ce-4e9c-b15f-102aee1c9864) Apr 1 14:17:15.951: INFO: Lookups using dns-5150/dns-test-54c80d98-10ce-4e9c-b15f-102aee1c9864 failed for: [wheezy_udp@_http._tcp.dns-test-service.dns-5150.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5150.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5150.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5150.svc.cluster.local] Apr 1 14:17:20.894: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5150.svc.cluster.local from pod dns-5150/dns-test-54c80d98-10ce-4e9c-b15f-102aee1c9864: the server could not find the requested resource (get pods dns-test-54c80d98-10ce-4e9c-b15f-102aee1c9864) Apr 1 14:17:20.897: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5150.svc.cluster.local from pod dns-5150/dns-test-54c80d98-10ce-4e9c-b15f-102aee1c9864: the server could not find the requested resource (get pods dns-test-54c80d98-10ce-4e9c-b15f-102aee1c9864) Apr 1 14:17:20.916: INFO: Unable to read 
jessie_udp@_http._tcp.dns-test-service.dns-5150.svc.cluster.local from pod dns-5150/dns-test-54c80d98-10ce-4e9c-b15f-102aee1c9864: the server could not find the requested resource (get pods dns-test-54c80d98-10ce-4e9c-b15f-102aee1c9864) Apr 1 14:17:20.918: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5150.svc.cluster.local from pod dns-5150/dns-test-54c80d98-10ce-4e9c-b15f-102aee1c9864: the server could not find the requested resource (get pods dns-test-54c80d98-10ce-4e9c-b15f-102aee1c9864) Apr 1 14:17:20.933: INFO: Lookups using dns-5150/dns-test-54c80d98-10ce-4e9c-b15f-102aee1c9864 failed for: [wheezy_udp@_http._tcp.dns-test-service.dns-5150.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5150.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5150.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5150.svc.cluster.local] Apr 1 14:17:25.895: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5150.svc.cluster.local from pod dns-5150/dns-test-54c80d98-10ce-4e9c-b15f-102aee1c9864: the server could not find the requested resource (get pods dns-test-54c80d98-10ce-4e9c-b15f-102aee1c9864) Apr 1 14:17:25.898: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5150.svc.cluster.local from pod dns-5150/dns-test-54c80d98-10ce-4e9c-b15f-102aee1c9864: the server could not find the requested resource (get pods dns-test-54c80d98-10ce-4e9c-b15f-102aee1c9864) Apr 1 14:17:25.926: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5150.svc.cluster.local from pod dns-5150/dns-test-54c80d98-10ce-4e9c-b15f-102aee1c9864: the server could not find the requested resource (get pods dns-test-54c80d98-10ce-4e9c-b15f-102aee1c9864) Apr 1 14:17:25.929: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5150.svc.cluster.local from pod dns-5150/dns-test-54c80d98-10ce-4e9c-b15f-102aee1c9864: the server could not find the requested resource (get pods dns-test-54c80d98-10ce-4e9c-b15f-102aee1c9864) 
Apr 1 14:17:25.949: INFO: Lookups using dns-5150/dns-test-54c80d98-10ce-4e9c-b15f-102aee1c9864 failed for: [wheezy_udp@_http._tcp.dns-test-service.dns-5150.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5150.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5150.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5150.svc.cluster.local] Apr 1 14:17:30.895: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5150.svc.cluster.local from pod dns-5150/dns-test-54c80d98-10ce-4e9c-b15f-102aee1c9864: the server could not find the requested resource (get pods dns-test-54c80d98-10ce-4e9c-b15f-102aee1c9864) Apr 1 14:17:30.899: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5150.svc.cluster.local from pod dns-5150/dns-test-54c80d98-10ce-4e9c-b15f-102aee1c9864: the server could not find the requested resource (get pods dns-test-54c80d98-10ce-4e9c-b15f-102aee1c9864) Apr 1 14:17:30.928: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5150.svc.cluster.local from pod dns-5150/dns-test-54c80d98-10ce-4e9c-b15f-102aee1c9864: the server could not find the requested resource (get pods dns-test-54c80d98-10ce-4e9c-b15f-102aee1c9864) Apr 1 14:17:30.931: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5150.svc.cluster.local from pod dns-5150/dns-test-54c80d98-10ce-4e9c-b15f-102aee1c9864: the server could not find the requested resource (get pods dns-test-54c80d98-10ce-4e9c-b15f-102aee1c9864) Apr 1 14:17:30.943: INFO: Lookups using dns-5150/dns-test-54c80d98-10ce-4e9c-b15f-102aee1c9864 failed for: [wheezy_udp@_http._tcp.dns-test-service.dns-5150.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5150.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5150.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5150.svc.cluster.local] Apr 1 14:17:35.950: INFO: DNS probes using dns-5150/dns-test-54c80d98-10ce-4e9c-b15f-102aee1c9864 succeeded STEP: deleting the pod STEP: 
deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 14:17:36.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5150" for this suite. Apr 1 14:17:42.466: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 14:17:42.557: INFO: namespace dns-5150 deletion completed in 6.105192707s • [SLOW TEST:42.993 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 14:17:42.557: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Apr 1 14:17:50.696: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 1 14:17:50.705: INFO: Pod pod-with-prestop-http-hook still exists Apr 1 14:17:52.706: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 1 14:17:52.709: INFO: Pod pod-with-prestop-http-hook still exists Apr 1 14:17:54.706: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 1 14:17:54.710: INFO: Pod pod-with-prestop-http-hook still exists Apr 1 14:17:56.706: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 1 14:17:56.711: INFO: Pod pod-with-prestop-http-hook still exists Apr 1 14:17:58.706: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 1 14:17:58.710: INFO: Pod pod-with-prestop-http-hook still exists Apr 1 14:18:00.706: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 1 14:18:00.710: INFO: Pod pod-with-prestop-http-hook still exists Apr 1 14:18:02.706: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 1 14:18:02.709: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 14:18:02.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-1605" for this suite. 
Apr 1 14:18:24.733: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 14:18:24.814: INFO: namespace container-lifecycle-hook-1605 deletion completed in 22.094913637s • [SLOW TEST:42.257 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 14:18:24.815: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Apr 1 14:18:29.981: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 
1 14:18:31.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-4179" for this suite. Apr 1 14:18:53.049: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 14:18:53.187: INFO: namespace replicaset-4179 deletion completed in 22.181040922s • [SLOW TEST:28.372 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 14:18:53.187: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 1 14:18:53.254: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8ac80826-b9d2-44ab-b65b-9ad6a974bc38" in namespace "projected-1771" to be "success or failure" Apr 1 14:18:53.258: INFO: Pod "downwardapi-volume-8ac80826-b9d2-44ab-b65b-9ad6a974bc38": Phase="Pending", Reason="", 
readiness=false. Elapsed: 4.013028ms Apr 1 14:18:55.285: INFO: Pod "downwardapi-volume-8ac80826-b9d2-44ab-b65b-9ad6a974bc38": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030809086s Apr 1 14:18:57.289: INFO: Pod "downwardapi-volume-8ac80826-b9d2-44ab-b65b-9ad6a974bc38": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035118261s STEP: Saw pod success Apr 1 14:18:57.289: INFO: Pod "downwardapi-volume-8ac80826-b9d2-44ab-b65b-9ad6a974bc38" satisfied condition "success or failure" Apr 1 14:18:57.292: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-8ac80826-b9d2-44ab-b65b-9ad6a974bc38 container client-container: STEP: delete the pod Apr 1 14:18:57.329: INFO: Waiting for pod downwardapi-volume-8ac80826-b9d2-44ab-b65b-9ad6a974bc38 to disappear Apr 1 14:18:57.335: INFO: Pod downwardapi-volume-8ac80826-b9d2-44ab-b65b-9ad6a974bc38 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 14:18:57.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1771" for this suite. 
Apr 1 14:19:03.351: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 14:19:03.437: INFO: namespace projected-1771 deletion completed in 6.099920425s • [SLOW TEST:10.250 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 14:19:03.438: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 1 14:19:03.491: INFO: Waiting up to 5m0s for pod "downwardapi-volume-af7fafc5-8037-4875-9a76-60ece63d1b57" in namespace "downward-api-4768" to be "success or failure" Apr 1 14:19:03.494: INFO: Pod "downwardapi-volume-af7fafc5-8037-4875-9a76-60ece63d1b57": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.410808ms Apr 1 14:19:05.498: INFO: Pod "downwardapi-volume-af7fafc5-8037-4875-9a76-60ece63d1b57": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00782125s Apr 1 14:19:07.502: INFO: Pod "downwardapi-volume-af7fafc5-8037-4875-9a76-60ece63d1b57": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011486255s STEP: Saw pod success Apr 1 14:19:07.502: INFO: Pod "downwardapi-volume-af7fafc5-8037-4875-9a76-60ece63d1b57" satisfied condition "success or failure" Apr 1 14:19:07.504: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-af7fafc5-8037-4875-9a76-60ece63d1b57 container client-container: STEP: delete the pod Apr 1 14:19:07.519: INFO: Waiting for pod downwardapi-volume-af7fafc5-8037-4875-9a76-60ece63d1b57 to disappear Apr 1 14:19:07.548: INFO: Pod downwardapi-volume-af7fafc5-8037-4875-9a76-60ece63d1b57 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 14:19:07.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4768" for this suite. 
Apr 1 14:19:13.582: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 14:19:13.652: INFO: namespace downward-api-4768 deletion completed in 6.099590284s
• [SLOW TEST:10.214 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 14:19:13.652: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-782f4bd7-e845-46ae-913f-a54c55594c1b
STEP: Creating a pod to test consume secrets
Apr 1 14:19:13.712: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-93d9eec6-5304-4788-bc1f-784830edf0dd" in namespace "projected-4936" to be "success or failure"
Apr 1 14:19:13.716: INFO: Pod "pod-projected-secrets-93d9eec6-5304-4788-bc1f-784830edf0dd": Phase="Pending", Reason="", readiness=false. Elapsed: 3.466424ms
Apr 1 14:19:15.719: INFO: Pod "pod-projected-secrets-93d9eec6-5304-4788-bc1f-784830edf0dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006737323s
Apr 1 14:19:17.723: INFO: Pod "pod-projected-secrets-93d9eec6-5304-4788-bc1f-784830edf0dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010684922s
STEP: Saw pod success
Apr 1 14:19:17.723: INFO: Pod "pod-projected-secrets-93d9eec6-5304-4788-bc1f-784830edf0dd" satisfied condition "success or failure"
Apr 1 14:19:17.726: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-93d9eec6-5304-4788-bc1f-784830edf0dd container projected-secret-volume-test:
STEP: delete the pod
Apr 1 14:19:17.747: INFO: Waiting for pod pod-projected-secrets-93d9eec6-5304-4788-bc1f-784830edf0dd to disappear
Apr 1 14:19:17.779: INFO: Pod pod-projected-secrets-93d9eec6-5304-4788-bc1f-784830edf0dd no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 14:19:17.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4936" for this suite.
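This test mounts a secret through a projected volume with `defaultMode` set, then verifies the file permissions inside the container. A hedged sketch of the shape of that pod (names and the mode value are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example   # illustrative name
spec:
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/projected-secret-volume"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      defaultMode: 0400        # applies to every projected file unless an item overrides it
      sources:
      - secret:
          name: projected-secret-test   # illustrative secret name
  restartPolicy: Never
```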
Apr 1 14:19:23.798: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 14:19:23.871: INFO: namespace projected-4936 deletion completed in 6.088146665s
• [SLOW TEST:10.219 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 14:19:23.871: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-2d2aec0b-5fab-4065-b81d-33eac2d04b75
STEP: Creating a pod to test consume configMaps
Apr 1 14:19:23.967: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-52c10cba-3727-426f-8ebf-b2c65666e57b" in namespace "projected-3840" to be "success or failure"
Apr 1 14:19:23.998: INFO: Pod "pod-projected-configmaps-52c10cba-3727-426f-8ebf-b2c65666e57b": Phase="Pending", Reason="", readiness=false. Elapsed: 31.929305ms
Apr 1 14:19:26.002: INFO: Pod "pod-projected-configmaps-52c10cba-3727-426f-8ebf-b2c65666e57b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035640883s
Apr 1 14:19:28.007: INFO: Pod "pod-projected-configmaps-52c10cba-3727-426f-8ebf-b2c65666e57b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.04003276s
STEP: Saw pod success
Apr 1 14:19:28.007: INFO: Pod "pod-projected-configmaps-52c10cba-3727-426f-8ebf-b2c65666e57b" satisfied condition "success or failure"
Apr 1 14:19:28.010: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-52c10cba-3727-426f-8ebf-b2c65666e57b container projected-configmap-volume-test:
STEP: delete the pod
Apr 1 14:19:28.047: INFO: Waiting for pod pod-projected-configmaps-52c10cba-3727-426f-8ebf-b2c65666e57b to disappear
Apr 1 14:19:28.055: INFO: Pod pod-projected-configmaps-52c10cba-3727-426f-8ebf-b2c65666e57b no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 14:19:28.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3840" for this suite.
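"Mappings" here means the configMap keys are remapped to different file paths via `items`. A minimal sketch of that volume shape (configMap name, key, and path are illustrative); the per-item `mode` field, commented out below, is the knob exercised by the "Item mode set" variant of this test:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example   # illustrative name
spec:
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/projected-configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-map   # illustrative configMap name
          items:
          - key: data-1
            path: path/to/data-2   # key remapped to a different relative path
            # mode: 0400           # per-item file mode, overrides defaultMode
  restartPolicy: Never
```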
Apr 1 14:19:34.070: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 14:19:34.157: INFO: namespace projected-3840 deletion completed in 6.097541136s
• [SLOW TEST:10.285 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 14:19:34.157: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-e1f0ab1f-ad21-40e6-84e7-8583489d5b2b
STEP: Creating a pod to test consume configMaps
Apr 1 14:19:34.258: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-96985940-d7b5-4ea8-9e35-e09cbed68afe" in namespace "projected-9614" to be "success or failure"
Apr 1 14:19:34.271: INFO: Pod "pod-projected-configmaps-96985940-d7b5-4ea8-9e35-e09cbed68afe": Phase="Pending", Reason="", readiness=false. Elapsed: 13.012417ms
Apr 1 14:19:36.275: INFO: Pod "pod-projected-configmaps-96985940-d7b5-4ea8-9e35-e09cbed68afe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017446694s
Apr 1 14:19:38.279: INFO: Pod "pod-projected-configmaps-96985940-d7b5-4ea8-9e35-e09cbed68afe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021592829s
STEP: Saw pod success
Apr 1 14:19:38.279: INFO: Pod "pod-projected-configmaps-96985940-d7b5-4ea8-9e35-e09cbed68afe" satisfied condition "success or failure"
Apr 1 14:19:38.282: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-96985940-d7b5-4ea8-9e35-e09cbed68afe container projected-configmap-volume-test:
STEP: delete the pod
Apr 1 14:19:38.323: INFO: Waiting for pod pod-projected-configmaps-96985940-d7b5-4ea8-9e35-e09cbed68afe to disappear
Apr 1 14:19:38.327: INFO: Pod pod-projected-configmaps-96985940-d7b5-4ea8-9e35-e09cbed68afe no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 14:19:38.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9614" for this suite.
Apr 1 14:19:44.342: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 14:19:44.420: INFO: namespace projected-9614 deletion completed in 6.090507231s
• [SLOW TEST:10.263 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 14:19:44.420: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Apr 1 14:19:44.966: INFO: Pod name wrapped-volume-race-a98f8375-2887-4b62-966b-ab477d1a3ae6: Found 0 pods out of 5
Apr 1 14:19:49.974: INFO: Pod name wrapped-volume-race-a98f8375-2887-4b62-966b-ab477d1a3ae6: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-a98f8375-2887-4b62-966b-ab477d1a3ae6 in namespace emptydir-wrapper-642, will wait for the garbage collector to delete the pods
Apr 1 14:20:04.061: INFO: Deleting ReplicationController wrapped-volume-race-a98f8375-2887-4b62-966b-ab477d1a3ae6 took: 8.93661ms
Apr 1 14:20:04.361: INFO: Terminating ReplicationController wrapped-volume-race-a98f8375-2887-4b62-966b-ab477d1a3ae6 pods took: 300.292324ms
STEP: Creating RC which spawns configmap-volume pods
Apr 1 14:20:42.695: INFO: Pod name wrapped-volume-race-3abd940e-741d-4858-99ee-c66d0297f38f: Found 0 pods out of 5
Apr 1 14:20:47.710: INFO: Pod name wrapped-volume-race-3abd940e-741d-4858-99ee-c66d0297f38f: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-3abd940e-741d-4858-99ee-c66d0297f38f in namespace emptydir-wrapper-642, will wait for the garbage collector to delete the pods
Apr 1 14:21:01.791: INFO: Deleting ReplicationController wrapped-volume-race-3abd940e-741d-4858-99ee-c66d0297f38f took: 7.539168ms
Apr 1 14:21:02.091: INFO: Terminating ReplicationController wrapped-volume-race-3abd940e-741d-4858-99ee-c66d0297f38f pods took: 300.244719ms
STEP: Creating RC which spawns configmap-volume pods
Apr 1 14:21:43.242: INFO: Pod name wrapped-volume-race-4536d4be-556e-4c87-a149-84b0bf923b94: Found 0 pods out of 5
Apr 1 14:21:48.249: INFO: Pod name wrapped-volume-race-4536d4be-556e-4c87-a149-84b0bf923b94: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-4536d4be-556e-4c87-a149-84b0bf923b94 in namespace emptydir-wrapper-642, will wait for the garbage collector to delete the pods
Apr 1 14:22:02.363: INFO: Deleting ReplicationController wrapped-volume-race-4536d4be-556e-4c87-a149-84b0bf923b94 took: 6.406154ms
Apr 1 14:22:02.663: INFO: Terminating ReplicationController wrapped-volume-race-4536d4be-556e-4c87-a149-84b0bf923b94 pods took: 300.284923ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 14:22:43.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-642" for this suite.
Apr 1 14:22:51.838: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 14:22:51.915: INFO: namespace emptydir-wrapper-642 deletion completed in 8.098550483s
• [SLOW TEST:187.495 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 14:22:51.916: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Apr 1 14:22:56.543: INFO: Successfully updated pod "pod-update-activedeadlineseconds-bfb90024-ce4d-4186-a213-d840046d6cdc"
Apr 1 14:22:56.543: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-bfb90024-ce4d-4186-a213-d840046d6cdc" in namespace "pods-8341" to be "terminated due to deadline exceeded"
Apr 1 14:22:56.600: INFO: Pod "pod-update-activedeadlineseconds-bfb90024-ce4d-4186-a213-d840046d6cdc": Phase="Running", Reason="", readiness=true. Elapsed: 56.567781ms
Apr 1 14:22:58.604: INFO: Pod "pod-update-activedeadlineseconds-bfb90024-ce4d-4186-a213-d840046d6cdc": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.060738631s
Apr 1 14:22:58.604: INFO: Pod "pod-update-activedeadlineseconds-bfb90024-ce4d-4186-a213-d840046d6cdc" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 14:22:58.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8341" for this suite.
Apr 1 14:23:04.623: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 14:23:04.705: INFO: namespace pods-8341 deletion completed in 6.096612344s
• [SLOW TEST:12.789 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 14:23:04.705: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Apr 1 14:23:04.761: INFO: Waiting up to 5m0s for pod "pod-d1803cfb-b9b6-4289-92d7-2cbc8fe8ab06" in namespace "emptydir-5777" to be "success or failure"
Apr 1 14:23:04.765: INFO: Pod "pod-d1803cfb-b9b6-4289-92d7-2cbc8fe8ab06": Phase="Pending", Reason="", readiness=false. Elapsed: 3.555494ms
Apr 1 14:23:06.768: INFO: Pod "pod-d1803cfb-b9b6-4289-92d7-2cbc8fe8ab06": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007078678s
Apr 1 14:23:08.773: INFO: Pod "pod-d1803cfb-b9b6-4289-92d7-2cbc8fe8ab06": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011209676s
STEP: Saw pod success
Apr 1 14:23:08.773: INFO: Pod "pod-d1803cfb-b9b6-4289-92d7-2cbc8fe8ab06" satisfied condition "success or failure"
Apr 1 14:23:08.775: INFO: Trying to get logs from node iruya-worker pod pod-d1803cfb-b9b6-4289-92d7-2cbc8fe8ab06 container test-container:
STEP: delete the pod
Apr 1 14:23:08.822: INFO: Waiting for pod pod-d1803cfb-b9b6-4289-92d7-2cbc8fe8ab06 to disappear
Apr 1 14:23:08.831: INFO: Pod pod-d1803cfb-b9b6-4289-92d7-2cbc8fe8ab06 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 14:23:08.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5777" for this suite.
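The "(non-root,0666,tmpfs)" case exercises an emptyDir backed by memory (tmpfs), written as a non-root user with 0666 file mode. A hedged sketch of that kind of pod (names, UID, and command are illustrative, not the generated test spec):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-example   # illustrative name
spec:
  securityContext:
    runAsUser: 1001              # the "non-root" part of the test case
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo ok > /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory             # tmpfs-backed emptyDir; default medium uses node disk
  restartPolicy: Never
```

The sibling "(non-root,0666,default)" case later in this log is the same shape with `medium` omitted, so the volume lands on the node's default storage.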
Apr 1 14:23:14.847: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 14:23:14.924: INFO: namespace emptydir-5777 deletion completed in 6.089819449s
• [SLOW TEST:10.218 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 14:23:14.924: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating replication controller my-hostname-basic-17267e18-a9a2-40ac-9ce6-872449bd4873
Apr 1 14:23:15.040: INFO: Pod name my-hostname-basic-17267e18-a9a2-40ac-9ce6-872449bd4873: Found 0 pods out of 1
Apr 1 14:23:20.045: INFO: Pod name my-hostname-basic-17267e18-a9a2-40ac-9ce6-872449bd4873: Found 1 pods out of 1
Apr 1 14:23:20.045: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-17267e18-a9a2-40ac-9ce6-872449bd4873" are running
Apr 1 14:23:20.048: INFO: Pod "my-hostname-basic-17267e18-a9a2-40ac-9ce6-872449bd4873-tt6sf" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-01 14:23:15 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-01 14:23:17 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-01 14:23:17 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-01 14:23:15 +0000 UTC Reason: Message:}])
Apr 1 14:23:20.048: INFO: Trying to dial the pod
Apr 1 14:23:25.058: INFO: Controller my-hostname-basic-17267e18-a9a2-40ac-9ce6-872449bd4873: Got expected result from replica 1 [my-hostname-basic-17267e18-a9a2-40ac-9ce6-872449bd4873-tt6sf]: "my-hostname-basic-17267e18-a9a2-40ac-9ce6-872449bd4873-tt6sf", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 14:23:25.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-4630" for this suite.
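This test creates a ReplicationController whose single replica serves its own pod name over HTTP, then dials the pod and checks the response. A sketch of the shape of that RC (the name, image, and port below are assumptions for illustration, not taken from this log):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-hostname-basic-example        # illustrative name
spec:
  replicas: 1
  selector:
    name: my-hostname-basic-example
  template:
    metadata:
      labels:
        name: my-hostname-basic-example  # must match spec.selector
    spec:
      containers:
      - name: my-hostname-basic-example
        image: k8s.gcr.io/serve_hostname:v1.4   # assumed hostname-serving image
        ports:
        - containerPort: 9376                   # assumed port
```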
Apr 1 14:23:31.087: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 14:23:31.162: INFO: namespace replication-controller-4630 deletion completed in 6.101626204s
• [SLOW TEST:16.239 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 14:23:31.163: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 1 14:23:31.252: INFO: Waiting up to 5m0s for pod "downwardapi-volume-919f8dfd-c115-4483-82f2-5c3a62d5cf65" in namespace "projected-6988" to be "success or failure"
Apr 1 14:23:31.261: INFO: Pod "downwardapi-volume-919f8dfd-c115-4483-82f2-5c3a62d5cf65": Phase="Pending", Reason="", readiness=false. Elapsed: 9.696743ms
Apr 1 14:23:33.265: INFO: Pod "downwardapi-volume-919f8dfd-c115-4483-82f2-5c3a62d5cf65": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013761439s
Apr 1 14:23:35.270: INFO: Pod "downwardapi-volume-919f8dfd-c115-4483-82f2-5c3a62d5cf65": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017875798s
STEP: Saw pod success
Apr 1 14:23:35.270: INFO: Pod "downwardapi-volume-919f8dfd-c115-4483-82f2-5c3a62d5cf65" satisfied condition "success or failure"
Apr 1 14:23:35.273: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-919f8dfd-c115-4483-82f2-5c3a62d5cf65 container client-container:
STEP: delete the pod
Apr 1 14:23:35.292: INFO: Waiting for pod downwardapi-volume-919f8dfd-c115-4483-82f2-5c3a62d5cf65 to disappear
Apr 1 14:23:35.297: INFO: Pod downwardapi-volume-919f8dfd-c115-4483-82f2-5c3a62d5cf65 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 14:23:35.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6988" for this suite.
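The "podname only" variant projects a single downwardAPI item, the pod's own name, through a projected volume. A minimal sketch (names and paths are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-podname-example   # illustrative name
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: "podname"
            fieldRef:
              fieldPath: metadata.name   # resolved to the pod's own name
  restartPolicy: Never
```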
Apr 1 14:23:41.324: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 14:23:41.403: INFO: namespace projected-6988 deletion completed in 6.103031326s
• [SLOW TEST:10.239 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 14:23:41.403: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Apr 1 14:23:41.516: INFO: Waiting up to 5m0s for pod "pod-a0bf0c65-21b2-4789-89d0-056acbf3da9d" in namespace "emptydir-6157" to be "success or failure"
Apr 1 14:23:41.519: INFO: Pod "pod-a0bf0c65-21b2-4789-89d0-056acbf3da9d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.047227ms
Apr 1 14:23:43.523: INFO: Pod "pod-a0bf0c65-21b2-4789-89d0-056acbf3da9d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007004881s
Apr 1 14:23:45.527: INFO: Pod "pod-a0bf0c65-21b2-4789-89d0-056acbf3da9d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011751257s
STEP: Saw pod success
Apr 1 14:23:45.527: INFO: Pod "pod-a0bf0c65-21b2-4789-89d0-056acbf3da9d" satisfied condition "success or failure"
Apr 1 14:23:45.530: INFO: Trying to get logs from node iruya-worker2 pod pod-a0bf0c65-21b2-4789-89d0-056acbf3da9d container test-container:
STEP: delete the pod
Apr 1 14:23:45.551: INFO: Waiting for pod pod-a0bf0c65-21b2-4789-89d0-056acbf3da9d to disappear
Apr 1 14:23:45.556: INFO: Pod pod-a0bf0c65-21b2-4789-89d0-056acbf3da9d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 14:23:45.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6157" for this suite.
Apr 1 14:23:51.571: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 14:23:51.651: INFO: namespace emptydir-6157 deletion completed in 6.092804687s
• [SLOW TEST:10.249 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-network] Service endpoints latency should not be very high [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 14:23:51.652: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating replication controller svc-latency-rc in namespace svc-latency-3503
I0401 14:23:51.731624       6 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-3503, replica count: 1
I0401 14:23:52.782014       6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0401 14:23:53.782234       6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0401 14:23:54.782451       6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0401 14:23:55.782745       6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Apr 1 14:23:55.930: INFO: Created: latency-svc-rpqgf
Apr 1 14:23:55.934: INFO: Got endpoints: latency-svc-rpqgf [50.943101ms]
Apr 1 14:23:55.967: INFO: Created: latency-svc-b8jsg
Apr 1 14:23:55.981: INFO: Got endpoints: latency-svc-b8jsg [47.174714ms]
Apr 1 14:23:56.000: INFO: Created: latency-svc-hbgqd
Apr 1 14:23:56.012: INFO: Got endpoints: latency-svc-hbgqd [77.778024ms]
Apr 1 14:23:56.079: INFO: Created: latency-svc-nttcc
Apr 1 14:23:56.102: INFO: Got endpoints: latency-svc-nttcc [168.610783ms]
Apr 1 14:23:56.103: INFO: Created: latency-svc-v5c6p
Apr 1 14:23:56.126: INFO: Got endpoints: latency-svc-v5c6p [192.549828ms]
Apr 1 14:23:56.157: INFO: Created: latency-svc-sppc9
Apr 1 14:23:56.167: INFO: Got endpoints: latency-svc-sppc9 [233.316719ms]
Apr 1 14:23:56.210: INFO: Created: latency-svc-5znrq
Apr 1 14:23:56.215: INFO: Got endpoints: latency-svc-5znrq [281.119512ms]
Apr 1 14:23:56.234: INFO: Created: latency-svc-hc9k8
Apr 1 14:23:56.246: INFO: Got endpoints: latency-svc-hc9k8 [312.415487ms]
Apr 1 14:23:56.271: INFO: Created: latency-svc-stjct
Apr 1 14:23:56.283: INFO: Got endpoints: latency-svc-stjct [349.383131ms]
Apr 1 14:23:56.355: INFO: Created: latency-svc-v2l4z
Apr 1 14:23:56.379: INFO: Got endpoints: latency-svc-v2l4z [445.078847ms]
Apr 1 14:23:56.380: INFO: Created: latency-svc-zm5wq
Apr 1 14:23:56.391: INFO: Got endpoints: latency-svc-zm5wq [456.899883ms]
Apr 1 14:23:56.414: INFO: Created: latency-svc-mjn5f
Apr 1 14:23:56.426: INFO: Got endpoints: latency-svc-mjn5f [492.354343ms]
Apr 1 14:23:56.451: INFO: Created: latency-svc-lfmxz
Apr 1 14:23:56.510: INFO: Got endpoints: latency-svc-lfmxz [576.144229ms]
Apr 1 14:23:56.512: INFO: Created: latency-svc-wmzww
Apr 1 14:23:56.517: INFO: Got endpoints: latency-svc-wmzww [583.070244ms]
Apr 1 14:23:56.535: INFO: Created: latency-svc-srghs
Apr 1 14:23:56.547: INFO: Got endpoints: latency-svc-srghs [613.669212ms]
Apr 1 14:23:56.565: INFO: Created: latency-svc-5rpnv
Apr 1 14:23:56.578: INFO: Got endpoints: latency-svc-5rpnv [643.536001ms]
Apr 1 14:23:56.595: INFO: Created: latency-svc-vglkl
Apr 1 14:23:56.648: INFO: Got endpoints: latency-svc-vglkl [666.864482ms]
Apr 1 14:23:56.703: INFO: Created: latency-svc-z6r6b
Apr 1 14:23:56.726: INFO: Got endpoints: latency-svc-z6r6b [714.571435ms]
Apr 1 14:23:56.804: INFO: Created: latency-svc-j59bz
Apr 1 14:23:56.816: INFO: Got endpoints: latency-svc-j59bz [713.104342ms]
Apr 1 14:23:56.850: INFO: Created: latency-svc-hx6mf
Apr 1 14:23:56.855: INFO: Got endpoints: latency-svc-hx6mf [728.31566ms]
Apr 1 14:23:56.876: INFO: Created: latency-svc-hhvct
Apr 1 14:23:56.891: INFO: Got endpoints: latency-svc-hhvct [723.679407ms]
Apr 1 14:23:56.947: INFO: Created: latency-svc-zfsvp
Apr 1 14:23:56.972: INFO: Got endpoints: latency-svc-zfsvp [757.10361ms]
Apr 1 14:23:56.973: INFO: Created: latency-svc-gvbzw
Apr 1 14:23:56.987: INFO: Got endpoints: latency-svc-gvbzw [740.846218ms]
Apr 1 14:23:57.008: INFO: Created: latency-svc-ptd7c
Apr 1 14:23:57.024: INFO: Got endpoints: latency-svc-ptd7c [741.088597ms]
Apr 1 14:23:57.044: INFO: Created: latency-svc-nsjqr
Apr 1 14:23:57.079: INFO: Got endpoints: latency-svc-nsjqr [700.375673ms]
Apr 1 14:23:57.098: INFO: Created: latency-svc-v55qd
Apr 1 14:23:57.116: INFO: Got endpoints: latency-svc-v55qd [724.604047ms]
Apr 1 14:23:57.134: INFO: Created: latency-svc-xrlwp
Apr 1 14:23:57.145: INFO: Got endpoints: latency-svc-xrlwp [718.79292ms]
Apr 1 14:23:57.223: INFO: Created: latency-svc-zj8wg
Apr 1 14:23:57.248: INFO: Got endpoints: latency-svc-zj8wg [737.935607ms]
Apr 1 14:23:57.249: INFO: Created: latency-svc-qjp7j
Apr 1 14:23:57.272: INFO: Got endpoints: latency-svc-qjp7j [754.501523ms]
Apr 1 14:23:57.303: INFO: Created: latency-svc-vqx8s
Apr 1 14:23:57.314: INFO: Got endpoints: latency-svc-vqx8s [766.718102ms]
Apr 1 14:23:57.356: INFO: Created: latency-svc-p82t5
Apr 1 14:23:57.368: INFO: Got endpoints: latency-svc-p82t5 [790.610026ms]
Apr 1 14:23:57.398: INFO: Created: latency-svc-pgnww
Apr 1 14:23:57.410: INFO: Got endpoints: latency-svc-pgnww [762.474628ms]
Apr 1 14:23:57.434: INFO: Created: latency-svc-mghg8
Apr 1 14:23:57.510: INFO: Got endpoints: latency-svc-mghg8 [783.46593ms]
Apr 1 14:23:57.511: INFO: Created: latency-svc-2ncp6
Apr 1 14:23:57.520: INFO: Got endpoints: latency-svc-2ncp6 [703.886107ms]
Apr 1 14:23:57.542: INFO: Created: latency-svc-twmhs
Apr 1 14:23:57.555: INFO: Got endpoints: latency-svc-twmhs [700.331179ms]
Apr 1 14:23:57.578: INFO: Created: latency-svc-c79v8
Apr 1 14:23:57.592: INFO: Got endpoints: latency-svc-c79v8 [700.605949ms]
Apr 1 14:23:57.648: INFO: Created: latency-svc-vmklv
Apr 1 14:23:57.651: INFO: Got endpoints: latency-svc-vmklv [679.070546ms]
Apr 1 14:23:57.698: INFO: Created: latency-svc-5xtqc
Apr 1 14:23:57.712: INFO: Got endpoints: latency-svc-5xtqc [724.689767ms]
Apr 1 14:23:57.728: INFO: Created: latency-svc-gmmgb
Apr 1 14:23:57.742: INFO: Got endpoints: latency-svc-gmmgb [717.871312ms]
Apr 1 14:23:57.788: INFO: Created: latency-svc-vsv9t
Apr 1 14:23:57.803: INFO: Got endpoints: latency-svc-vsv9t [723.538006ms]
Apr 1 14:23:57.843: INFO: Created: latency-svc-7fbf2
Apr 1 14:23:57.887: INFO: Got endpoints: latency-svc-7fbf2 [771.408882ms]
Apr 1 14:23:57.926: INFO: Created: latency-svc-dm9wh
Apr 1 14:23:57.941: INFO: Got endpoints: latency-svc-dm9wh [795.779039ms]
Apr 1 14:23:57.981: INFO: Created: latency-svc-bvt5n
Apr 1 14:23:58.031: INFO: Got endpoints: latency-svc-bvt5n [782.178772ms]
Apr 1 14:23:58.052: INFO: Created: latency-svc-kcvcn
Apr 1 14:23:58.082: INFO: Got endpoints: latency-svc-kcvcn [810.382068ms]
Apr 1 14:23:58.119: INFO: Created: latency-svc-t7r5d
Apr 1 14:23:58.156: INFO: Got endpoints: latency-svc-t7r5d [841.916142ms]
Apr 1 14:23:58.172: INFO: Created: latency-svc-5mzxw
Apr 1 14:23:58.188: INFO: Got endpoints: latency-svc-5mzxw [819.684825ms]
Apr 1 14:23:58.208: INFO: Created: latency-svc-wsbm7
Apr 1 14:23:58.218: INFO: Got endpoints: latency-svc-wsbm7 [807.842651ms]
Apr 1 14:23:58.239: INFO: Created: latency-svc-qjzh6
Apr 1 14:23:58.318: INFO: Got endpoints: latency-svc-qjzh6 [808.332772ms]
Apr 1 14:23:58.341: INFO: Created: latency-svc-2l2x6
Apr 1 14:23:58.357: INFO: Got endpoints: latency-svc-2l2x6 [837.165297ms]
Apr 1 14:23:58.376: INFO: Created: latency-svc-252z7
Apr 1 14:23:58.399: INFO: Got endpoints: latency-svc-252z7 [844.018893ms]
Apr 1 14:23:58.463: INFO: Created: latency-svc-wjmft
Apr 1 14:23:58.466: INFO: Got endpoints: latency-svc-wjmft [874.006527ms]
Apr 1 14:23:58.491: INFO: Created: latency-svc-v7cqb
Apr 1 14:23:58.501: INFO: Got endpoints: latency-svc-v7cqb [850.118337ms]
Apr 1 14:23:58.520: INFO: Created: latency-svc-qjjhw
Apr 1 14:23:58.532: INFO: Got endpoints: latency-svc-qjjhw [819.466589ms]
Apr 1 14:23:58.550: INFO: Created: latency-svc-ddr4t
Apr 1 14:23:58.600: INFO: Got endpoints: latency-svc-ddr4t [857.002565ms]
Apr 1 14:23:58.605: INFO: Created: latency-svc-2tz8l
Apr 1 14:23:58.616: INFO: Got endpoints: latency-svc-2tz8l
[813.253299ms] Apr 1 14:23:58.640: INFO: Created: latency-svc-ssjpr Apr 1 14:23:58.664: INFO: Got endpoints: latency-svc-ssjpr [776.682235ms] Apr 1 14:23:58.688: INFO: Created: latency-svc-kxpsm Apr 1 14:23:58.743: INFO: Got endpoints: latency-svc-kxpsm [801.977392ms] Apr 1 14:23:58.746: INFO: Created: latency-svc-b76d4 Apr 1 14:23:58.803: INFO: Got endpoints: latency-svc-b76d4 [772.70126ms] Apr 1 14:23:58.875: INFO: Created: latency-svc-v6zhs Apr 1 14:23:58.879: INFO: Got endpoints: latency-svc-v6zhs [796.539867ms] Apr 1 14:23:58.935: INFO: Created: latency-svc-wgm4t Apr 1 14:23:58.949: INFO: Got endpoints: latency-svc-wgm4t [792.93634ms] Apr 1 14:23:58.970: INFO: Created: latency-svc-4rx2k Apr 1 14:23:59.001: INFO: Got endpoints: latency-svc-4rx2k [812.707762ms] Apr 1 14:23:59.024: INFO: Created: latency-svc-bkmsw Apr 1 14:23:59.048: INFO: Got endpoints: latency-svc-bkmsw [829.88703ms] Apr 1 14:23:59.072: INFO: Created: latency-svc-vfnqf Apr 1 14:23:59.082: INFO: Got endpoints: latency-svc-vfnqf [763.49764ms] Apr 1 14:23:59.151: INFO: Created: latency-svc-jzbn8 Apr 1 14:23:59.154: INFO: Got endpoints: latency-svc-jzbn8 [796.684323ms] Apr 1 14:23:59.180: INFO: Created: latency-svc-cf565 Apr 1 14:23:59.196: INFO: Got endpoints: latency-svc-cf565 [797.112234ms] Apr 1 14:23:59.216: INFO: Created: latency-svc-8zzwq Apr 1 14:23:59.233: INFO: Got endpoints: latency-svc-8zzwq [767.320462ms] Apr 1 14:23:59.289: INFO: Created: latency-svc-n5grv Apr 1 14:23:59.292: INFO: Got endpoints: latency-svc-n5grv [790.151797ms] Apr 1 14:23:59.342: INFO: Created: latency-svc-lpw8p Apr 1 14:23:59.353: INFO: Got endpoints: latency-svc-lpw8p [821.351695ms] Apr 1 14:23:59.372: INFO: Created: latency-svc-49nzd Apr 1 14:23:59.383: INFO: Got endpoints: latency-svc-49nzd [783.859019ms] Apr 1 14:23:59.432: INFO: Created: latency-svc-gqf98 Apr 1 14:23:59.450: INFO: Got endpoints: latency-svc-gqf98 [833.920981ms] Apr 1 14:23:59.475: INFO: Created: latency-svc-vpbxj Apr 1 14:23:59.486: INFO: Got 
endpoints: latency-svc-vpbxj [822.075921ms] Apr 1 14:23:59.504: INFO: Created: latency-svc-m6mmv Apr 1 14:23:59.516: INFO: Got endpoints: latency-svc-m6mmv [772.860112ms] Apr 1 14:23:59.576: INFO: Created: latency-svc-p5qh4 Apr 1 14:23:59.579: INFO: Got endpoints: latency-svc-p5qh4 [775.628338ms] Apr 1 14:23:59.613: INFO: Created: latency-svc-nxkjh Apr 1 14:23:59.648: INFO: Got endpoints: latency-svc-nxkjh [769.180202ms] Apr 1 14:23:59.714: INFO: Created: latency-svc-w9zcl Apr 1 14:23:59.717: INFO: Got endpoints: latency-svc-w9zcl [767.396679ms] Apr 1 14:23:59.744: INFO: Created: latency-svc-2wbb2 Apr 1 14:23:59.758: INFO: Got endpoints: latency-svc-2wbb2 [757.43363ms] Apr 1 14:23:59.775: INFO: Created: latency-svc-6f4z5 Apr 1 14:23:59.788: INFO: Got endpoints: latency-svc-6f4z5 [739.400528ms] Apr 1 14:23:59.804: INFO: Created: latency-svc-5gggf Apr 1 14:23:59.833: INFO: Got endpoints: latency-svc-5gggf [751.230041ms] Apr 1 14:23:59.864: INFO: Created: latency-svc-ddnlw Apr 1 14:23:59.878: INFO: Got endpoints: latency-svc-ddnlw [724.397905ms] Apr 1 14:23:59.895: INFO: Created: latency-svc-sgm69 Apr 1 14:23:59.909: INFO: Got endpoints: latency-svc-sgm69 [712.218339ms] Apr 1 14:23:59.930: INFO: Created: latency-svc-6sqdq Apr 1 14:24:00.031: INFO: Got endpoints: latency-svc-6sqdq [797.822798ms] Apr 1 14:24:00.044: INFO: Created: latency-svc-q2gc5 Apr 1 14:24:00.059: INFO: Got endpoints: latency-svc-q2gc5 [767.310347ms] Apr 1 14:24:00.093: INFO: Created: latency-svc-5jbtb Apr 1 14:24:00.109: INFO: Got endpoints: latency-svc-5jbtb [755.406838ms] Apr 1 14:24:00.181: INFO: Created: latency-svc-4prjd Apr 1 14:24:00.185: INFO: Got endpoints: latency-svc-4prjd [801.479099ms] Apr 1 14:24:00.219: INFO: Created: latency-svc-56qrs Apr 1 14:24:00.260: INFO: Got endpoints: latency-svc-56qrs [809.459962ms] Apr 1 14:24:00.336: INFO: Created: latency-svc-4stvm Apr 1 14:24:00.368: INFO: Got endpoints: latency-svc-4stvm [882.181279ms] Apr 1 14:24:00.369: INFO: Created: 
latency-svc-z9vtq Apr 1 14:24:00.384: INFO: Got endpoints: latency-svc-z9vtq [867.958245ms] Apr 1 14:24:00.405: INFO: Created: latency-svc-vvft7 Apr 1 14:24:00.420: INFO: Got endpoints: latency-svc-vvft7 [841.105237ms] Apr 1 14:24:00.480: INFO: Created: latency-svc-9f9dh Apr 1 14:24:00.483: INFO: Got endpoints: latency-svc-9f9dh [834.792133ms] Apr 1 14:24:00.536: INFO: Created: latency-svc-jx7sw Apr 1 14:24:00.553: INFO: Got endpoints: latency-svc-jx7sw [836.171179ms] Apr 1 14:24:00.572: INFO: Created: latency-svc-zwcrp Apr 1 14:24:00.606: INFO: Got endpoints: latency-svc-zwcrp [847.508377ms] Apr 1 14:24:00.632: INFO: Created: latency-svc-2vvmg Apr 1 14:24:00.643: INFO: Got endpoints: latency-svc-2vvmg [855.464958ms] Apr 1 14:24:00.662: INFO: Created: latency-svc-pjbvc Apr 1 14:24:00.686: INFO: Got endpoints: latency-svc-pjbvc [853.244909ms] Apr 1 14:24:00.744: INFO: Created: latency-svc-pffdh Apr 1 14:24:00.770: INFO: Got endpoints: latency-svc-pffdh [891.820373ms] Apr 1 14:24:00.771: INFO: Created: latency-svc-wkm42 Apr 1 14:24:00.782: INFO: Got endpoints: latency-svc-wkm42 [873.236764ms] Apr 1 14:24:00.800: INFO: Created: latency-svc-6p4w5 Apr 1 14:24:00.812: INFO: Got endpoints: latency-svc-6p4w5 [781.363108ms] Apr 1 14:24:00.842: INFO: Created: latency-svc-5wwp9 Apr 1 14:24:00.905: INFO: Got endpoints: latency-svc-5wwp9 [845.932279ms] Apr 1 14:24:00.927: INFO: Created: latency-svc-dwktl Apr 1 14:24:00.940: INFO: Got endpoints: latency-svc-dwktl [831.782133ms] Apr 1 14:24:00.962: INFO: Created: latency-svc-zjbsl Apr 1 14:24:00.976: INFO: Got endpoints: latency-svc-zjbsl [791.395371ms] Apr 1 14:24:00.999: INFO: Created: latency-svc-xmv6j Apr 1 14:24:01.043: INFO: Got endpoints: latency-svc-xmv6j [782.781118ms] Apr 1 14:24:01.059: INFO: Created: latency-svc-fspl9 Apr 1 14:24:01.082: INFO: Got endpoints: latency-svc-fspl9 [714.015319ms] Apr 1 14:24:01.125: INFO: Created: latency-svc-5826w Apr 1 14:24:01.198: INFO: Got endpoints: latency-svc-5826w [814.258531ms] 
Apr 1 14:24:01.220: INFO: Created: latency-svc-b5vw9 Apr 1 14:24:01.236: INFO: Got endpoints: latency-svc-b5vw9 [815.357127ms] Apr 1 14:24:01.257: INFO: Created: latency-svc-zzpcp Apr 1 14:24:01.272: INFO: Got endpoints: latency-svc-zzpcp [788.699634ms] Apr 1 14:24:01.292: INFO: Created: latency-svc-xltgt Apr 1 14:24:01.354: INFO: Got endpoints: latency-svc-xltgt [801.046033ms] Apr 1 14:24:01.357: INFO: Created: latency-svc-9k7rc Apr 1 14:24:01.362: INFO: Got endpoints: latency-svc-9k7rc [756.328708ms] Apr 1 14:24:01.383: INFO: Created: latency-svc-pjtgb Apr 1 14:24:01.399: INFO: Got endpoints: latency-svc-pjtgb [755.415148ms] Apr 1 14:24:01.419: INFO: Created: latency-svc-mszsj Apr 1 14:24:01.429: INFO: Got endpoints: latency-svc-mszsj [742.33963ms] Apr 1 14:24:01.448: INFO: Created: latency-svc-wvrsr Apr 1 14:24:01.505: INFO: Got endpoints: latency-svc-wvrsr [734.63468ms] Apr 1 14:24:01.532: INFO: Created: latency-svc-4zjj9 Apr 1 14:24:01.543: INFO: Got endpoints: latency-svc-4zjj9 [761.252168ms] Apr 1 14:24:01.636: INFO: Created: latency-svc-dghzs Apr 1 14:24:01.638: INFO: Got endpoints: latency-svc-dghzs [825.882035ms] Apr 1 14:24:01.659: INFO: Created: latency-svc-lhq29 Apr 1 14:24:01.670: INFO: Got endpoints: latency-svc-lhq29 [764.883633ms] Apr 1 14:24:01.701: INFO: Created: latency-svc-krhjl Apr 1 14:24:01.724: INFO: Got endpoints: latency-svc-krhjl [85.704214ms] Apr 1 14:24:01.768: INFO: Created: latency-svc-9v6xb Apr 1 14:24:01.771: INFO: Got endpoints: latency-svc-9v6xb [830.925981ms] Apr 1 14:24:01.790: INFO: Created: latency-svc-5g99q Apr 1 14:24:01.803: INFO: Got endpoints: latency-svc-5g99q [826.290806ms] Apr 1 14:24:01.826: INFO: Created: latency-svc-bcdk6 Apr 1 14:24:01.845: INFO: Got endpoints: latency-svc-bcdk6 [802.392725ms] Apr 1 14:24:01.863: INFO: Created: latency-svc-7t84x Apr 1 14:24:01.905: INFO: Got endpoints: latency-svc-7t84x [822.452708ms] Apr 1 14:24:01.916: INFO: Created: latency-svc-ll26q Apr 1 14:24:01.930: INFO: Got endpoints: 
latency-svc-ll26q [731.413089ms] Apr 1 14:24:01.959: INFO: Created: latency-svc-l7znl Apr 1 14:24:01.972: INFO: Got endpoints: latency-svc-l7znl [736.255665ms] Apr 1 14:24:01.988: INFO: Created: latency-svc-ds8v7 Apr 1 14:24:02.044: INFO: Got endpoints: latency-svc-ds8v7 [772.480948ms] Apr 1 14:24:02.079: INFO: Created: latency-svc-fbh5f Apr 1 14:24:02.086: INFO: Got endpoints: latency-svc-fbh5f [732.16993ms] Apr 1 14:24:02.108: INFO: Created: latency-svc-wnrj5 Apr 1 14:24:02.117: INFO: Got endpoints: latency-svc-wnrj5 [754.585374ms] Apr 1 14:24:02.157: INFO: Created: latency-svc-tcrlp Apr 1 14:24:02.160: INFO: Got endpoints: latency-svc-tcrlp [761.717821ms] Apr 1 14:24:02.193: INFO: Created: latency-svc-j2v6d Apr 1 14:24:02.222: INFO: Got endpoints: latency-svc-j2v6d [793.458513ms] Apr 1 14:24:02.253: INFO: Created: latency-svc-z75pw Apr 1 14:24:02.288: INFO: Got endpoints: latency-svc-z75pw [783.330315ms] Apr 1 14:24:02.300: INFO: Created: latency-svc-cbzq7 Apr 1 14:24:02.316: INFO: Got endpoints: latency-svc-cbzq7 [772.191722ms] Apr 1 14:24:02.343: INFO: Created: latency-svc-z2vcs Apr 1 14:24:02.359: INFO: Got endpoints: latency-svc-z2vcs [688.96696ms] Apr 1 14:24:02.378: INFO: Created: latency-svc-lvhjb Apr 1 14:24:02.426: INFO: Got endpoints: latency-svc-lvhjb [701.899213ms] Apr 1 14:24:02.438: INFO: Created: latency-svc-qc8c6 Apr 1 14:24:02.456: INFO: Got endpoints: latency-svc-qc8c6 [684.203635ms] Apr 1 14:24:02.486: INFO: Created: latency-svc-r2p4m Apr 1 14:24:02.503: INFO: Got endpoints: latency-svc-r2p4m [700.261214ms] Apr 1 14:24:02.522: INFO: Created: latency-svc-4s5w8 Apr 1 14:24:02.563: INFO: Got endpoints: latency-svc-4s5w8 [718.294105ms] Apr 1 14:24:02.576: INFO: Created: latency-svc-sjgv6 Apr 1 14:24:02.588: INFO: Got endpoints: latency-svc-sjgv6 [682.859383ms] Apr 1 14:24:02.606: INFO: Created: latency-svc-b9hpb Apr 1 14:24:02.618: INFO: Got endpoints: latency-svc-b9hpb [688.133562ms] Apr 1 14:24:02.636: INFO: Created: latency-svc-zsclq Apr 1 
14:24:02.660: INFO: Got endpoints: latency-svc-zsclq [688.277336ms] Apr 1 14:24:02.726: INFO: Created: latency-svc-pcxkt Apr 1 14:24:02.732: INFO: Got endpoints: latency-svc-pcxkt [688.026771ms] Apr 1 14:24:02.768: INFO: Created: latency-svc-l7krk Apr 1 14:24:02.811: INFO: Got endpoints: latency-svc-l7krk [725.092458ms] Apr 1 14:24:02.870: INFO: Created: latency-svc-hq9ts Apr 1 14:24:02.873: INFO: Got endpoints: latency-svc-hq9ts [755.638961ms] Apr 1 14:24:02.894: INFO: Created: latency-svc-6hwhr Apr 1 14:24:02.907: INFO: Got endpoints: latency-svc-6hwhr [746.922966ms] Apr 1 14:24:02.925: INFO: Created: latency-svc-sjsfg Apr 1 14:24:02.937: INFO: Got endpoints: latency-svc-sjsfg [715.057601ms] Apr 1 14:24:02.954: INFO: Created: latency-svc-xqzw8 Apr 1 14:24:02.968: INFO: Got endpoints: latency-svc-xqzw8 [679.696385ms] Apr 1 14:24:03.025: INFO: Created: latency-svc-qqbk6 Apr 1 14:24:03.050: INFO: Got endpoints: latency-svc-qqbk6 [734.354345ms] Apr 1 14:24:03.051: INFO: Created: latency-svc-gxwzd Apr 1 14:24:03.058: INFO: Got endpoints: latency-svc-gxwzd [698.9652ms] Apr 1 14:24:03.074: INFO: Created: latency-svc-cc7kw Apr 1 14:24:03.084: INFO: Got endpoints: latency-svc-cc7kw [657.81251ms] Apr 1 14:24:03.110: INFO: Created: latency-svc-cphsp Apr 1 14:24:03.174: INFO: Got endpoints: latency-svc-cphsp [718.761161ms] Apr 1 14:24:03.200: INFO: Created: latency-svc-4fglz Apr 1 14:24:03.224: INFO: Got endpoints: latency-svc-4fglz [721.012262ms] Apr 1 14:24:03.248: INFO: Created: latency-svc-rhhqt Apr 1 14:24:03.257: INFO: Got endpoints: latency-svc-rhhqt [693.824425ms] Apr 1 14:24:03.301: INFO: Created: latency-svc-zd5rh Apr 1 14:24:03.303: INFO: Got endpoints: latency-svc-zd5rh [714.94973ms] Apr 1 14:24:03.326: INFO: Created: latency-svc-c4wxs Apr 1 14:24:03.342: INFO: Got endpoints: latency-svc-c4wxs [724.037954ms] Apr 1 14:24:03.362: INFO: Created: latency-svc-hfkvm Apr 1 14:24:03.379: INFO: Got endpoints: latency-svc-hfkvm [718.29879ms] Apr 1 14:24:03.399: INFO: 
Created: latency-svc-xd2kj Apr 1 14:24:03.432: INFO: Got endpoints: latency-svc-xd2kj [699.300104ms] Apr 1 14:24:03.446: INFO: Created: latency-svc-chpwx Apr 1 14:24:03.463: INFO: Got endpoints: latency-svc-chpwx [651.649048ms] Apr 1 14:24:03.495: INFO: Created: latency-svc-mt2sv Apr 1 14:24:03.505: INFO: Got endpoints: latency-svc-mt2sv [632.594494ms] Apr 1 14:24:03.530: INFO: Created: latency-svc-79jv4 Apr 1 14:24:03.564: INFO: Got endpoints: latency-svc-79jv4 [656.442098ms] Apr 1 14:24:03.578: INFO: Created: latency-svc-d96n4 Apr 1 14:24:03.590: INFO: Got endpoints: latency-svc-d96n4 [652.673874ms] Apr 1 14:24:03.608: INFO: Created: latency-svc-kspsj Apr 1 14:24:03.650: INFO: Got endpoints: latency-svc-kspsj [682.345222ms] Apr 1 14:24:03.737: INFO: Created: latency-svc-8cbxw Apr 1 14:24:03.753: INFO: Got endpoints: latency-svc-8cbxw [702.517937ms] Apr 1 14:24:03.801: INFO: Created: latency-svc-pbkkg Apr 1 14:24:03.825: INFO: Got endpoints: latency-svc-pbkkg [766.959138ms] Apr 1 14:24:03.906: INFO: Created: latency-svc-h84vk Apr 1 14:24:03.921: INFO: Got endpoints: latency-svc-h84vk [836.831059ms] Apr 1 14:24:03.994: INFO: Created: latency-svc-xs9rx Apr 1 14:24:04.055: INFO: Got endpoints: latency-svc-xs9rx [880.324223ms] Apr 1 14:24:04.059: INFO: Created: latency-svc-z85cf Apr 1 14:24:04.083: INFO: Got endpoints: latency-svc-z85cf [858.982256ms] Apr 1 14:24:04.126: INFO: Created: latency-svc-kwd5x Apr 1 14:24:04.210: INFO: Got endpoints: latency-svc-kwd5x [953.015562ms] Apr 1 14:24:04.257: INFO: Created: latency-svc-gjlgt Apr 1 14:24:04.276: INFO: Got endpoints: latency-svc-gjlgt [973.525664ms] Apr 1 14:24:04.306: INFO: Created: latency-svc-5w9ld Apr 1 14:24:04.348: INFO: Got endpoints: latency-svc-5w9ld [1.005712226s] Apr 1 14:24:04.360: INFO: Created: latency-svc-xf8kg Apr 1 14:24:04.390: INFO: Got endpoints: latency-svc-xf8kg [1.011593326s] Apr 1 14:24:04.407: INFO: Created: latency-svc-lzgrh Apr 1 14:24:04.415: INFO: Got endpoints: latency-svc-lzgrh 
[982.843719ms] Apr 1 14:24:04.432: INFO: Created: latency-svc-kjd8z Apr 1 14:24:04.486: INFO: Got endpoints: latency-svc-kjd8z [1.022821242s] Apr 1 14:24:04.498: INFO: Created: latency-svc-8kwdv Apr 1 14:24:04.511: INFO: Got endpoints: latency-svc-8kwdv [1.005983711s] Apr 1 14:24:04.527: INFO: Created: latency-svc-xv8w8 Apr 1 14:24:04.541: INFO: Got endpoints: latency-svc-xv8w8 [977.381013ms] Apr 1 14:24:04.558: INFO: Created: latency-svc-gll2t Apr 1 14:24:04.572: INFO: Got endpoints: latency-svc-gll2t [981.55963ms] Apr 1 14:24:04.618: INFO: Created: latency-svc-m7rkb Apr 1 14:24:04.621: INFO: Got endpoints: latency-svc-m7rkb [970.917253ms] Apr 1 14:24:04.655: INFO: Created: latency-svc-mj2l6 Apr 1 14:24:04.667: INFO: Got endpoints: latency-svc-mj2l6 [914.210912ms] Apr 1 14:24:04.685: INFO: Created: latency-svc-swmvz Apr 1 14:24:04.714: INFO: Got endpoints: latency-svc-swmvz [889.499849ms] Apr 1 14:24:04.786: INFO: Created: latency-svc-r4rvj Apr 1 14:24:04.817: INFO: Got endpoints: latency-svc-r4rvj [896.424788ms] Apr 1 14:24:04.840: INFO: Created: latency-svc-bwjrt Apr 1 14:24:04.853: INFO: Got endpoints: latency-svc-bwjrt [798.201123ms] Apr 1 14:24:04.905: INFO: Created: latency-svc-7mgms Apr 1 14:24:04.913: INFO: Got endpoints: latency-svc-7mgms [830.067448ms] Apr 1 14:24:04.935: INFO: Created: latency-svc-bnsf7 Apr 1 14:24:04.950: INFO: Got endpoints: latency-svc-bnsf7 [739.442726ms] Apr 1 14:24:04.972: INFO: Created: latency-svc-bkmwk Apr 1 14:24:04.980: INFO: Got endpoints: latency-svc-bkmwk [703.735681ms] Apr 1 14:24:05.002: INFO: Created: latency-svc-jtp9s Apr 1 14:24:05.067: INFO: Got endpoints: latency-svc-jtp9s [718.710614ms] Apr 1 14:24:05.069: INFO: Created: latency-svc-dc95l Apr 1 14:24:05.077: INFO: Got endpoints: latency-svc-dc95l [686.785446ms] Apr 1 14:24:05.140: INFO: Created: latency-svc-c95c9 Apr 1 14:24:05.155: INFO: Got endpoints: latency-svc-c95c9 [740.588205ms] Apr 1 14:24:05.205: INFO: Created: latency-svc-p2dv6 Apr 1 14:24:05.209: INFO: 
Got endpoints: latency-svc-p2dv6 [723.502387ms] Apr 1 14:24:05.229: INFO: Created: latency-svc-pkb9n Apr 1 14:24:05.253: INFO: Got endpoints: latency-svc-pkb9n [741.828812ms] Apr 1 14:24:05.283: INFO: Created: latency-svc-8bsff Apr 1 14:24:05.300: INFO: Got endpoints: latency-svc-8bsff [758.625276ms] Apr 1 14:24:05.354: INFO: Created: latency-svc-8mrt8 Apr 1 14:24:05.373: INFO: Got endpoints: latency-svc-8mrt8 [801.392947ms] Apr 1 14:24:05.398: INFO: Created: latency-svc-lhgkk Apr 1 14:24:05.408: INFO: Got endpoints: latency-svc-lhgkk [786.861431ms] Apr 1 14:24:05.427: INFO: Created: latency-svc-r6lkp Apr 1 14:24:05.439: INFO: Got endpoints: latency-svc-r6lkp [771.459581ms] Apr 1 14:24:05.492: INFO: Created: latency-svc-4mkws Apr 1 14:24:05.495: INFO: Got endpoints: latency-svc-4mkws [780.739289ms] Apr 1 14:24:05.523: INFO: Created: latency-svc-lfzht Apr 1 14:24:05.536: INFO: Got endpoints: latency-svc-lfzht [718.526612ms] Apr 1 14:24:05.554: INFO: Created: latency-svc-x8f52 Apr 1 14:24:05.566: INFO: Got endpoints: latency-svc-x8f52 [712.609102ms] Apr 1 14:24:05.583: INFO: Created: latency-svc-j6vgl Apr 1 14:24:05.623: INFO: Got endpoints: latency-svc-j6vgl [710.007657ms] Apr 1 14:24:05.627: INFO: Created: latency-svc-smkmm Apr 1 14:24:05.638: INFO: Got endpoints: latency-svc-smkmm [688.136018ms] Apr 1 14:24:05.662: INFO: Created: latency-svc-h89rb Apr 1 14:24:05.674: INFO: Got endpoints: latency-svc-h89rb [694.298056ms] Apr 1 14:24:05.703: INFO: Created: latency-svc-ws2cm Apr 1 14:24:05.717: INFO: Got endpoints: latency-svc-ws2cm [650.130373ms] Apr 1 14:24:05.764: INFO: Created: latency-svc-c86p7 Apr 1 14:24:05.789: INFO: Got endpoints: latency-svc-c86p7 [712.277517ms] Apr 1 14:24:05.811: INFO: Created: latency-svc-xbwkm Apr 1 14:24:05.826: INFO: Got endpoints: latency-svc-xbwkm [670.540071ms] Apr 1 14:24:05.848: INFO: Created: latency-svc-xhdp9 Apr 1 14:24:05.893: INFO: Got endpoints: latency-svc-xhdp9 [683.627952ms] Apr 1 14:24:05.919: INFO: Created: 
latency-svc-r54zh Apr 1 14:24:05.934: INFO: Got endpoints: latency-svc-r54zh [680.902148ms] Apr 1 14:24:05.967: INFO: Created: latency-svc-hxqvd Apr 1 14:24:06.049: INFO: Got endpoints: latency-svc-hxqvd [748.713916ms] Apr 1 14:24:06.051: INFO: Created: latency-svc-5d7bk Apr 1 14:24:06.063: INFO: Got endpoints: latency-svc-5d7bk [689.527839ms] Apr 1 14:24:06.099: INFO: Created: latency-svc-7m5v8 Apr 1 14:24:06.115: INFO: Got endpoints: latency-svc-7m5v8 [706.398621ms] Apr 1 14:24:06.193: INFO: Created: latency-svc-xw5w2 Apr 1 14:24:06.195: INFO: Got endpoints: latency-svc-xw5w2 [756.240488ms] Apr 1 14:24:06.195: INFO: Latencies: [47.174714ms 77.778024ms 85.704214ms 168.610783ms 192.549828ms 233.316719ms 281.119512ms 312.415487ms 349.383131ms 445.078847ms 456.899883ms 492.354343ms 576.144229ms 583.070244ms 613.669212ms 632.594494ms 643.536001ms 650.130373ms 651.649048ms 652.673874ms 656.442098ms 657.81251ms 666.864482ms 670.540071ms 679.070546ms 679.696385ms 680.902148ms 682.345222ms 682.859383ms 683.627952ms 684.203635ms 686.785446ms 688.026771ms 688.133562ms 688.136018ms 688.277336ms 688.96696ms 689.527839ms 693.824425ms 694.298056ms 698.9652ms 699.300104ms 700.261214ms 700.331179ms 700.375673ms 700.605949ms 701.899213ms 702.517937ms 703.735681ms 703.886107ms 706.398621ms 710.007657ms 712.218339ms 712.277517ms 712.609102ms 713.104342ms 714.015319ms 714.571435ms 714.94973ms 715.057601ms 717.871312ms 718.294105ms 718.29879ms 718.526612ms 718.710614ms 718.761161ms 718.79292ms 721.012262ms 723.502387ms 723.538006ms 723.679407ms 724.037954ms 724.397905ms 724.604047ms 724.689767ms 725.092458ms 728.31566ms 731.413089ms 732.16993ms 734.354345ms 734.63468ms 736.255665ms 737.935607ms 739.400528ms 739.442726ms 740.588205ms 740.846218ms 741.088597ms 741.828812ms 742.33963ms 746.922966ms 748.713916ms 751.230041ms 754.501523ms 754.585374ms 755.406838ms 755.415148ms 755.638961ms 756.240488ms 756.328708ms 757.10361ms 757.43363ms 758.625276ms 761.252168ms 761.717821ms 762.474628ms 
763.49764ms 764.883633ms 766.718102ms 766.959138ms 767.310347ms 767.320462ms 767.396679ms 769.180202ms 771.408882ms 771.459581ms 772.191722ms 772.480948ms 772.70126ms 772.860112ms 775.628338ms 776.682235ms 780.739289ms 781.363108ms 782.178772ms 782.781118ms 783.330315ms 783.46593ms 783.859019ms 786.861431ms 788.699634ms 790.151797ms 790.610026ms 791.395371ms 792.93634ms 793.458513ms 795.779039ms 796.539867ms 796.684323ms 797.112234ms 797.822798ms 798.201123ms 801.046033ms 801.392947ms 801.479099ms 801.977392ms 802.392725ms 807.842651ms 808.332772ms 809.459962ms 810.382068ms 812.707762ms 813.253299ms 814.258531ms 815.357127ms 819.466589ms 819.684825ms 821.351695ms 822.075921ms 822.452708ms 825.882035ms 826.290806ms 829.88703ms 830.067448ms 830.925981ms 831.782133ms 833.920981ms 834.792133ms 836.171179ms 836.831059ms 837.165297ms 841.105237ms 841.916142ms 844.018893ms 845.932279ms 847.508377ms 850.118337ms 853.244909ms 855.464958ms 857.002565ms 858.982256ms 867.958245ms 873.236764ms 874.006527ms 880.324223ms 882.181279ms 889.499849ms 891.820373ms 896.424788ms 914.210912ms 953.015562ms 970.917253ms 973.525664ms 977.381013ms 981.55963ms 982.843719ms 1.005712226s 1.005983711s 1.011593326s 1.022821242s] Apr 1 14:24:06.195: INFO: 50 %ile: 757.10361ms Apr 1 14:24:06.195: INFO: 90 %ile: 858.982256ms Apr 1 14:24:06.195: INFO: 99 %ile: 1.011593326s Apr 1 14:24:06.195: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 14:24:06.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-3503" for this suite. 
Apr 1 14:24:26.228: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 14:24:26.317: INFO: namespace svc-latency-3503 deletion completed in 20.11194955s
• [SLOW TEST:34.665 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should not be very high [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-apps] Deployment deployment should support rollover [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 14:24:26.317: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support rollover [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 1 14:24:26.422: INFO: Pod name rollover-pod: Found 0 pods out of 1
Apr 1 14:24:31.427: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Apr 1 14:24:31.427: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Apr 1 14:24:33.430: INFO: Creating deployment "test-rollover-deployment"
Apr 1 14:24:33.437: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Apr 1 14:24:35.443: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Apr 1 14:24:35.453: INFO: Ensure that both replica
sets have 1 created replica Apr 1 14:24:35.459: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Apr 1 14:24:35.465: INFO: Updating deployment test-rollover-deployment Apr 1 14:24:35.465: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Apr 1 14:24:37.547: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Apr 1 14:24:37.554: INFO: Make sure deployment "test-rollover-deployment" is complete Apr 1 14:24:37.559: INFO: all replica sets need to contain the pod-template-hash label Apr 1 14:24:37.559: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721347873, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721347873, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721347875, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721347873, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 1 14:24:39.568: INFO: all replica sets need to contain the pod-template-hash label Apr 1 14:24:39.568: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721347873, 
loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721347873, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721347878, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721347873, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 1 14:24:41.567: INFO: all replica sets need to contain the pod-template-hash label Apr 1 14:24:41.567: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721347873, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721347873, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721347878, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721347873, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 1 14:24:43.589: INFO: all replica sets need to contain the pod-template-hash label Apr 1 14:24:43.589: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721347873, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721347873, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721347878, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721347873, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 1 14:24:45.566: INFO: all replica sets need to contain the pod-template-hash label Apr 1 14:24:45.566: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721347873, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721347873, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721347878, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721347873, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 1 14:24:47.567: INFO: all replica sets need to contain the pod-template-hash label Apr 1 14:24:47.567: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, 
Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721347873, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721347873, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721347878, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721347873, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 1 14:24:49.566: INFO: Apr 1 14:24:49.566: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Apr 1 14:24:49.575: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-3882,SelfLink:/apis/apps/v1/namespaces/deployment-3882/deployments/test-rollover-deployment,UID:86f75423-e7f1-47c1-b298-30f69f2322ce,ResourceVersion:3049653,Generation:2,CreationTimestamp:2020-04-01 14:24:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: 
rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-04-01 14:24:33 +0000 UTC 2020-04-01 
14:24:33 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-04-01 14:24:49 +0000 UTC 2020-04-01 14:24:33 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Apr 1 14:24:49.578: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-3882,SelfLink:/apis/apps/v1/namespaces/deployment-3882/replicasets/test-rollover-deployment-854595fc44,UID:24b8c92f-fc9e-4765-80ee-a71df961d36b,ResourceVersion:3049641,Generation:2,CreationTimestamp:2020-04-01 14:24:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 86f75423-e7f1-47c1-b298-30f69f2322ce 0xc002697ab7 0xc002697ab8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis 
gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Apr 1 14:24:49.578: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Apr 1 14:24:49.578: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-3882,SelfLink:/apis/apps/v1/namespaces/deployment-3882/replicasets/test-rollover-controller,UID:94971fa9-2a42-40e8-a974-1fc3d7e57448,ResourceVersion:3049651,Generation:2,CreationTimestamp:2020-04-01 14:24:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 
86f75423-e7f1-47c1-b298-30f69f2322ce 0xc0026979e7 0xc0026979e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Apr 1 14:24:49.578: INFO: 
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-3882,SelfLink:/apis/apps/v1/namespaces/deployment-3882/replicasets/test-rollover-deployment-9b8b997cf,UID:8a41c432-c5ab-4b0c-a47e-6c51fe989b45,ResourceVersion:3049601,Generation:2,CreationTimestamp:2020-04-01 14:24:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 86f75423-e7f1-47c1-b298-30f69f2322ce 0xc002697b80 0xc002697b81}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Apr 1 14:24:49.581: INFO: Pod "test-rollover-deployment-854595fc44-gfg8c" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-gfg8c,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-3882,SelfLink:/api/v1/namespaces/deployment-3882/pods/test-rollover-deployment-854595fc44-gfg8c,UID:1ed0f124-6ada-4427-8318-211cdee0a734,ResourceVersion:3049620,Generation:0,CreationTimestamp:2020-04-01 14:24:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 24b8c92f-fc9e-4765-80ee-a71df961d36b 0xc003158c37 0xc003158c38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rnrzj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rnrzj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-rnrzj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc003158cb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc003158cd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:24:35 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:24:38 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:24:38 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:24:35 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.7,StartTime:2020-04-01 14:24:35 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-04-01 14:24:38 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 
gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://205398835bed4bd7be2e3dfe9f807d7840cbafa4d61a527d1b8455c4947007ba}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 14:24:49.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3882" for this suite. Apr 1 14:24:57.615: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 14:24:57.716: INFO: namespace deployment-3882 deletion completed in 8.132382651s • [SLOW TEST:31.399 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 14:24:57.717: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-f350a36a-0aee-4108-9b8c-daadd151fa0e STEP: 
Creating a pod to test consume secrets Apr 1 14:24:57.819: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ca858a6e-b6fc-4655-8927-9d07eb69faed" in namespace "projected-8199" to be "success or failure" Apr 1 14:24:57.831: INFO: Pod "pod-projected-secrets-ca858a6e-b6fc-4655-8927-9d07eb69faed": Phase="Pending", Reason="", readiness=false. Elapsed: 12.064688ms Apr 1 14:24:59.835: INFO: Pod "pod-projected-secrets-ca858a6e-b6fc-4655-8927-9d07eb69faed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016257977s Apr 1 14:25:01.840: INFO: Pod "pod-projected-secrets-ca858a6e-b6fc-4655-8927-9d07eb69faed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020985647s STEP: Saw pod success Apr 1 14:25:01.840: INFO: Pod "pod-projected-secrets-ca858a6e-b6fc-4655-8927-9d07eb69faed" satisfied condition "success or failure" Apr 1 14:25:01.843: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-ca858a6e-b6fc-4655-8927-9d07eb69faed container projected-secret-volume-test: STEP: delete the pod Apr 1 14:25:01.876: INFO: Waiting for pod pod-projected-secrets-ca858a6e-b6fc-4655-8927-9d07eb69faed to disappear Apr 1 14:25:01.891: INFO: Pod pod-projected-secrets-ca858a6e-b6fc-4655-8927-9d07eb69faed no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 14:25:01.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8199" for this suite. 
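The projected-secret test above creates a Secret, mounts it into a pod through a `projected` volume, and expects the pod to exit successfully after reading the file. A minimal manifest sketch of what the framework exercises — the Secret, pod, and container names come from the log, but the key name, test image, and args are illustrative assumptions, not the framework's exact spec:

```yaml
# Hypothetical reconstruction of the projected-secret test objects.
apiVersion: v1
kind: Secret
metadata:
  name: projected-secret-test-f350a36a-0aee-4108-9b8c-daadd151fa0e
  namespace: projected-8199
stringData:
  data-1: value-1                  # key/value are assumptions
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-ca858a6e-b6fc-4655-8927-9d07eb69faed
  namespace: projected-8199
spec:
  restartPolicy: Never             # test waits for phase "Succeeded"
  containers:
  - name: projected-secret-volume-test
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # image is an assumption
    args: ["--file_content=/etc/projected-secret-volume/data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test-f350a36a-0aee-4108-9b8c-daadd151fa0e
```

The test then reads the container's logs to confirm the mounted file content, which is why the log shows "Trying to get logs from node iruya-worker" before deleting the pod.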
Apr 1 14:25:07.906: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 14:25:07.988: INFO: namespace projected-8199 deletion completed in 6.093776505s • [SLOW TEST:10.271 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 14:25:07.989: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token Apr 1 14:25:08.584: INFO: created pod pod-service-account-defaultsa Apr 1 14:25:08.584: INFO: pod pod-service-account-defaultsa service account token volume mount: true Apr 1 14:25:08.619: INFO: created pod pod-service-account-mountsa Apr 1 14:25:08.619: INFO: pod pod-service-account-mountsa service account token volume mount: true Apr 1 14:25:08.631: INFO: created pod pod-service-account-nomountsa Apr 1 14:25:08.631: INFO: pod pod-service-account-nomountsa service account token volume mount: false Apr 1 14:25:08.658: INFO: created pod pod-service-account-defaultsa-mountspec Apr 1 
14:25:08.658: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Apr 1 14:25:08.698: INFO: created pod pod-service-account-mountsa-mountspec Apr 1 14:25:08.698: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Apr 1 14:25:08.759: INFO: created pod pod-service-account-nomountsa-mountspec Apr 1 14:25:08.759: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Apr 1 14:25:08.795: INFO: created pod pod-service-account-defaultsa-nomountspec Apr 1 14:25:08.795: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Apr 1 14:25:08.808: INFO: created pod pod-service-account-mountsa-nomountspec Apr 1 14:25:08.808: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Apr 1 14:25:08.825: INFO: created pod pod-service-account-nomountsa-nomountspec Apr 1 14:25:08.825: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 14:25:08.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-3899" for this suite. 
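The automount test above enumerates combinations of two settings: `automountServiceAccountToken` on the ServiceAccount and on the pod spec, where a pod-level value overrides the ServiceAccount default. A sketch of one combination from the log (`pod-service-account-nomountsa-mountspec`, which mounted the token despite its ServiceAccount opting out) — the ServiceAccount name and image are illustrative assumptions:

```yaml
# SA-level default: do not mount an API token volume.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nomount-sa                     # hypothetical name
automountServiceAccountToken: false
---
# Pod-level spec overrides the SA, so the token volume IS mounted
# (matching the log line "... nomountsa-mountspec ... mount: true").
apiVersion: v1
kind: Pod
metadata:
  name: pod-service-account-nomountsa-mountspec
spec:
  serviceAccountName: nomount-sa
  automountServiceAccountToken: true
  containers:
  - name: token-test
    image: docker.io/library/nginx:1.14-alpine   # image choice is an assumption
```

When neither field is set (the `defaultsa` pods), the token is mounted, which is the default behavior the log records as `mount: true`.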
Apr 1 14:25:34.938: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 14:25:35.015: INFO: namespace svcaccounts-3899 deletion completed in 26.122183512s • [SLOW TEST:27.026 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 14:25:35.015: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-4d0549a9-03de-4ebc-bf4b-1a45844f99ef STEP: Creating a pod to test consume secrets Apr 1 14:25:35.152: INFO: Waiting up to 5m0s for pod "pod-secrets-f275ca03-5215-45c1-8d1f-8ab9886a580a" in namespace "secrets-9475" to be "success or failure" Apr 1 14:25:35.168: INFO: Pod "pod-secrets-f275ca03-5215-45c1-8d1f-8ab9886a580a": Phase="Pending", Reason="", readiness=false. Elapsed: 15.848038ms Apr 1 14:25:37.172: INFO: Pod "pod-secrets-f275ca03-5215-45c1-8d1f-8ab9886a580a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.020252466s Apr 1 14:25:39.176: INFO: Pod "pod-secrets-f275ca03-5215-45c1-8d1f-8ab9886a580a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023778022s STEP: Saw pod success Apr 1 14:25:39.176: INFO: Pod "pod-secrets-f275ca03-5215-45c1-8d1f-8ab9886a580a" satisfied condition "success or failure" Apr 1 14:25:39.178: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-f275ca03-5215-45c1-8d1f-8ab9886a580a container secret-volume-test: STEP: delete the pod Apr 1 14:25:39.200: INFO: Waiting for pod pod-secrets-f275ca03-5215-45c1-8d1f-8ab9886a580a to disappear Apr 1 14:25:39.217: INFO: Pod pod-secrets-f275ca03-5215-45c1-8d1f-8ab9886a580a no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 14:25:39.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9475" for this suite. Apr 1 14:25:45.261: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 14:25:45.345: INFO: namespace secrets-9475 deletion completed in 6.123076726s • [SLOW TEST:10.330 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 14:25:45.345: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 1 14:25:45.392: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 14:25:49.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1257" for this suite. Apr 1 14:26:27.586: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 14:26:27.659: INFO: namespace pods-1257 deletion completed in 38.111047811s • [SLOW TEST:42.314 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 14:26:27.659: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in 
namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 1 14:26:27.746: INFO: Creating deployment "nginx-deployment" Apr 1 14:26:27.782: INFO: Waiting for observed generation 1 Apr 1 14:26:29.813: INFO: Waiting for all required pods to come up Apr 1 14:26:29.818: INFO: Pod name nginx: Found 10 pods out of 10 STEP: ensuring each pod is running Apr 1 14:26:37.832: INFO: Waiting for deployment "nginx-deployment" to complete Apr 1 14:26:37.836: INFO: Updating deployment "nginx-deployment" with a non-existent image Apr 1 14:26:37.843: INFO: Updating deployment nginx-deployment Apr 1 14:26:37.843: INFO: Waiting for observed generation 2 Apr 1 14:26:39.872: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Apr 1 14:26:39.875: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Apr 1 14:26:39.877: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Apr 1 14:26:39.884: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Apr 1 14:26:39.884: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Apr 1 14:26:39.886: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Apr 1 14:26:39.889: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas Apr 1 14:26:39.889: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30 Apr 1 14:26:39.893: INFO: Updating deployment nginx-deployment Apr 1 14:26:39.893: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas Apr 1 14:26:40.022: INFO: Verifying 
that first rollout's replicaset has .spec.replicas = 20 Apr 1 14:26:40.047: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Apr 1 14:26:40.287: INFO: Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-7960,SelfLink:/apis/apps/v1/namespaces/deployment-7960/deployments/nginx-deployment,UID:7655c239-e668-4576-9249-eb31e740bf00,ResourceVersion:3050267,Generation:3,CreationTimestamp:2020-04-01 14:26:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-04-01 14:26:38 +0000 UTC 2020-04-01 14:26:27 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.} {Available False 2020-04-01 14:26:40 +0000 UTC 2020-04-01 14:26:40 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},} Apr 1 14:26:40.432: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-7960,SelfLink:/apis/apps/v1/namespaces/deployment-7960/replicasets/nginx-deployment-55fb7cb77f,UID:950bd450-678a-44f7-9fb5-5c37e1f9e912,ResourceVersion:3050313,Generation:3,CreationTimestamp:2020-04-01 14:26:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 7655c239-e668-4576-9249-eb31e740bf00 0xc002b36d37 0xc002b36d38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Apr 1 14:26:40.432: INFO: All old ReplicaSets of Deployment "nginx-deployment": Apr 1 14:26:40.433: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-7960,SelfLink:/apis/apps/v1/namespaces/deployment-7960/replicasets/nginx-deployment-7b8c6f4498,UID:4b9b107b-95ad-4ddc-8dae-4d0834b60670,ResourceVersion:3050304,Generation:3,CreationTimestamp:2020-04-01 14:26:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 7655c239-e668-4576-9249-eb31e740bf00 0xc002b36e07 0xc002b36e08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Apr 1 14:26:40.590: INFO: Pod "nginx-deployment-55fb7cb77f-4hwvr" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-4hwvr,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7960,SelfLink:/api/v1/namespaces/deployment-7960/pods/nginx-deployment-55fb7cb77f-4hwvr,UID:076d69cd-dca0-49d1-b111-5cfddaa19090,ResourceVersion:3050243,Generation:0,CreationTimestamp:2020-04-01 14:26:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 950bd450-678a-44f7-9fb5-5c37e1f9e912 0xc002530ac7 0xc002530ac8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-f5f4f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-f5f4f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-f5f4f true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc002530b40} {node.kubernetes.io/unreachable Exists NoExecute 0xc002530b60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:26:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:26:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:26:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:26:38 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-04-01 14:26:38 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 1 14:26:40.590: INFO: Pod "nginx-deployment-55fb7cb77f-5ftxb" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-5ftxb,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7960,SelfLink:/api/v1/namespaces/deployment-7960/pods/nginx-deployment-55fb7cb77f-5ftxb,UID:968184b0-b0da-4a2c-97db-cc569fab2d6a,ResourceVersion:3050235,Generation:0,CreationTimestamp:2020-04-01 14:26:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 950bd450-678a-44f7-9fb5-5c37e1f9e912 0xc002530c30 0xc002530c31}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-f5f4f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-f5f4f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-f5f4f true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002530cb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002530cd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:26:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:26:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:26:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:26:37 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-04-01 14:26:37 +0000 UTC,ContainerStatuses:[{nginx 
{ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 1 14:26:40.590: INFO: Pod "nginx-deployment-55fb7cb77f-86666" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-86666,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7960,SelfLink:/api/v1/namespaces/deployment-7960/pods/nginx-deployment-55fb7cb77f-86666,UID:920eecb4-1e69-4f39-a58d-97db59ec9773,ResourceVersion:3050295,Generation:0,CreationTimestamp:2020-04-01 14:26:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 950bd450-678a-44f7-9fb5-5c37e1f9e912 0xc002530da0 0xc002530da1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-f5f4f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-f5f4f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-f5f4f true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002530e20} {node.kubernetes.io/unreachable Exists NoExecute 0xc002530e40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:26:40 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 1 14:26:40.590: INFO: Pod "nginx-deployment-55fb7cb77f-8gqmc" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-8gqmc,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7960,SelfLink:/api/v1/namespaces/deployment-7960/pods/nginx-deployment-55fb7cb77f-8gqmc,UID:4946192f-7572-444f-bbef-1b6f9da00c74,ResourceVersion:3050305,Generation:0,CreationTimestamp:2020-04-01 14:26:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 950bd450-678a-44f7-9fb5-5c37e1f9e912 0xc002530ec7 0xc002530ec8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-f5f4f {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-f5f4f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-f5f4f true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002530f40} {node.kubernetes.io/unreachable Exists NoExecute 0xc002530f60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:26:40 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 1 14:26:40.591: INFO: Pod "nginx-deployment-55fb7cb77f-8prnw" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-8prnw,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7960,SelfLink:/api/v1/namespaces/deployment-7960/pods/nginx-deployment-55fb7cb77f-8prnw,UID:4699b565-09cf-412c-8ee8-67b858a83eb3,ResourceVersion:3050228,Generation:0,CreationTimestamp:2020-04-01 14:26:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 950bd450-678a-44f7-9fb5-5c37e1f9e912 0xc002530fe7 0xc002530fe8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-f5f4f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-f5f4f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-f5f4f true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc002531060} {node.kubernetes.io/unreachable Exists NoExecute 0xc002531080}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:26:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:26:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:26:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:26:37 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-04-01 14:26:37 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 1 14:26:40.591: INFO: Pod "nginx-deployment-55fb7cb77f-9n86s" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-9n86s,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7960,SelfLink:/api/v1/namespaces/deployment-7960/pods/nginx-deployment-55fb7cb77f-9n86s,UID:196b3ad3-181e-4c81-9df8-6c43ca752854,ResourceVersion:3050302,Generation:0,CreationTimestamp:2020-04-01 14:26:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 950bd450-678a-44f7-9fb5-5c37e1f9e912 0xc002531150 0xc002531151}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-f5f4f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-f5f4f,Items:[],DefaultMode:*420,Optional:nil,} nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-f5f4f true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025311d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025311f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:26:40 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 1 14:26:40.591: INFO: Pod "nginx-deployment-55fb7cb77f-9nc5k" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-9nc5k,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7960,SelfLink:/api/v1/namespaces/deployment-7960/pods/nginx-deployment-55fb7cb77f-9nc5k,UID:85f143b2-22b7-46e1-95e5-75e3809ac15e,ResourceVersion:3050317,Generation:0,CreationTimestamp:2020-04-01 14:26:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 950bd450-678a-44f7-9fb5-5c37e1f9e912 0xc002531277 0xc002531278}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-f5f4f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-f5f4f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-f5f4f true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc0025312f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002531310}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:26:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:26:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:26:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:26:40 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-04-01 14:26:40 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 1 14:26:40.591: INFO: Pod "nginx-deployment-55fb7cb77f-dlmf5" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-dlmf5,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7960,SelfLink:/api/v1/namespaces/deployment-7960/pods/nginx-deployment-55fb7cb77f-dlmf5,UID:bcd9c0b6-9f72-44f5-b645-a3bf5d71dbe1,ResourceVersion:3050241,Generation:0,CreationTimestamp:2020-04-01 14:26:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 950bd450-678a-44f7-9fb5-5c37e1f9e912 0xc0025313e0 0xc0025313e1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-f5f4f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-f5f4f,Items:[],DefaultMode:*420,Optional:nil,} nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-f5f4f true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002531460} {node.kubernetes.io/unreachable Exists NoExecute 0xc002531480}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:26:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:26:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:26:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:26:38 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-04-01 14:26:38 +0000 UTC,ContainerStatuses:[{nginx 
{ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 1 14:26:40.591: INFO: Pod "nginx-deployment-55fb7cb77f-j2h4d" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-j2h4d,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7960,SelfLink:/api/v1/namespaces/deployment-7960/pods/nginx-deployment-55fb7cb77f-j2h4d,UID:6e9f3e6e-b06d-402d-bec8-df2dac6fd39b,ResourceVersion:3050287,Generation:0,CreationTimestamp:2020-04-01 14:26:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 950bd450-678a-44f7-9fb5-5c37e1f9e912 0xc002531550 0xc002531551}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-f5f4f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-f5f4f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-f5f4f true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025315d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025315f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:26:40 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 1 14:26:40.591: INFO: Pod "nginx-deployment-55fb7cb77f-j5jq7" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-j5jq7,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7960,SelfLink:/api/v1/namespaces/deployment-7960/pods/nginx-deployment-55fb7cb77f-j5jq7,UID:62a2f39f-4398-4456-9ea1-d2fdc426cf96,ResourceVersion:3050286,Generation:0,CreationTimestamp:2020-04-01 14:26:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 950bd450-678a-44f7-9fb5-5c37e1f9e912 0xc002531677 0xc002531678}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-f5f4f {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-f5f4f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-f5f4f true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025316f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002531710}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:26:40 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 1 14:26:40.591: INFO: Pod "nginx-deployment-55fb7cb77f-llt75" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-llt75,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7960,SelfLink:/api/v1/namespaces/deployment-7960/pods/nginx-deployment-55fb7cb77f-llt75,UID:327f8f60-4753-45b6-b0d4-b384f6bc81a1,ResourceVersion:3050221,Generation:0,CreationTimestamp:2020-04-01 14:26:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 950bd450-678a-44f7-9fb5-5c37e1f9e912 0xc002531797 0xc002531798}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-f5f4f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-f5f4f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-f5f4f true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc002531810} {node.kubernetes.io/unreachable Exists NoExecute 0xc002531830}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:26:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:26:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:26:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:26:37 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-04-01 14:26:37 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 1 14:26:40.591: INFO: Pod "nginx-deployment-55fb7cb77f-tjm7s" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-tjm7s,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7960,SelfLink:/api/v1/namespaces/deployment-7960/pods/nginx-deployment-55fb7cb77f-tjm7s,UID:8ed4c927-6983-4d86-863e-333aa8b8351d,ResourceVersion:3050300,Generation:0,CreationTimestamp:2020-04-01 14:26:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 950bd450-678a-44f7-9fb5-5c37e1f9e912 0xc002531900 0xc002531901}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-f5f4f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-f5f4f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-f5f4f true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002531980} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025319a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:26:40 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 1 14:26:40.592: INFO: Pod "nginx-deployment-55fb7cb77f-tnbbv" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-tnbbv,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-7960,SelfLink:/api/v1/namespaces/deployment-7960/pods/nginx-deployment-55fb7cb77f-tnbbv,UID:bf954642-b6d2-43df-954d-793d33360acb,ResourceVersion:3050299,Generation:0,CreationTimestamp:2020-04-01 14:26:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 950bd450-678a-44f7-9fb5-5c37e1f9e912 0xc002531a27 0xc002531a28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-f5f4f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-f5f4f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-f5f4f true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc002531aa0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002531ac0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:26:40 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 1 14:26:40.592: INFO: Pod "nginx-deployment-7b8c6f4498-4fr7s" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-4fr7s,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7960,SelfLink:/api/v1/namespaces/deployment-7960/pods/nginx-deployment-7b8c6f4498-4fr7s,UID:e367738d-cbb0-4b32-8413-0c5fba56de4c,ResourceVersion:3050290,Generation:0,CreationTimestamp:2020-04-01 14:26:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 4b9b107b-95ad-4ddc-8dae-4d0834b60670 0xc002531b47 0xc002531b48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-f5f4f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-f5f4f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-f5f4f true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} 
false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002531bc0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002531be0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:26:40 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 1 14:26:40.592: INFO: Pod "nginx-deployment-7b8c6f4498-9bf9z" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-9bf9z,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7960,SelfLink:/api/v1/namespaces/deployment-7960/pods/nginx-deployment-7b8c6f4498-9bf9z,UID:d054e8ed-f6d6-45e4-a980-7d0bbb1a5204,ResourceVersion:3050174,Generation:0,CreationTimestamp:2020-04-01 14:26:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 4b9b107b-95ad-4ddc-8dae-4d0834b60670 0xc002531c67 
0xc002531c68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-f5f4f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-f5f4f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-f5f4f true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002531ce0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002531d00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:26:28 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:26:36 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:26:36 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:26:27 +0000 
UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.17,StartTime:2020-04-01 14:26:28 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-04-01 14:26:36 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://c3bb77bccfb722483ef2a5185aa4b58a2670813960cab6ebda4bc09b00811f9f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 1 14:26:40.592: INFO: Pod "nginx-deployment-7b8c6f4498-cj2lv" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-cj2lv,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7960,SelfLink:/api/v1/namespaces/deployment-7960/pods/nginx-deployment-7b8c6f4498-cj2lv,UID:bb770718-01e8-4198-9534-a1885abbecc4,ResourceVersion:3050308,Generation:0,CreationTimestamp:2020-04-01 14:26:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 4b9b107b-95ad-4ddc-8dae-4d0834b60670 0xc002531dd7 0xc002531dd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-f5f4f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-f5f4f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-f5f4f true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002531e50} {node.kubernetes.io/unreachable Exists NoExecute 0xc002531e70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:26:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:26:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:26:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:26:40 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-04-01 14:26:40 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 1 14:26:40.592: INFO: Pod "nginx-deployment-7b8c6f4498-df2cn" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-df2cn,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7960,SelfLink:/api/v1/namespaces/deployment-7960/pods/nginx-deployment-7b8c6f4498-df2cn,UID:323d5401-fc60-4740-9a93-37e5b7ec6daa,ResourceVersion:3050160,Generation:0,CreationTimestamp:2020-04-01 14:26:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 4b9b107b-95ad-4ddc-8dae-4d0834b60670 0xc002531f37 0xc002531f38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-f5f4f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-f5f4f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-f5f4f true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002531fb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002531fd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:26:27 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:26:35 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:26:35 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:26:27 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.48,StartTime:2020-04-01 14:26:27 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-04-01 14:26:35 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://97774b9a5c8ffb9946b38934e0fb56b8f2047ed0e0ad35e61931657fa55e86b3}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 1 14:26:40.592: INFO: Pod "nginx-deployment-7b8c6f4498-frx6b" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-frx6b,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7960,SelfLink:/api/v1/namespaces/deployment-7960/pods/nginx-deployment-7b8c6f4498-frx6b,UID:417f1a21-b4c7-43bd-8b92-b4ef4a1f0559,ResourceVersion:3050303,Generation:0,CreationTimestamp:2020-04-01 14:26:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 4b9b107b-95ad-4ddc-8dae-4d0834b60670 0xc00306c0a7 0xc00306c0a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-f5f4f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-f5f4f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-f5f4f true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00306c120} {node.kubernetes.io/unreachable Exists NoExecute 0xc00306c140}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:26:40 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 1 14:26:40.593: INFO: Pod "nginx-deployment-7b8c6f4498-jsnqj" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-jsnqj,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7960,SelfLink:/api/v1/namespaces/deployment-7960/pods/nginx-deployment-7b8c6f4498-jsnqj,UID:8c07df14-3194-4821-a8bf-23ef6a00ab8d,ResourceVersion:3050321,Generation:0,CreationTimestamp:2020-04-01 14:26:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 4b9b107b-95ad-4ddc-8dae-4d0834b60670 0xc00306c1c7 0xc00306c1c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-f5f4f 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-f5f4f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-f5f4f true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00306c240} {node.kubernetes.io/unreachable Exists NoExecute 0xc00306c260}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:26:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:26:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:26:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:26:40 +0000 UTC 
}],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-04-01 14:26:40 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 1 14:26:40.593: INFO: Pod "nginx-deployment-7b8c6f4498-kw8z9" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-kw8z9,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7960,SelfLink:/api/v1/namespaces/deployment-7960/pods/nginx-deployment-7b8c6f4498-kw8z9,UID:a0f94fbf-5d73-413a-b0f0-58c0f9509d66,ResourceVersion:3050298,Generation:0,CreationTimestamp:2020-04-01 14:26:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 4b9b107b-95ad-4ddc-8dae-4d0834b60670 0xc00306c327 0xc00306c328}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-f5f4f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-f5f4f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-f5f4f true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00306c3a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00306c3c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:26:40 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 1 14:26:40.593: INFO: Pod "nginx-deployment-7b8c6f4498-lpwkk" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-lpwkk,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7960,SelfLink:/api/v1/namespaces/deployment-7960/pods/nginx-deployment-7b8c6f4498-lpwkk,UID:dbb02a61-7598-4abc-841b-32e62936f0b3,ResourceVersion:3050183,Generation:0,CreationTimestamp:2020-04-01 14:26:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 4b9b107b-95ad-4ddc-8dae-4d0834b60670 0xc00306c457 0xc00306c458}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-f5f4f {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-f5f4f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-f5f4f true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00306c4d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00306c4f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:26:28 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:26:37 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:26:37 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:26:27 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.49,StartTime:2020-04-01 14:26:28 +0000 UTC,ContainerStatuses:[{nginx 
{nil ContainerStateRunning{StartedAt:2020-04-01 14:26:36 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://b3088a3cc351b5c13103373a099cff5a20640ca20b2218de685dc3cb6039c4e3}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 1 14:26:40.593: INFO: Pod "nginx-deployment-7b8c6f4498-lz66f" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-lz66f,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7960,SelfLink:/api/v1/namespaces/deployment-7960/pods/nginx-deployment-7b8c6f4498-lz66f,UID:8d3cc943-f333-4d91-a3b2-2d5539f34676,ResourceVersion:3050310,Generation:0,CreationTimestamp:2020-04-01 14:26:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 4b9b107b-95ad-4ddc-8dae-4d0834b60670 0xc00306c5c7 0xc00306c5c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-f5f4f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-f5f4f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-f5f4f true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00306c640} {node.kubernetes.io/unreachable Exists NoExecute 0xc00306c660}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:26:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:26:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:26:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:26:40 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-04-01 14:26:40 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 1 14:26:40.593: INFO: Pod "nginx-deployment-7b8c6f4498-mddbm" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-mddbm,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7960,SelfLink:/api/v1/namespaces/deployment-7960/pods/nginx-deployment-7b8c6f4498-mddbm,UID:c955d31a-f358-45e1-a09e-15eac5090dcf,ResourceVersion:3050291,Generation:0,CreationTimestamp:2020-04-01 14:26:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 4b9b107b-95ad-4ddc-8dae-4d0834b60670 0xc00306cae7 0xc00306cae8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-f5f4f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-f5f4f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-f5f4f true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00306cb70} {node.kubernetes.io/unreachable Exists NoExecute 0xc00306cb90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:26:40 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 1 14:26:40.594: INFO: Pod "nginx-deployment-7b8c6f4498-mpxg5" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-mpxg5,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7960,SelfLink:/api/v1/namespaces/deployment-7960/pods/nginx-deployment-7b8c6f4498-mpxg5,UID:8b4721c8-807d-4032-9f23-38a0d2853ba7,ResourceVersion:3050297,Generation:0,CreationTimestamp:2020-04-01 14:26:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 4b9b107b-95ad-4ddc-8dae-4d0834b60670 0xc00306cc17 0xc00306cc18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-f5f4f 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-f5f4f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-f5f4f true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00306cc90} {node.kubernetes.io/unreachable Exists NoExecute 0xc00306ccb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:26:40 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 1 14:26:40.594: INFO: Pod "nginx-deployment-7b8c6f4498-n28fz" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-n28fz,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7960,SelfLink:/api/v1/namespaces/deployment-7960/pods/nginx-deployment-7b8c6f4498-n28fz,UID:0344c67e-8279-4ad1-bafb-ea864becf8e6,ResourceVersion:3050294,Generation:0,CreationTimestamp:2020-04-01 14:26:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 4b9b107b-95ad-4ddc-8dae-4d0834b60670 0xc00306cd37 0xc00306cd38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-f5f4f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-f5f4f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-f5f4f true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00306cdb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00306cdd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:26:40 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 1 14:26:40.594: INFO: Pod "nginx-deployment-7b8c6f4498-ndrdl" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-ndrdl,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7960,SelfLink:/api/v1/namespaces/deployment-7960/pods/nginx-deployment-7b8c6f4498-ndrdl,UID:0a31e1c9-adb4-474c-bd98-120d4f5a66c7,ResourceVersion:3050301,Generation:0,CreationTimestamp:2020-04-01 14:26:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 4b9b107b-95ad-4ddc-8dae-4d0834b60670 0xc00306ce57 0xc00306ce58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-f5f4f 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-f5f4f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-f5f4f true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00306ced0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00306cef0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:26:40 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 1 14:26:40.594: INFO: Pod "nginx-deployment-7b8c6f4498-q67sx" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-q67sx,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7960,SelfLink:/api/v1/namespaces/deployment-7960/pods/nginx-deployment-7b8c6f4498-q67sx,UID:575888ef-6096-420a-9e37-68a50ba5e0a3,ResourceVersion:3050289,Generation:0,CreationTimestamp:2020-04-01 14:26:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 4b9b107b-95ad-4ddc-8dae-4d0834b60670 0xc00306cf77 0xc00306cf78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-f5f4f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-f5f4f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-f5f4f true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00306cff0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00306d010}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:26:40 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 1 14:26:40.594: INFO: Pod "nginx-deployment-7b8c6f4498-qp6qp" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-qp6qp,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7960,SelfLink:/api/v1/namespaces/deployment-7960/pods/nginx-deployment-7b8c6f4498-qp6qp,UID:451195b4-4ecd-4329-ac19-05b857cac8ca,ResourceVersion:3050156,Generation:0,CreationTimestamp:2020-04-01 14:26:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 4b9b107b-95ad-4ddc-8dae-4d0834b60670 0xc00306d0a7 0xc00306d0a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-f5f4f {nil nil 
nil nil nil SecretVolumeSource{SecretName:default-token-f5f4f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-f5f4f true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00306d120} {node.kubernetes.io/unreachable Exists NoExecute 0xc00306d140}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:26:27 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:26:35 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:26:35 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:26:27 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.47,StartTime:2020-04-01 14:26:27 +0000 UTC,ContainerStatuses:[{nginx {nil 
ContainerStateRunning{StartedAt:2020-04-01 14:26:35 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://679dbb74133827aa936b2c819ce2ce45919637cdb4cffa23ca7aac77cd3005b9}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 1 14:26:40.594: INFO: Pod "nginx-deployment-7b8c6f4498-rdm5g" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-rdm5g,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7960,SelfLink:/api/v1/namespaces/deployment-7960/pods/nginx-deployment-7b8c6f4498-rdm5g,UID:8d7ddbca-f3f7-4ee2-b5c9-224baacc8e5b,ResourceVersion:3050163,Generation:0,CreationTimestamp:2020-04-01 14:26:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 4b9b107b-95ad-4ddc-8dae-4d0834b60670 0xc00306d217 0xc00306d218}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-f5f4f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-f5f4f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-f5f4f true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00306d290} {node.kubernetes.io/unreachable Exists NoExecute 0xc00306d2b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:26:27 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:26:35 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:26:35 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:26:27 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.16,StartTime:2020-04-01 14:26:27 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-04-01 14:26:35 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://627faf6b2ff462c9bae5495f4aca10ca49f84f7434c536e23a45916b99243aff}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 1 14:26:40.594: INFO: Pod "nginx-deployment-7b8c6f4498-s4fjh" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-s4fjh,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7960,SelfLink:/api/v1/namespaces/deployment-7960/pods/nginx-deployment-7b8c6f4498-s4fjh,UID:4ddef7d1-d8d3-465e-8263-0fa9cb261725,ResourceVersion:3050192,Generation:0,CreationTimestamp:2020-04-01 14:26:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 4b9b107b-95ad-4ddc-8dae-4d0834b60670 0xc00306d387 0xc00306d388}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-f5f4f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-f5f4f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-f5f4f true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00306d400} {node.kubernetes.io/unreachable Exists NoExecute 0xc00306d420}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:26:28 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:26:37 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:26:37 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:26:27 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.18,StartTime:2020-04-01 14:26:28 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-04-01 14:26:36 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://f92785d0f758a722e48ebfababfcf24c347a80712793194eedbb081152b83505}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 1 14:26:40.595: INFO: Pod "nginx-deployment-7b8c6f4498-xcq6v" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-xcq6v,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7960,SelfLink:/api/v1/namespaces/deployment-7960/pods/nginx-deployment-7b8c6f4498-xcq6v,UID:8ad98793-92c9-446b-b803-1997417eb438,ResourceVersion:3050288,Generation:0,CreationTimestamp:2020-04-01 14:26:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 4b9b107b-95ad-4ddc-8dae-4d0834b60670 0xc00306d4f7 0xc00306d4f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-f5f4f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-f5f4f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-f5f4f true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00306d570} {node.kubernetes.io/unreachable Exists NoExecute 0xc00306d590}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:26:40 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 1 14:26:40.595: INFO: Pod "nginx-deployment-7b8c6f4498-xj2nc" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-xj2nc,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7960,SelfLink:/api/v1/namespaces/deployment-7960/pods/nginx-deployment-7b8c6f4498-xj2nc,UID:aa40dbdf-1bc8-4be5-9db5-800ada87829d,ResourceVersion:3050136,Generation:0,CreationTimestamp:2020-04-01 14:26:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 4b9b107b-95ad-4ddc-8dae-4d0834b60670 0xc00306d617 0xc00306d618}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-f5f4f {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-f5f4f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-f5f4f true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00306d690} {node.kubernetes.io/unreachable Exists NoExecute 0xc00306d6b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:26:27 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:26:33 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:26:33 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:26:27 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.46,StartTime:2020-04-01 14:26:27 +0000 UTC,ContainerStatuses:[{nginx 
{nil ContainerStateRunning{StartedAt:2020-04-01 14:26:31 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://6ad4454e6c30730c7f7781f409d9aab0cbe8511bf1338ad852614fbbf934b802}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 1 14:26:40.595: INFO: Pod "nginx-deployment-7b8c6f4498-zcjs7" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-zcjs7,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-7960,SelfLink:/api/v1/namespaces/deployment-7960/pods/nginx-deployment-7b8c6f4498-zcjs7,UID:98b8079d-df2f-4005-a1b4-85e3e720b342,ResourceVersion:3050146,Generation:0,CreationTimestamp:2020-04-01 14:26:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 4b9b107b-95ad-4ddc-8dae-4d0834b60670 0xc00306d787 0xc00306d788}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-f5f4f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-f5f4f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-f5f4f true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00306d800} {node.kubernetes.io/unreachable Exists NoExecute 0xc00306d820}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:26:27 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:26:34 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:26:34 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:26:27 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.15,StartTime:2020-04-01 14:26:27 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-04-01 14:26:33 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://504eee267783166710efa57246b4c128e93da7923f555a5716906874ae0ccb19}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 14:26:40.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"deployment-7960" for this suite. Apr 1 14:26:56.917: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 14:26:56.994: INFO: namespace deployment-7960 deletion completed in 16.306343711s • [SLOW TEST:29.335 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 14:26:56.994: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 1 14:26:57.228: INFO: Waiting up to 5m0s for pod "downwardapi-volume-87f8d800-0f51-4d90-bc92-484bb2fea23c" in namespace "downward-api-5686" to be "success or failure" Apr 1 14:26:57.231: INFO: Pod "downwardapi-volume-87f8d800-0f51-4d90-bc92-484bb2fea23c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.973904ms Apr 1 14:26:59.322: INFO: Pod "downwardapi-volume-87f8d800-0f51-4d90-bc92-484bb2fea23c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.093616312s Apr 1 14:27:01.325: INFO: Pod "downwardapi-volume-87f8d800-0f51-4d90-bc92-484bb2fea23c": Phase="Running", Reason="", readiness=true. Elapsed: 4.097088688s Apr 1 14:27:03.369: INFO: Pod "downwardapi-volume-87f8d800-0f51-4d90-bc92-484bb2fea23c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.141101898s STEP: Saw pod success Apr 1 14:27:03.369: INFO: Pod "downwardapi-volume-87f8d800-0f51-4d90-bc92-484bb2fea23c" satisfied condition "success or failure" Apr 1 14:27:03.372: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-87f8d800-0f51-4d90-bc92-484bb2fea23c container client-container: STEP: delete the pod Apr 1 14:27:03.388: INFO: Waiting for pod downwardapi-volume-87f8d800-0f51-4d90-bc92-484bb2fea23c to disappear Apr 1 14:27:03.407: INFO: Pod downwardapi-volume-87f8d800-0f51-4d90-bc92-484bb2fea23c no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 14:27:03.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5686" for this suite. 
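
[Editor's note] The Downward API test above creates a pod whose volume item carries an explicit per-item file mode (the [LinuxOnly] part of the check). A minimal manifest sketch of that setup follows; the pod/volume names, image, and the 0400 mode are illustrative assumptions, not values taken from the test's actual spec:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-mode-demo        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container           # matches the container name in the log
    image: docker.io/library/busybox:1.29   # illustrative image
    command: ["ls", "-l", "/etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        mode: 0400                   # per-item mode the test verifies on the file
        fieldRef:
          fieldPath: metadata.name
```
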
Apr 1 14:27:09.428: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 14:27:09.507: INFO: namespace downward-api-5686 deletion completed in 6.096605445s • [SLOW TEST:12.513 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 14:27:09.508: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 1 14:27:09.561: INFO: Creating ReplicaSet my-hostname-basic-05fef61d-d963-4bd4-8c74-5ec7de41c4c5 Apr 1 14:27:09.586: INFO: Pod name my-hostname-basic-05fef61d-d963-4bd4-8c74-5ec7de41c4c5: Found 0 pods out of 1 Apr 1 14:27:14.591: INFO: Pod name my-hostname-basic-05fef61d-d963-4bd4-8c74-5ec7de41c4c5: Found 1 pods out of 1 Apr 1 14:27:14.591: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-05fef61d-d963-4bd4-8c74-5ec7de41c4c5" is running Apr 1 14:27:14.594: INFO: Pod "my-hostname-basic-05fef61d-d963-4bd4-8c74-5ec7de41c4c5-kkx87" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 
+0000 UTC LastTransitionTime:2020-04-01 14:27:09 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-01 14:27:12 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-01 14:27:12 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-01 14:27:09 +0000 UTC Reason: Message:}]) Apr 1 14:27:14.594: INFO: Trying to dial the pod Apr 1 14:27:19.607: INFO: Controller my-hostname-basic-05fef61d-d963-4bd4-8c74-5ec7de41c4c5: Got expected result from replica 1 [my-hostname-basic-05fef61d-d963-4bd4-8c74-5ec7de41c4c5-kkx87]: "my-hostname-basic-05fef61d-d963-4bd4-8c74-5ec7de41c4c5-kkx87", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 14:27:19.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-360" for this suite. 
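
[Editor's note] The ReplicaSet test above dials each replica and expects the pod's own name back, i.e. a serve-hostname style image. A sketch of the kind of manifest it exercises; the name (the real test appends a UUID) and the exact image tag are assumptions:

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic            # real test uses my-hostname-basic-<uuid>
spec:
  replicas: 1
  selector:
    matchLabels:
      name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1   # assumed tag
        ports:
        - containerPort: 9376        # serve-hostname's default port (assumption)
```
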
Apr 1 14:27:25.629: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 14:27:25.707: INFO: namespace replicaset-360 deletion completed in 6.095837288s • [SLOW TEST:16.200 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 14:27:25.708: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 1 14:27:25.753: INFO: Creating deployment "test-recreate-deployment" Apr 1 14:27:25.764: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Apr 1 14:27:25.837: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Apr 1 14:27:27.845: INFO: Waiting deployment "test-recreate-deployment" to complete Apr 1 14:27:27.909: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721348045, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721348045, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721348045, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721348045, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 1 14:27:29.913: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Apr 1 14:27:29.920: INFO: Updating deployment test-recreate-deployment Apr 1 14:27:29.920: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Apr 1 14:27:30.148: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-5262,SelfLink:/apis/apps/v1/namespaces/deployment-5262/deployments/test-recreate-deployment,UID:83701754-f60c-4c01-81f1-72273846b544,ResourceVersion:3050745,Generation:2,CreationTimestamp:2020-04-01 14:27:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 
2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-04-01 14:27:30 +0000 UTC 2020-04-01 14:27:30 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-04-01 14:27:30 +0000 UTC 2020-04-01 14:27:25 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} Apr 1 14:27:30.152: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-5262,SelfLink:/apis/apps/v1/namespaces/deployment-5262/replicasets/test-recreate-deployment-5c8c9cc69d,UID:027f94b0-30c3-439c-81fa-922c3b6ff657,ResourceVersion:3050743,Generation:1,CreationTimestamp:2020-04-01 14:27:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 
5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 83701754-f60c-4c01-81f1-72273846b544 0xc003345d97 0xc003345d98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Apr 1 14:27:30.152: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Apr 1 14:27:30.152: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-5262,SelfLink:/apis/apps/v1/namespaces/deployment-5262/replicasets/test-recreate-deployment-6df85df6b9,UID:832f1613-dce5-4835-966b-1c57dbf515ab,ResourceVersion:3050733,Generation:2,CreationTimestamp:2020-04-01 14:27:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 83701754-f60c-4c01-81f1-72273846b544 0xc003345e67 0xc003345e68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: 
sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Apr 1 14:27:30.155: INFO: Pod "test-recreate-deployment-5c8c9cc69d-wwc82" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-wwc82,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-5262,SelfLink:/api/v1/namespaces/deployment-5262/pods/test-recreate-deployment-5c8c9cc69d-wwc82,UID:72a4c02e-a4fc-4b80-86e0-26063c08fe06,ResourceVersion:3050744,Generation:0,CreationTimestamp:2020-04-01 14:27:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d 027f94b0-30c3-439c-81fa-922c3b6ff657 0xc002ca8777 0xc002ca8778}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-f74tc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-f74tc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-f74tc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ca87f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ca8810}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:27:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:27:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:27:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:27:30 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-04-01 14:27:30 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 14:27:30.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5262" for this suite. 
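
[Editor's note] The object dumps above show the shape of the deployment under test: `Strategy.Type: Recreate`, label `name: sample-pod-3`, initially the redis image, then rolled out to nginx. A condensed sketch of the initial manifest, reconstructed from those dumps (field ordering and omitted defaults are editorial):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-recreate-deployment
spec:
  replicas: 1
  strategy:
    type: Recreate                   # old pods are fully terminated before new ones start
  selector:
    matchLabels:
      name: sample-pod-3
  template:
    metadata:
      labels:
        name: sample-pod-3
    spec:
      terminationGracePeriodSeconds: 0
      containers:
      - name: redis                  # rollout later swaps this for nginx:1.14-alpine
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
```
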
Apr 1 14:27:36.344: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 14:27:36.434: INFO: namespace deployment-5262 deletion completed in 6.275777842s • [SLOW TEST:10.726 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 14:27:36.434: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override command Apr 1 14:27:36.502: INFO: Waiting up to 5m0s for pod "client-containers-e120e17b-1ba5-4594-869b-47ba887eb6c4" in namespace "containers-2436" to be "success or failure" Apr 1 14:27:36.506: INFO: Pod "client-containers-e120e17b-1ba5-4594-869b-47ba887eb6c4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.942507ms Apr 1 14:27:38.510: INFO: Pod "client-containers-e120e17b-1ba5-4594-869b-47ba887eb6c4": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.00789218s Apr 1 14:27:40.519: INFO: Pod "client-containers-e120e17b-1ba5-4594-869b-47ba887eb6c4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016604783s STEP: Saw pod success Apr 1 14:27:40.519: INFO: Pod "client-containers-e120e17b-1ba5-4594-869b-47ba887eb6c4" satisfied condition "success or failure" Apr 1 14:27:40.522: INFO: Trying to get logs from node iruya-worker2 pod client-containers-e120e17b-1ba5-4594-869b-47ba887eb6c4 container test-container: STEP: delete the pod Apr 1 14:27:40.538: INFO: Waiting for pod client-containers-e120e17b-1ba5-4594-869b-47ba887eb6c4 to disappear Apr 1 14:27:40.542: INFO: Pod client-containers-e120e17b-1ba5-4594-869b-47ba887eb6c4 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 14:27:40.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-2436" for this suite. Apr 1 14:27:46.612: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 14:27:46.687: INFO: namespace containers-2436 deletion completed in 6.142448945s • [SLOW TEST:10.253 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: 
Creating a kubernetes client Apr 1 14:27:46.688: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 14:27:50.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-5809" for this suite. Apr 1 14:27:56.912: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 14:27:57.016: INFO: namespace emptydir-wrapper-5809 deletion completed in 6.136561131s • [SLOW TEST:10.328 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 14:27:57.016: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token STEP: reading a file in the container Apr 1 14:28:01.603: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-32 pod-service-account-e471b8c4-bdbd-4171-8684-bef4700a880d -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Apr 1 14:28:04.174: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-32 pod-service-account-e471b8c4-bdbd-4171-8684-bef4700a880d -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Apr 1 14:28:04.380: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-32 pod-service-account-e471b8c4-bdbd-4171-8684-bef4700a880d -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 14:28:04.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-32" for this suite. 
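
[Editor's note] The three `kubectl exec ... cat` calls above read the files that the service-account admission flow projects into every pod by default. A pod sketch making that mount explicit; the pod name is hypothetical and `automountServiceAccountToken: true` is the default, shown only for clarity:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-service-account-demo     # hypothetical name
spec:
  serviceAccountName: default
  automountServiceAccountToken: true # default behaviour, stated explicitly
  containers:
  - name: test                       # container name matches the -c=test in the log
    image: docker.io/library/nginx:1.14-alpine
    # token, ca.crt and namespace appear under
    # /var/run/secrets/kubernetes.io/serviceaccount, which is what the
    # kubectl exec calls in the log read with `cat`.
```
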
Apr 1 14:28:10.595: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 14:28:10.673: INFO: namespace svcaccounts-32 deletion completed in 6.091367905s • [SLOW TEST:13.657 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 14:28:10.673: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Apr 1 14:28:10.717: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 14:28:15.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-2932" for this suite. 
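
[Editor's note] The init-container test above relies on the rule that with `restartPolicy: Never`, a failing init container fails the whole pod and the app containers never start. A minimal sketch of such a pod; names, images, and the failing command are illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: init-fail-demo               # hypothetical name
spec:
  restartPolicy: Never
  initContainers:
  - name: init-fails
    image: docker.io/library/busybox:1.29   # illustrative image
    command: ["/bin/false"]          # init container exits non-zero
  containers:
  - name: app
    image: docker.io/library/nginx:1.14-alpine
    # never started: the failed init container moves the pod to Failed
```
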
Apr 1 14:28:21.843: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 14:28:21.924: INFO: namespace init-container-2932 deletion completed in 6.098263932s • [SLOW TEST:11.250 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 14:28:21.925: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-4a0a8e6d-0612-4dc5-8f8e-cec2b361d0ae STEP: Creating a pod to test consume configMaps Apr 1 14:28:22.031: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-98e10466-04f8-4e6b-82c6-94057907532c" in namespace "projected-3604" to be "success or failure" Apr 1 14:28:22.047: INFO: Pod "pod-projected-configmaps-98e10466-04f8-4e6b-82c6-94057907532c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 16.651164ms Apr 1 14:28:24.052: INFO: Pod "pod-projected-configmaps-98e10466-04f8-4e6b-82c6-94057907532c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02095413s Apr 1 14:28:26.056: INFO: Pod "pod-projected-configmaps-98e10466-04f8-4e6b-82c6-94057907532c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025383563s STEP: Saw pod success Apr 1 14:28:26.056: INFO: Pod "pod-projected-configmaps-98e10466-04f8-4e6b-82c6-94057907532c" satisfied condition "success or failure" Apr 1 14:28:26.059: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-98e10466-04f8-4e6b-82c6-94057907532c container projected-configmap-volume-test: STEP: delete the pod Apr 1 14:28:26.079: INFO: Waiting for pod pod-projected-configmaps-98e10466-04f8-4e6b-82c6-94057907532c to disappear Apr 1 14:28:26.082: INFO: Pod pod-projected-configmaps-98e10466-04f8-4e6b-82c6-94057907532c no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 14:28:26.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3604" for this suite. 
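The "Waiting up to 5m0s for pod ... to be 'success or failure'" lines above come from the framework polling the pod phase every couple of seconds until it is terminal. A minimal Python sketch of that wait loop (the function name and injectable `sleep` parameter are illustrative assumptions, not the framework's actual Go API):

```python
import time

def wait_for_pod_phase(get_phase, timeout_s=300.0, poll_s=2.0, sleep=time.sleep):
    """Poll get_phase() until the pod reaches a terminal phase
    ("Succeeded" or "Failed"), mirroring the 'success or failure'
    wait in the log. Returns the final phase, or raises
    TimeoutError after timeout_s seconds."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        sleep(poll_s)
    raise TimeoutError("pod did not reach a terminal phase")

# Simulated phase sequence matching the log: Pending, Pending, Succeeded.
phases = iter(["Pending", "Pending", "Succeeded"])
print(wait_for_pod_phase(lambda: next(phases), sleep=lambda s: None))  # Succeeded
```

The log's ~2 s spacing between "Pending" and "Succeeded" records corresponds to the poll interval here.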
Apr 1 14:28:32.122: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 14:28:32.229: INFO: namespace projected-3604 deletion completed in 6.14059035s • [SLOW TEST:10.304 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 14:28:32.230: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the initial replication controller Apr 1 14:28:32.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7954' Apr 1 14:28:32.542: INFO: stderr: "" Apr 1 14:28:32.542: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in 
name=update-demo pods to come up. Apr 1 14:28:32.543: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7954' Apr 1 14:28:32.668: INFO: stderr: "" Apr 1 14:28:32.668: INFO: stdout: "update-demo-nautilus-6wdzk update-demo-nautilus-6z68b " Apr 1 14:28:32.668: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6wdzk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7954' Apr 1 14:28:32.758: INFO: stderr: "" Apr 1 14:28:32.758: INFO: stdout: "" Apr 1 14:28:32.758: INFO: update-demo-nautilus-6wdzk is created but not running Apr 1 14:28:37.759: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7954' Apr 1 14:28:37.867: INFO: stderr: "" Apr 1 14:28:37.867: INFO: stdout: "update-demo-nautilus-6wdzk update-demo-nautilus-6z68b " Apr 1 14:28:37.867: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6wdzk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7954' Apr 1 14:28:37.965: INFO: stderr: "" Apr 1 14:28:37.965: INFO: stdout: "true" Apr 1 14:28:37.965: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6wdzk -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7954' Apr 1 14:28:38.053: INFO: stderr: "" Apr 1 14:28:38.053: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 1 14:28:38.053: INFO: validating pod update-demo-nautilus-6wdzk Apr 1 14:28:38.058: INFO: got data: { "image": "nautilus.jpg" } Apr 1 14:28:38.058: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 1 14:28:38.058: INFO: update-demo-nautilus-6wdzk is verified up and running Apr 1 14:28:38.058: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6z68b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7954' Apr 1 14:28:38.159: INFO: stderr: "" Apr 1 14:28:38.159: INFO: stdout: "true" Apr 1 14:28:38.159: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6z68b -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7954' Apr 1 14:28:38.273: INFO: stderr: "" Apr 1 14:28:38.273: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 1 14:28:38.273: INFO: validating pod update-demo-nautilus-6z68b Apr 1 14:28:38.278: INFO: got data: { "image": "nautilus.jpg" } Apr 1 14:28:38.278: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
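The go-template repeated in the `kubectl get pods` calls above prints `true` only when a container status with the expected name exists and is in the `running` state. A hedged Python equivalent of that check (the dict shapes are simplified stand-ins for the pod API object):

```python
def container_running(pod, name):
    """Mirror the log's go-template: report True only if a
    containerStatus with the given name exists and its state
    has a 'running' entry."""
    for cs in pod.get("status", {}).get("containerStatuses", []):
        if cs.get("name") == name and "running" in cs.get("state", {}):
            return True
    return False

pending = {"status": {}}  # template output: "" (pod created but not running)
running = {"status": {"containerStatuses": [
    {"name": "update-demo", "state": {"running": {}}}]}}  # template output: "true"
print(container_running(pending, "update-demo"), container_running(running, "update-demo"))
```

This explains the first probe in the log returning an empty stdout ("created but not running") and the retry five seconds later returning "true".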
Apr 1 14:28:38.278: INFO: update-demo-nautilus-6z68b is verified up and running STEP: rolling-update to new replication controller Apr 1 14:28:38.280: INFO: scanned /root for discovery docs: Apr 1 14:28:38.280: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-7954' Apr 1 14:29:00.798: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Apr 1 14:29:00.798: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 1 14:29:00.798: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7954' Apr 1 14:29:00.900: INFO: stderr: "" Apr 1 14:29:00.900: INFO: stdout: "update-demo-kitten-7jgh8 update-demo-kitten-bw7pm update-demo-nautilus-6z68b " STEP: Replicas for name=update-demo: expected=2 actual=3 Apr 1 14:29:05.901: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7954' Apr 1 14:29:06.003: INFO: stderr: "" Apr 1 14:29:06.003: INFO: stdout: "update-demo-kitten-7jgh8 update-demo-kitten-bw7pm " Apr 1 14:29:06.003: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-7jgh8 -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7954' Apr 1 14:29:06.107: INFO: stderr: "" Apr 1 14:29:06.107: INFO: stdout: "true" Apr 1 14:29:06.107: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-7jgh8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7954' Apr 1 14:29:06.211: INFO: stderr: "" Apr 1 14:29:06.211: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Apr 1 14:29:06.211: INFO: validating pod update-demo-kitten-7jgh8 Apr 1 14:29:06.215: INFO: got data: { "image": "kitten.jpg" } Apr 1 14:29:06.215: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Apr 1 14:29:06.215: INFO: update-demo-kitten-7jgh8 is verified up and running Apr 1 14:29:06.215: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-bw7pm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7954' Apr 1 14:29:06.308: INFO: stderr: "" Apr 1 14:29:06.308: INFO: stdout: "true" Apr 1 14:29:06.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-bw7pm -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7954' Apr 1 14:29:06.405: INFO: stderr: "" Apr 1 14:29:06.405: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Apr 1 14:29:06.405: INFO: validating pod update-demo-kitten-bw7pm Apr 1 14:29:06.432: INFO: got data: { "image": "kitten.jpg" } Apr 1 14:29:06.432: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Apr 1 14:29:06.432: INFO: update-demo-kitten-bw7pm is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 14:29:06.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7954" for this suite. Apr 1 14:29:28.453: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 14:29:28.559: INFO: namespace kubectl-7954 deletion completed in 22.115398463s • [SLOW TEST:56.329 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 14:29:28.560: INFO: >>> kubeConfig: /root/.kube/config 
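The rolling-update transcript above ("Scaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)") follows a greedy schedule bounded by an availability floor and a total-pod ceiling. A minimal Python sketch of that schedule, under the simplifying assumption that each new pod becomes available immediately (in reality the controller waits for readiness between steps):

```python
def rolling_update_steps(old, desired, max_total=3, min_available=2):
    """Sketch of the rolling-update scaling loop: alternately scale the
    new controller up to the total-pod ceiling and the old controller
    down to the availability floor, until old=0 and new=desired."""
    new, steps = 0, []
    while old > 0 or new < desired:
        up = min(desired, max_total - old)   # room under "don't exceed 3 pods"
        if up > new:
            new = up
            steps.append(("new", new))
        down = max(0, min_available - new)   # floor from "keep 2 pods available"
        if down < old:
            old = down
            steps.append(("old", old))
    return steps

print(rolling_update_steps(2, 2))
# [('new', 1), ('old', 1), ('new', 2), ('old', 0)]
```

The emitted steps match the log exactly: kitten up to 1, nautilus down to 1, kitten up to 2, nautilus down to 0.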
STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Apr 1 14:29:28.606: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 1 14:29:28.633: INFO: Waiting for terminating namespaces to be deleted... Apr 1 14:29:28.635: INFO: Logging pods the kubelet thinks is on node iruya-worker before test Apr 1 14:29:28.643: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Apr 1 14:29:28.643: INFO: Container kube-proxy ready: true, restart count 0 Apr 1 14:29:28.643: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Apr 1 14:29:28.643: INFO: Container kindnet-cni ready: true, restart count 0 Apr 1 14:29:28.643: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test Apr 1 14:29:28.651: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded) Apr 1 14:29:28.651: INFO: Container coredns ready: true, restart count 0 Apr 1 14:29:28.651: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded) Apr 1 14:29:28.651: INFO: Container coredns ready: true, restart count 0 Apr 1 14:29:28.651: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded) Apr 1 14:29:28.651: INFO: Container kube-proxy ready: true, restart count 0 Apr 1 14:29:28.651: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded) Apr 1 14:29:28.651: INFO: Container kindnet-cni ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.1601b877d792be3a], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 14:29:29.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7020" for this suite. Apr 1 14:29:35.704: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 14:29:35.770: INFO: namespace sched-pred-7020 deletion completed in 6.092855703s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:7.210 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 14:29:35.773: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting 
for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 14:29:42.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-5199" for this suite. Apr 1 14:29:48.058: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 14:29:48.152: INFO: namespace namespaces-5199 deletion completed in 6.108939783s STEP: Destroying namespace "nsdeletetest-6779" for this suite. Apr 1 14:29:48.154: INFO: Namespace nsdeletetest-6779 was already deleted STEP: Destroying namespace "nsdeletetest-6052" for this suite. 
Apr 1 14:29:54.180: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 14:29:54.266: INFO: namespace nsdeletetest-6052 deletion completed in 6.111086478s • [SLOW TEST:18.492 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 14:29:54.266: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-6ad61844-92de-4d79-aa64-cf470b55eda7 STEP: Creating a pod to test consume configMaps Apr 1 14:29:54.356: INFO: Waiting up to 5m0s for pod "pod-configmaps-7928705a-e1a6-4caa-8539-4a25a3ead2b2" in namespace "configmap-2341" to be "success or failure" Apr 1 14:29:54.372: INFO: Pod "pod-configmaps-7928705a-e1a6-4caa-8539-4a25a3ead2b2": Phase="Pending", Reason="", readiness=false. 
Elapsed: 15.288378ms Apr 1 14:29:56.376: INFO: Pod "pod-configmaps-7928705a-e1a6-4caa-8539-4a25a3ead2b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019163491s Apr 1 14:29:58.380: INFO: Pod "pod-configmaps-7928705a-e1a6-4caa-8539-4a25a3ead2b2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023692991s STEP: Saw pod success Apr 1 14:29:58.380: INFO: Pod "pod-configmaps-7928705a-e1a6-4caa-8539-4a25a3ead2b2" satisfied condition "success or failure" Apr 1 14:29:58.383: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-7928705a-e1a6-4caa-8539-4a25a3ead2b2 container configmap-volume-test: STEP: delete the pod Apr 1 14:29:58.423: INFO: Waiting for pod pod-configmaps-7928705a-e1a6-4caa-8539-4a25a3ead2b2 to disappear Apr 1 14:29:58.471: INFO: Pod pod-configmaps-7928705a-e1a6-4caa-8539-4a25a3ead2b2 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 14:29:58.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2341" for this suite. 
Apr 1 14:30:04.489: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 14:30:04.564: INFO: namespace configmap-2341 deletion completed in 6.089201651s • [SLOW TEST:10.298 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 14:30:04.565: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC Apr 1 14:30:04.641: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8752' Apr 1 14:30:04.896: INFO: stderr: "" Apr 1 14:30:04.896: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. 
Apr 1 14:30:05.902: INFO: Selector matched 1 pods for map[app:redis] Apr 1 14:30:05.902: INFO: Found 0 / 1 Apr 1 14:30:06.901: INFO: Selector matched 1 pods for map[app:redis] Apr 1 14:30:06.901: INFO: Found 0 / 1 Apr 1 14:30:07.901: INFO: Selector matched 1 pods for map[app:redis] Apr 1 14:30:07.901: INFO: Found 0 / 1 Apr 1 14:30:08.901: INFO: Selector matched 1 pods for map[app:redis] Apr 1 14:30:08.901: INFO: Found 1 / 1 Apr 1 14:30:08.901: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Apr 1 14:30:08.905: INFO: Selector matched 1 pods for map[app:redis] Apr 1 14:30:08.905: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Apr 1 14:30:08.905: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-5wnx9 --namespace=kubectl-8752 -p {"metadata":{"annotations":{"x":"y"}}}' Apr 1 14:30:09.014: INFO: stderr: "" Apr 1 14:30:09.014: INFO: stdout: "pod/redis-master-5wnx9 patched\n" STEP: checking annotations Apr 1 14:30:09.032: INFO: Selector matched 1 pods for map[app:redis] Apr 1 14:30:09.032: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 14:30:09.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8752" for this suite. 
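The `kubectl patch pod ... -p {"metadata":{"annotations":{"x":"y"}}}` call above merges the patch into the existing object rather than replacing it. For a simple annotation patch like this one, the effect is the same as RFC 7386 JSON Merge Patch; a hedged sketch (kubectl's default for built-in types is actually strategic merge patch, which only differs for lists):

```python
def json_merge_patch(target, patch):
    """RFC 7386 JSON Merge Patch: dicts merge recursively,
    None deletes a key, anything else replaces the value."""
    if not isinstance(patch, dict):
        return patch
    result = dict(target) if isinstance(target, dict) else {}
    for key, value in patch.items():
        if value is None:
            result.pop(key, None)
        else:
            result[key] = json_merge_patch(result.get(key, {}), value)
    return result

pod = {"metadata": {"name": "redis-master-5wnx9", "annotations": {}}}
patched = json_merge_patch(pod, {"metadata": {"annotations": {"x": "y"}}})
print(patched["metadata"])  # name is preserved, annotation x=y added
```

This is why the patched pod keeps its name and other metadata while gaining only the `x: y` annotation the test then verifies.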
Apr 1 14:30:31.047: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 14:30:31.122: INFO: namespace kubectl-8752 deletion completed in 22.086523467s • [SLOW TEST:26.557 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 14:30:31.123: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-156.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-156.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-156.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-156.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-156.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-156.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 1 14:30:35.260: INFO: DNS probes using dns-156/dns-test-ee5a7484-df55-43ee-b09a-6979ca6ee312 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 14:30:35.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-156" for this suite. 
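The awk fragment in the probe scripts above (`'{print $$1"-"$$2"-"$$3"-"$$4".dns-156.pod.cluster.local"}'`) builds the pod's A record name from its IP by replacing dots with dashes. A hedged Python equivalent (the IP below is a hypothetical example, not taken from the log):

```python
def pod_a_record(pod_ip, namespace, domain="cluster.local"):
    """Build the pod A-record name that the DNS probe derives with awk:
    dots in the pod IP become dashes, then namespace and cluster
    domain are appended."""
    return "%s.%s.pod.%s" % (pod_ip.replace(".", "-"), namespace, domain)

print(pod_a_record("10.244.1.5", "dns-156"))
# 10-244-1-5.dns-156.pod.cluster.local
```

The probe then resolves that name with `dig` over both UDP (`+notcp`) and TCP (`+tcp`) and writes an `OK` marker file for each successful lookup, which is what the "DNS probes ... succeeded" line reports.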
Apr 1 14:30:41.317: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 14:30:41.395: INFO: namespace dns-156 deletion completed in 6.098205443s • [SLOW TEST:10.271 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 14:30:41.395: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 1 14:30:41.483: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3db22fc1-0e08-4f27-b186-13532954785e" in namespace "downward-api-9346" to be "success or failure" Apr 1 14:30:41.499: INFO: Pod "downwardapi-volume-3db22fc1-0e08-4f27-b186-13532954785e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 15.620084ms Apr 1 14:30:43.504: INFO: Pod "downwardapi-volume-3db22fc1-0e08-4f27-b186-13532954785e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020141852s Apr 1 14:30:45.507: INFO: Pod "downwardapi-volume-3db22fc1-0e08-4f27-b186-13532954785e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023807789s STEP: Saw pod success Apr 1 14:30:45.507: INFO: Pod "downwardapi-volume-3db22fc1-0e08-4f27-b186-13532954785e" satisfied condition "success or failure" Apr 1 14:30:45.510: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-3db22fc1-0e08-4f27-b186-13532954785e container client-container: STEP: delete the pod Apr 1 14:30:45.546: INFO: Waiting for pod downwardapi-volume-3db22fc1-0e08-4f27-b186-13532954785e to disappear Apr 1 14:30:45.559: INFO: Pod downwardapi-volume-3db22fc1-0e08-4f27-b186-13532954785e no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 14:30:45.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9346" for this suite. 
Apr 1 14:30:51.576: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 14:30:51.655: INFO: namespace downward-api-9346 deletion completed in 6.09221598s • [SLOW TEST:10.260 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 14:30:51.656: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Apr 1 14:30:51.693: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 14:30:59.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7399" for this suite. 
Apr 1 14:31:21.204: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 14:31:21.282: INFO: namespace init-container-7399 deletion completed in 22.094329492s • [SLOW TEST:29.627 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 14:31:21.283: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Apr 1 14:31:21.367: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 14:31:28.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-8601" for this suite. 
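Both init-container specs above (RestartAlways and RestartNever) exercise the same ordering guarantee: init containers run one at a time, in order, and the regular containers start only after every init container succeeds. A hedged simulation of that ordering, purely illustrative and not kubelet code:

```python
def run_pod(init_containers, containers):
    # Simulate init-container ordering: each (name, run) pair runs in
    # sequence; a failing init container blocks the app containers.
    started = []
    for name, run in init_containers:
        started.append(name)
        if not run():
            return started, False  # pod never becomes initialized
    started.extend(name for name, _ in containers)
    return started, True
```

For example, two succeeding init containers followed by one app container start in spec order, while a failing first init container means nothing after it starts.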
Apr 1 14:31:34.607: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 14:31:34.703: INFO: namespace init-container-8601 deletion completed in 6.117996353s • [SLOW TEST:13.420 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 14:31:34.704: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 1 14:31:34.751: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Apr 1 14:31:34.807: INFO: Pod name sample-pod: Found 0 pods out of 1 Apr 1 14:31:39.811: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 1 14:31:39.811: INFO: Creating deployment "test-rolling-update-deployment" Apr 1 14:31:39.814: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set 
"test-rolling-update-controller" has Apr 1 14:31:39.857: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Apr 1 14:31:41.865: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Apr 1 14:31:41.867: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721348299, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721348299, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721348299, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721348299, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 1 14:31:43.872: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Apr 1 14:31:43.882: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-9690,SelfLink:/apis/apps/v1/namespaces/deployment-9690/deployments/test-rolling-update-deployment,UID:11455da6-5f62-4ae8-8dba-b5bf14dc95c0,ResourceVersion:3051818,Generation:1,CreationTimestamp:2020-04-01 14:31:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-04-01 14:31:39 +0000 UTC 2020-04-01 14:31:39 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-04-01 14:31:42 +0000 UTC 2020-04-01 14:31:39 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Apr 1 14:31:43.886: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-9690,SelfLink:/apis/apps/v1/namespaces/deployment-9690/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:2db46529-7ad6-4d65-a73f-8c18ca24b667,ResourceVersion:3051807,Generation:1,CreationTimestamp:2020-04-01 14:31:39 +0000
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 11455da6-5f62-4ae8-8dba-b5bf14dc95c0 0xc002d11797 0xc002d11798}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Apr 1 14:31:43.886: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Apr 1 14:31:43.887: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-9690,SelfLink:/apis/apps/v1/namespaces/deployment-9690/replicasets/test-rolling-update-controller,UID:1fb993c2-5029-4410-9203-d217725d92b8,ResourceVersion:3051816,Generation:2,CreationTimestamp:2020-04-01 14:31:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 11455da6-5f62-4ae8-8dba-b5bf14dc95c0 0xc002d116c7 0xc002d116c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Apr 1 14:31:43.891: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-5hhdr" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-5hhdr,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-9690,SelfLink:/api/v1/namespaces/deployment-9690/pods/test-rolling-update-deployment-79f6b9d75c-5hhdr,UID:fbcde27d-ebf2-4264-a036-984cba408d1d,ResourceVersion:3051806,Generation:0,CreationTimestamp:2020-04-01 14:31:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c 2db46529-7ad6-4d65-a73f-8c18ca24b667 0xc002a74077 0xc002a74078}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-tv8br {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-tv8br,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-tv8br true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a740f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a74110}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:31:39 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:31:42 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:31:42 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:31:39 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.43,StartTime:2020-04-01 14:31:39 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-04-01 14:31:42 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://8982b9a7d0170c3cf61ae1987ba3fa4e2c0d58d501bf3f4fe89dcc088ccb5019}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 14:31:43.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "deployment-9690" for this suite. Apr 1 14:31:49.911: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 14:31:49.993: INFO: namespace deployment-9690 deletion completed in 6.098131758s • [SLOW TEST:15.289 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 14:31:49.993: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating Pod STEP: Waiting for the pod running STEP: Getting the pod STEP: Reading file content from the nginx-container Apr 1 14:31:54.099: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-1a9f985f-8df2-43cf-ab4e-1491553ad728 -c busybox-main-container --namespace=emptydir-4329 -- cat /usr/share/volumeshare/shareddata.txt' Apr 1 14:31:54.335: INFO: stderr: "I0401 14:31:54.238600 3048 log.go:172] (0xc00068e630) (0xc000a348c0) Create stream\nI0401 14:31:54.239042 3048 log.go:172] (0xc00068e630) (0xc000a348c0) Stream added, broadcasting: 1\nI0401 14:31:54.243769 3048
log.go:172] (0xc00068e630) Reply frame received for 1\nI0401 14:31:54.243818 3048 log.go:172] (0xc00068e630) (0xc000a34000) Create stream\nI0401 14:31:54.243833 3048 log.go:172] (0xc00068e630) (0xc000a34000) Stream added, broadcasting: 3\nI0401 14:31:54.244668 3048 log.go:172] (0xc00068e630) Reply frame received for 3\nI0401 14:31:54.244708 3048 log.go:172] (0xc00068e630) (0xc0008e60a0) Create stream\nI0401 14:31:54.244723 3048 log.go:172] (0xc00068e630) (0xc0008e60a0) Stream added, broadcasting: 5\nI0401 14:31:54.245768 3048 log.go:172] (0xc00068e630) Reply frame received for 5\nI0401 14:31:54.328939 3048 log.go:172] (0xc00068e630) Data frame received for 3\nI0401 14:31:54.328966 3048 log.go:172] (0xc000a34000) (3) Data frame handling\nI0401 14:31:54.328986 3048 log.go:172] (0xc00068e630) Data frame received for 5\nI0401 14:31:54.329025 3048 log.go:172] (0xc0008e60a0) (5) Data frame handling\nI0401 14:31:54.329067 3048 log.go:172] (0xc000a34000) (3) Data frame sent\nI0401 14:31:54.329098 3048 log.go:172] (0xc00068e630) Data frame received for 3\nI0401 14:31:54.329261 3048 log.go:172] (0xc000a34000) (3) Data frame handling\nI0401 14:31:54.331024 3048 log.go:172] (0xc00068e630) Data frame received for 1\nI0401 14:31:54.331046 3048 log.go:172] (0xc000a348c0) (1) Data frame handling\nI0401 14:31:54.331054 3048 log.go:172] (0xc000a348c0) (1) Data frame sent\nI0401 14:31:54.331062 3048 log.go:172] (0xc00068e630) (0xc000a348c0) Stream removed, broadcasting: 1\nI0401 14:31:54.331119 3048 log.go:172] (0xc00068e630) Go away received\nI0401 14:31:54.331253 3048 log.go:172] (0xc00068e630) (0xc000a348c0) Stream removed, broadcasting: 1\nI0401 14:31:54.331264 3048 log.go:172] (0xc00068e630) (0xc000a34000) Stream removed, broadcasting: 3\nI0401 14:31:54.331269 3048 log.go:172] (0xc00068e630) (0xc0008e60a0) Stream removed, broadcasting: 5\n" Apr 1 14:31:54.335: INFO: stdout: "Hello from the busy-box sub-container\n" [AfterEach] [sig-storage] EmptyDir volumes 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 14:31:54.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4329" for this suite. Apr 1 14:32:00.355: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 14:32:00.442: INFO: namespace emptydir-4329 deletion completed in 6.102120781s • [SLOW TEST:10.449 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 14:32:00.442: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC Apr 1 14:32:00.507: INFO: namespace kubectl-2719 Apr 1 14:32:00.507: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2719' Apr 1 14:32:00.789: INFO: stderr: "" Apr 1 14:32:00.789: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis 
master to start. Apr 1 14:32:01.793: INFO: Selector matched 1 pods for map[app:redis] Apr 1 14:32:01.793: INFO: Found 0 / 1 Apr 1 14:32:02.793: INFO: Selector matched 1 pods for map[app:redis] Apr 1 14:32:02.793: INFO: Found 0 / 1 Apr 1 14:32:03.794: INFO: Selector matched 1 pods for map[app:redis] Apr 1 14:32:03.794: INFO: Found 1 / 1 Apr 1 14:32:03.794: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Apr 1 14:32:03.798: INFO: Selector matched 1 pods for map[app:redis] Apr 1 14:32:03.798: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Apr 1 14:32:03.798: INFO: wait on redis-master startup in kubectl-2719 Apr 1 14:32:03.798: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-k25fm redis-master --namespace=kubectl-2719' Apr 1 14:32:03.901: INFO: stderr: "" Apr 1 14:32:03.901: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 01 Apr 14:32:03.341 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 01 Apr 14:32:03.341 # Server started, Redis version 3.2.12\n1:M 01 Apr 14:32:03.341 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. 
Redis must be restarted after THP is disabled.\n1:M 01 Apr 14:32:03.341 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC Apr 1 14:32:03.901: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-2719' Apr 1 14:32:04.053: INFO: stderr: "" Apr 1 14:32:04.053: INFO: stdout: "service/rm2 exposed\n" Apr 1 14:32:04.064: INFO: Service rm2 in namespace kubectl-2719 found. STEP: exposing service Apr 1 14:32:06.071: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-2719' Apr 1 14:32:06.233: INFO: stderr: "" Apr 1 14:32:06.233: INFO: stdout: "service/rm3 exposed\n" Apr 1 14:32:06.245: INFO: Service rm3 in namespace kubectl-2719 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 14:32:08.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2719" for this suite. 
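The expose steps above run two `kubectl expose` invocations: one against the replication controller (`rm2`, port 1234 targeting 6379) and one against the resulting service (`rm3`, port 2345 targeting 6379). A small sketch of how such an argv is assembled; the flag names come straight from the logged commands, while the helper itself is illustrative, not part of the e2e framework:

```python
def kubectl_expose(kind, name, new_name, port, target_port, namespace,
                   kubectl="/usr/local/bin/kubectl",
                   kubeconfig="/root/.kube/config"):
    # Build the argv for `kubectl expose <kind> <name> ...` as it
    # appears in the test log; defaults mirror the logged paths.
    return [
        kubectl, f"--kubeconfig={kubeconfig}", "expose", kind, name,
        f"--name={new_name}", f"--port={port}",
        f"--target-port={target_port}", f"--namespace={namespace}",
    ]
```

`kubectl_expose("rc", "redis-master", "rm2", 1234, 6379, "kubectl-2719")` reproduces the first logged command; swapping in `("service", "rm2", "rm3", 2345, 6379, ...)` gives the second.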
Apr 1 14:32:30.278: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 1 14:32:30.376: INFO: namespace kubectl-2719 deletion completed in 22.110203572s • [SLOW TEST:29.934 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 1 14:32:30.377: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Apr 1 14:32:30.448: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 1 14:32:30.454: INFO: Number of nodes with available pods: 0
Apr 1 14:32:30.454: INFO: Node iruya-worker is running more than one daemon pod
Apr 1 14:32:31.459: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 1 14:32:31.462: INFO: Number of nodes with available pods: 0
Apr 1 14:32:31.462: INFO: Node iruya-worker is running more than one daemon pod
Apr 1 14:32:32.458: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 1 14:32:32.461: INFO: Number of nodes with available pods: 0
Apr 1 14:32:32.461: INFO: Node iruya-worker is running more than one daemon pod
Apr 1 14:32:33.458: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 1 14:32:33.461: INFO: Number of nodes with available pods: 0
Apr 1 14:32:33.461: INFO: Node iruya-worker is running more than one daemon pod
Apr 1 14:32:34.459: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 1 14:32:34.462: INFO: Number of nodes with available pods: 1
Apr 1 14:32:34.462: INFO: Node iruya-worker2 is running more than one daemon pod
Apr 1 14:32:35.459: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 1 14:32:35.462: INFO: Number of nodes with available pods: 2
Apr 1 14:32:35.462: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Apr 1 14:32:35.475: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 1 14:32:35.479: INFO: Number of nodes with available pods: 1
Apr 1 14:32:35.479: INFO: Node iruya-worker is running more than one daemon pod
Apr 1 14:32:36.484: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 1 14:32:36.488: INFO: Number of nodes with available pods: 1
Apr 1 14:32:36.488: INFO: Node iruya-worker is running more than one daemon pod
Apr 1 14:32:37.484: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 1 14:32:37.486: INFO: Number of nodes with available pods: 1
Apr 1 14:32:37.486: INFO: Node iruya-worker is running more than one daemon pod
Apr 1 14:32:38.485: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 1 14:32:38.488: INFO: Number of nodes with available pods: 2
Apr 1 14:32:38.488: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9081, will wait for the garbage collector to delete the pods
Apr 1 14:32:38.553: INFO: Deleting DaemonSet.extensions daemon-set took: 6.825718ms
Apr 1 14:32:38.854: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.23968ms
Apr 1 14:32:51.958: INFO: Number of nodes with available pods: 0
Apr 1 14:32:51.958: INFO: Number of running nodes: 0, number of available pods: 0
Apr 1 14:32:51.960: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9081/daemonsets","resourceVersion":"3052107"},"items":null}
Apr 1 14:32:51.963: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9081/pods","resourceVersion":"3052107"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 14:32:51.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-9081" for this suite.
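The DaemonSet exercised above is created as "daemon-set" in namespace daemonsets-9081; the test forces one of its pods to the Failed phase and waits for the controller to revive it. A hypothetical manifest of that shape (only the name and namespace come from this log; the image, labels, and overall layout are illustrative assumptions, not the test's actual source):

```yaml
# Hedged sketch of a DaemonSet like the one driven by this conformance test.
# Image and labels are assumptions; name/namespace match the log above.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
  namespace: daemonsets-9081
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
```

Because the pod template carries no toleration for the node-role.kubernetes.io/master:NoSchedule taint, the scheduler skips iruya-control-plane, which is exactly the "skip checking this node" message repeated in the log above.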
Apr 1 14:32:57.998: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 14:32:58.068: INFO: namespace daemonsets-9081 deletion completed in 6.09166611s
• [SLOW TEST:27.691 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 14:32:58.068: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-tgjl
STEP: Creating a pod to test atomic-volume-subpath
Apr 1 14:32:58.145: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-tgjl" in namespace "subpath-8479" to be "success or failure"
Apr 1 14:32:58.164: INFO: Pod "pod-subpath-test-configmap-tgjl": Phase="Pending", Reason="", readiness=false. Elapsed: 19.033598ms
Apr 1 14:33:00.169: INFO: Pod "pod-subpath-test-configmap-tgjl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023582389s
Apr 1 14:33:02.173: INFO: Pod "pod-subpath-test-configmap-tgjl": Phase="Running", Reason="", readiness=true. Elapsed: 4.027961734s
Apr 1 14:33:04.178: INFO: Pod "pod-subpath-test-configmap-tgjl": Phase="Running", Reason="", readiness=true. Elapsed: 6.032251355s
Apr 1 14:33:06.182: INFO: Pod "pod-subpath-test-configmap-tgjl": Phase="Running", Reason="", readiness=true. Elapsed: 8.036576542s
Apr 1 14:33:08.186: INFO: Pod "pod-subpath-test-configmap-tgjl": Phase="Running", Reason="", readiness=true. Elapsed: 10.040898172s
Apr 1 14:33:10.190: INFO: Pod "pod-subpath-test-configmap-tgjl": Phase="Running", Reason="", readiness=true. Elapsed: 12.04473459s
Apr 1 14:33:12.195: INFO: Pod "pod-subpath-test-configmap-tgjl": Phase="Running", Reason="", readiness=true. Elapsed: 14.049177291s
Apr 1 14:33:14.199: INFO: Pod "pod-subpath-test-configmap-tgjl": Phase="Running", Reason="", readiness=true. Elapsed: 16.053501602s
Apr 1 14:33:16.203: INFO: Pod "pod-subpath-test-configmap-tgjl": Phase="Running", Reason="", readiness=true. Elapsed: 18.057883838s
Apr 1 14:33:18.208: INFO: Pod "pod-subpath-test-configmap-tgjl": Phase="Running", Reason="", readiness=true. Elapsed: 20.062106473s
Apr 1 14:33:20.212: INFO: Pod "pod-subpath-test-configmap-tgjl": Phase="Running", Reason="", readiness=true. Elapsed: 22.066488104s
Apr 1 14:33:22.216: INFO: Pod "pod-subpath-test-configmap-tgjl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.070515321s
STEP: Saw pod success
Apr 1 14:33:22.216: INFO: Pod "pod-subpath-test-configmap-tgjl" satisfied condition "success or failure"
Apr 1 14:33:22.219: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-configmap-tgjl container test-container-subpath-configmap-tgjl:
STEP: delete the pod
Apr 1 14:33:22.281: INFO: Waiting for pod pod-subpath-test-configmap-tgjl to disappear
Apr 1 14:33:22.288: INFO: Pod pod-subpath-test-configmap-tgjl no longer exists
STEP: Deleting pod pod-subpath-test-configmap-tgjl
Apr 1 14:33:22.288: INFO: Deleting pod "pod-subpath-test-configmap-tgjl" in namespace "subpath-8479"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 14:33:22.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-8479" for this suite.
Apr 1 14:33:28.304: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 14:33:28.387: INFO: namespace subpath-8479 deletion completed in 6.093586468s
• [SLOW TEST:30.319 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-apps] Deployment deployment should delete old replica sets [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1
14:33:28.387: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 1 14:33:28.444: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Apr 1 14:33:33.449: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 1 14:33:33.449: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Apr 1 14:33:33.511: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-7368,SelfLink:/apis/apps/v1/namespaces/deployment-7368/deployments/test-cleanup-deployment,UID:c376e69b-436c-4bce-9a67-9844505e79bd,ResourceVersion:3052258,Generation:1,CreationTimestamp:2020-04-01 14:33:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} Apr 1 14:33:33.517: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment": 
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-7368,SelfLink:/apis/apps/v1/namespaces/deployment-7368/replicasets/test-cleanup-deployment-55bbcbc84c,UID:7faa8c98-940f-4dc9-acb9-a17a816f6afc,ResourceVersion:3052260,Generation:1,CreationTimestamp:2020-04-01 14:33:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment c376e69b-436c-4bce-9a67-9844505e79bd 0xc001c5be97 0xc001c5be98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Apr 1 14:33:33.517: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Apr 1 14:33:33.518: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-7368,SelfLink:/apis/apps/v1/namespaces/deployment-7368/replicasets/test-cleanup-controller,UID:c973ba04-d1f2-4f63-a2f3-b3f298692aa4,ResourceVersion:3052259,Generation:1,CreationTimestamp:2020-04-01 14:33:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment c376e69b-436c-4bce-9a67-9844505e79bd 0xc001c5bdc7 0xc001c5bdc8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Apr 1 14:33:33.574: INFO: Pod "test-cleanup-controller-mlw8b" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-mlw8b,GenerateName:test-cleanup-controller-,Namespace:deployment-7368,SelfLink:/api/v1/namespaces/deployment-7368/pods/test-cleanup-controller-mlw8b,UID:1d3d140b-b6cc-49d7-9121-2b0862b91730,ResourceVersion:3052250,Generation:0,CreationTimestamp:2020-04-01 14:33:28 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller c973ba04-d1f2-4f63-a2f3-b3f298692aa4 0xc0004b3517 0xc0004b3518}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-c9czc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-c9czc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-c9czc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0004b3700} {node.kubernetes.io/unreachable Exists NoExecute 0xc0004b3720}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:33:28 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:33:30 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:33:30 +0000 UTC } {PodScheduled True 
0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:33:28 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.47,StartTime:2020-04-01 14:33:28 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-04-01 14:33:30 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://b10363bfe65defdb03b22b544dde236ad4e5a5010cda4b10d87fc5e1b3c6b1f7}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 1 14:33:33.574: INFO: Pod "test-cleanup-deployment-55bbcbc84c-4ncwz" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-4ncwz,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-7368,SelfLink:/api/v1/namespaces/deployment-7368/pods/test-cleanup-deployment-55bbcbc84c-4ncwz,UID:22c8e683-8463-411a-8784-3545aa276c15,ResourceVersion:3052264,Generation:0,CreationTimestamp:2020-04-01 14:33:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c 7faa8c98-940f-4dc9-acb9-a17a816f6afc 0xc0004b39d7 0xc0004b39d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-c9czc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-c9czc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-c9czc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0004b3b50} {node.kubernetes.io/unreachable Exists NoExecute 0xc0004b3b70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-01 14:33:33 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 1 14:33:33.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7368" for this suite. 
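The Deployment dump earlier in this section shows RevisionHistoryLimit:*0, which is the setting that makes the controller delete superseded ReplicaSets rather than retain them for rollback, and is what the "should delete old replica sets" test verifies. A manifest sketch of the same configuration, reconstructed from the logged spec (the name, namespace, labels, image, and replica count appear in the dump; the YAML layout itself is an illustrative reconstruction, not the test's source):

```yaml
# Sketch of the cleanup behavior under test: revisionHistoryLimit: 0 tells the
# Deployment controller to garbage-collect old ReplicaSets after a rollout.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-cleanup-deployment
  namespace: deployment-7368
  labels:
    name: cleanup-pod
spec:
  replicas: 1
  revisionHistoryLimit: 0
  selector:
    matchLabels:
      name: cleanup-pod
  template:
    metadata:
      labels:
        name: cleanup-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
```

With the default revisionHistoryLimit of 10, the old nginx-based ReplicaSet (test-cleanup-controller above) would have been scaled down but kept; with 0 it is deleted outright once superseded.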
Apr 1 14:33:39.660: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 14:33:39.747: INFO: namespace deployment-7368 deletion completed in 6.147651752s
• [SLOW TEST:11.360 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 14:33:39.748: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 14:33:43.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-4955" for this suite.
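The hostAliases test above runs a busybox pod and checks that the kubelet writes the configured aliases into the container's /etc/hosts. A hypothetical pod of that shape (the IP and hostnames here are invented for illustration; the test's actual values are not shown in this log):

```yaml
# Hedged sketch: static host entries injected via spec.hostAliases.
# The kubelet manages /etc/hosts for pods with a pod network, appending
# these entries; values below are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: busybox-host-aliases
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "123.45.67.89"
    hostnames:
    - "foo.local"
    - "bar.local"
  containers:
  - name: busybox
    image: busybox
    command: ["cat", "/etc/hosts"]
```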
Apr 1 14:34:33.848: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 14:34:33.931: INFO: namespace kubelet-test-4955 deletion completed in 50.09548971s
• [SLOW TEST:54.184 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 14:34:33.932: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 14:35:34.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2618" for this suite.
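The probe test above holds its pod for roughly a minute (14:34:33 to 14:35:34) to confirm it stays Running but is never marked Ready and is never restarted: readiness failures, unlike liveness failures, only remove the pod from service endpoints. A sketch of a pod with an always-failing readiness probe (pod name, image, and probe command are assumptions):

```yaml
# Hedged sketch: a readiness probe that always fails. The pod keeps running;
# the kubelet simply never reports it Ready, and the restart count stays 0.
apiVersion: v1
kind: Pod
metadata:
  name: never-ready
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    readinessProbe:
      exec:
        command: ["/bin/false"]
      initialDelaySeconds: 5
      periodSeconds: 5
```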
Apr 1 14:35:56.066: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 14:35:56.140: INFO: namespace container-probe-2618 deletion completed in 22.112053367s
• [SLOW TEST:82.209 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 14:35:56.141: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Apr 1 14:35:56.219: INFO: Waiting up to 5m0s for pod "pod-ca407478-dd96-4967-b506-878c5f86098f" in namespace "emptydir-7797" to be "success or failure"
Apr 1 14:35:56.223: INFO: Pod "pod-ca407478-dd96-4967-b506-878c5f86098f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.330244ms
Apr 1 14:35:58.239: INFO: Pod "pod-ca407478-dd96-4967-b506-878c5f86098f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020252699s
Apr 1 14:36:00.243: INFO: Pod "pod-ca407478-dd96-4967-b506-878c5f86098f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024360459s
STEP: Saw pod success
Apr 1 14:36:00.243: INFO: Pod "pod-ca407478-dd96-4967-b506-878c5f86098f" satisfied condition "success or failure"
Apr 1 14:36:00.246: INFO: Trying to get logs from node iruya-worker pod pod-ca407478-dd96-4967-b506-878c5f86098f container test-container:
STEP: delete the pod
Apr 1 14:36:00.291: INFO: Waiting for pod pod-ca407478-dd96-4967-b506-878c5f86098f to disappear
Apr 1 14:36:00.295: INFO: Pod pod-ca407478-dd96-4967-b506-878c5f86098f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 14:36:00.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7797" for this suite.
Apr 1 14:36:06.311: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 14:36:06.425: INFO: namespace emptydir-7797 deletion completed in 6.127011437s
• [SLOW TEST:10.284 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 1 14:36:06.426: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-472e0caa-f1e2-406a-85db-ee3d8a64e985
STEP: Creating a pod to test consume secrets
Apr 1 14:36:06.495: INFO: Waiting up to 5m0s for pod "pod-secrets-1debea3e-2635-4aa0-8466-b4969fd1a55f" in namespace "secrets-8787" to be "success or failure"
Apr 1 14:36:06.499: INFO: Pod "pod-secrets-1debea3e-2635-4aa0-8466-b4969fd1a55f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.900608ms
Apr 1 14:36:08.505: INFO: Pod "pod-secrets-1debea3e-2635-4aa0-8466-b4969fd1a55f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010009482s
Apr 1 14:36:10.509: INFO: Pod "pod-secrets-1debea3e-2635-4aa0-8466-b4969fd1a55f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014129955s
STEP: Saw pod success
Apr 1 14:36:10.509: INFO: Pod "pod-secrets-1debea3e-2635-4aa0-8466-b4969fd1a55f" satisfied condition "success or failure"
Apr 1 14:36:10.512: INFO: Trying to get logs from node iruya-worker pod pod-secrets-1debea3e-2635-4aa0-8466-b4969fd1a55f container secret-volume-test:
STEP: delete the pod
Apr 1 14:36:10.527: INFO: Waiting for pod pod-secrets-1debea3e-2635-4aa0-8466-b4969fd1a55f to disappear
Apr 1 14:36:10.531: INFO: Pod pod-secrets-1debea3e-2635-4aa0-8466-b4969fd1a55f no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 1 14:36:10.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8787" for this suite.
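The secret-volume test above creates secret-test-472e0caa-f1e2-406a-85db-ee3d8a64e985, mounts it into a pod, and reads the content back from the container named secret-volume-test. A hypothetical manifest of that pattern (the secret name, namespace, and container name come from the log; the key, mount path, and image are assumptions):

```yaml
# Hedged sketch of the secret-as-volume pattern exercised by this test.
# Each key in the Secret becomes a file under the mount path.
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example
  namespace: secrets-8787
spec:
  restartPolicy: Never
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-472e0caa-f1e2-406a-85db-ee3d8a64e985
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["cat", "/etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
```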
Apr 1 14:36:16.547: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 1 14:36:16.624: INFO: namespace secrets-8787 deletion completed in 6.089911285s
• [SLOW TEST:10.197 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
Apr 1 14:36:16.624: INFO: Running AfterSuite actions on all nodes
Apr 1 14:36:16.624: INFO: Running AfterSuite actions on node 1
Apr 1 14:36:16.624: INFO: Skipping dumping logs from cluster

Ran 215 of 4412 Specs in 6032.426 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4197 Skipped
PASS