I1231 12:56:09.772398 8 e2e.go:243] Starting e2e run "1896ca61-1857-4545-b9a3-6049b0001f72" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1577796968 - Will randomize all specs
Will run 215 of 4412 specs

Dec 31 12:56:10.081: INFO: >>> kubeConfig: /root/.kube/config
Dec 31 12:56:10.088: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Dec 31 12:56:10.131: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Dec 31 12:56:10.166: INFO: 10 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Dec 31 12:56:10.166: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Dec 31 12:56:10.166: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Dec 31 12:56:10.182: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Dec 31 12:56:10.182: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Dec 31 12:56:10.182: INFO: e2e test version: v1.15.7
Dec 31 12:56:10.183: INFO: kube-apiserver version: v1.15.1
SS
------------------------------
[sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 12:56:10.184: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
Dec 31 12:56:10.288: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W1231 12:56:40.502071 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 31 12:56:40.502: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 12:56:40.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9321" for this suite.
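The test above verifies that deleting a Deployment with `deleteOptions.PropagationPolicy: Orphan` leaves the ReplicaSet it created in place, with the owner reference stripped rather than the dependent garbage-collected. As a rough in-memory sketch of that semantics (not the actual garbage collector implementation; the object names and dict layout are invented for illustration):

```python
# Hypothetical sketch of deletion propagation semantics. "Orphan" strips
# the owner reference from dependents; "Background" deletes them.
def delete_with_policy(objects, owner, policy):
    """Delete `owner` from `objects`, handling dependents per `policy`."""
    objects.pop(owner)
    for name, meta in list(objects.items()):
        if owner in meta["ownerReferences"]:
            if policy == "Orphan":
                # Orphan: keep the dependent, drop its owner reference.
                meta["ownerReferences"].remove(owner)
            else:
                # Background/Foreground: the GC removes the dependent too.
                objects.pop(name)
    return objects

# Mimics the test: a Deployment owning a ReplicaSet, deleted with Orphan.
cluster = {
    "deploy/update": {"ownerReferences": []},
    "rs/update-abc": {"ownerReferences": ["deploy/update"]},
}
delete_with_policy(cluster, "deploy/update", "Orphan")
# The ReplicaSet survives with an empty ownerReferences list, which is
# exactly what the 30-second "mistakenly deletes the rs" check asserts.
```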
Dec 31 12:56:50.588: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 12:56:51.146: INFO: namespace gc-9321 deletion completed in 10.617103614s
• [SLOW TEST:40.962 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 12:56:51.146: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-d6745063-5414-417a-9f8a-c2e086881d52
STEP: Creating a pod to test consume secrets
Dec 31 12:56:51.311: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b67174c2-d676-4cf7-a442-a83297716e7f" in namespace "projected-8597" to be "success or failure"
Dec 31 12:56:51.322: INFO: Pod "pod-projected-secrets-b67174c2-d676-4cf7-a442-a83297716e7f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.773244ms
Dec 31 12:56:53.329: INFO: Pod "pod-projected-secrets-b67174c2-d676-4cf7-a442-a83297716e7f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017902546s
Dec 31 12:56:55.344: INFO: Pod "pod-projected-secrets-b67174c2-d676-4cf7-a442-a83297716e7f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032463357s
Dec 31 12:56:57.360: INFO: Pod "pod-projected-secrets-b67174c2-d676-4cf7-a442-a83297716e7f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048311583s
Dec 31 12:56:59.368: INFO: Pod "pod-projected-secrets-b67174c2-d676-4cf7-a442-a83297716e7f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.056775053s
Dec 31 12:57:01.377: INFO: Pod "pod-projected-secrets-b67174c2-d676-4cf7-a442-a83297716e7f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.065417566s
STEP: Saw pod success
Dec 31 12:57:01.377: INFO: Pod "pod-projected-secrets-b67174c2-d676-4cf7-a442-a83297716e7f" satisfied condition "success or failure"
Dec 31 12:57:01.382: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-b67174c2-d676-4cf7-a442-a83297716e7f container projected-secret-volume-test:
STEP: delete the pod
Dec 31 12:57:01.565: INFO: Waiting for pod pod-projected-secrets-b67174c2-d676-4cf7-a442-a83297716e7f to disappear
Dec 31 12:57:01.579: INFO: Pod pod-projected-secrets-b67174c2-d676-4cf7-a442-a83297716e7f no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 12:57:01.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8597" for this suite.
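The repeated `Waiting up to 5m0s for pod ... to be "success or failure"` / `Elapsed: ...` lines above come from a poll loop that re-reads the pod phase every couple of seconds until the pod reaches a terminal state or the timeout expires. A minimal sketch of that polling pattern, with `get_phase` standing in for the real API call (the function name and parameters here are illustrative, not the e2e framework's actual API):

```python
import time

def wait_for_pod_phase(get_phase, timeout=300.0, interval=2.0):
    """Poll get_phase() until the pod reaches a terminal phase
    (Succeeded or Failed), mirroring the 'success or failure' wait."""
    start = time.monotonic()
    while time.monotonic() - start < timeout:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        time.sleep(interval)
    raise TimeoutError("pod never reached a terminal phase")

# Simulated phase sequence, like the Pending/Pending/.../Succeeded log lines.
phases = iter(["Pending", "Pending", "Succeeded"])
result = wait_for_pod_phase(lambda: next(phases), interval=0)
```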
Dec 31 12:57:07.619: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 12:57:07.756: INFO: namespace projected-8597 deletion completed in 6.171148426s
• [SLOW TEST:16.610 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 12:57:07.757: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Dec 31 12:57:07.949: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-614,SelfLink:/api/v1/namespaces/watch-614/configmaps/e2e-watch-test-watch-closed,UID:dc8f9347-9abc-40c8-b225-f81aad4d1583,ResourceVersion:18765810,Generation:0,CreationTimestamp:2019-12-31 12:57:07 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 31 12:57:07.950: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-614,SelfLink:/api/v1/namespaces/watch-614/configmaps/e2e-watch-test-watch-closed,UID:dc8f9347-9abc-40c8-b225-f81aad4d1583,ResourceVersion:18765811,Generation:0,CreationTimestamp:2019-12-31 12:57:07 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Dec 31 12:57:07.980: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-614,SelfLink:/api/v1/namespaces/watch-614/configmaps/e2e-watch-test-watch-closed,UID:dc8f9347-9abc-40c8-b225-f81aad4d1583,ResourceVersion:18765812,Generation:0,CreationTimestamp:2019-12-31 12:57:07 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 31 12:57:07.981: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-614,SelfLink:/api/v1/namespaces/watch-614/configmaps/e2e-watch-test-watch-closed,UID:dc8f9347-9abc-40c8-b225-f81aad4d1583,ResourceVersion:18765813,Generation:0,CreationTimestamp:2019-12-31 12:57:07 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 12:57:07.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-614" for this suite.
Dec 31 12:57:14.048: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 12:57:14.134: INFO: namespace watch-614 deletion completed in 6.141204019s
• [SLOW TEST:6.378 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should be able to restart watching from the last resource version observed by the previous watch [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 12:57:14.135: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create and stop a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Dec 31 12:57:14.209: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-305'
Dec 31 12:57:16.307: INFO: stderr: ""
Dec 31 12:57:16.307: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 31 12:57:16.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-305'
Dec 31 12:57:16.532: INFO: stderr: ""
Dec 31 12:57:16.533: INFO: stdout: "update-demo-nautilus-576ft update-demo-nautilus-jjxds "
Dec 31 12:57:16.533: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-576ft -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-305'
Dec 31 12:57:16.693: INFO: stderr: ""
Dec 31 12:57:16.694: INFO: stdout: ""
Dec 31 12:57:16.694: INFO: update-demo-nautilus-576ft is created but not running
Dec 31 12:57:21.694: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-305'
Dec 31 12:57:21.931: INFO: stderr: ""
Dec 31 12:57:21.932: INFO: stdout: "update-demo-nautilus-576ft update-demo-nautilus-jjxds "
Dec 31 12:57:21.932: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-576ft -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-305'
Dec 31 12:57:22.153: INFO: stderr: ""
Dec 31 12:57:22.153: INFO: stdout: ""
Dec 31 12:57:22.153: INFO: update-demo-nautilus-576ft is created but not running
Dec 31 12:57:27.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-305'
Dec 31 12:57:27.314: INFO: stderr: ""
Dec 31 12:57:27.314: INFO: stdout: "update-demo-nautilus-576ft update-demo-nautilus-jjxds "
Dec 31 12:57:27.315: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-576ft -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-305'
Dec 31 12:57:27.520: INFO: stderr: ""
Dec 31 12:57:27.520: INFO: stdout: "true"
Dec 31 12:57:27.521: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-576ft -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-305'
Dec 31 12:57:27.714: INFO: stderr: ""
Dec 31 12:57:27.714: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 31 12:57:27.714: INFO: validating pod update-demo-nautilus-576ft
Dec 31 12:57:27.835: INFO: got data: { "image": "nautilus.jpg" }
Dec 31 12:57:27.836: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 31 12:57:27.836: INFO: update-demo-nautilus-576ft is verified up and running
Dec 31 12:57:27.836: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jjxds -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-305'
Dec 31 12:57:28.085: INFO: stderr: ""
Dec 31 12:57:28.085: INFO: stdout: ""
Dec 31 12:57:28.085: INFO: update-demo-nautilus-jjxds is created but not running
Dec 31 12:57:33.086: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-305'
Dec 31 12:57:33.332: INFO: stderr: ""
Dec 31 12:57:33.332: INFO: stdout: "update-demo-nautilus-576ft update-demo-nautilus-jjxds "
Dec 31 12:57:33.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-576ft -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-305'
Dec 31 12:57:33.433: INFO: stderr: ""
Dec 31 12:57:33.433: INFO: stdout: "true"
Dec 31 12:57:33.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-576ft -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-305'
Dec 31 12:57:33.543: INFO: stderr: ""
Dec 31 12:57:33.543: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 31 12:57:33.543: INFO: validating pod update-demo-nautilus-576ft
Dec 31 12:57:33.572: INFO: got data: { "image": "nautilus.jpg" }
Dec 31 12:57:33.572: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 31 12:57:33.572: INFO: update-demo-nautilus-576ft is verified up and running
Dec 31 12:57:33.572: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jjxds -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-305'
Dec 31 12:57:33.688: INFO: stderr: ""
Dec 31 12:57:33.688: INFO: stdout: "true"
Dec 31 12:57:33.688: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jjxds -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-305'
Dec 31 12:57:33.816: INFO: stderr: ""
Dec 31 12:57:33.816: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 31 12:57:33.816: INFO: validating pod update-demo-nautilus-jjxds
Dec 31 12:57:33.838: INFO: got data: { "image": "nautilus.jpg" }
Dec 31 12:57:33.839: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 31 12:57:33.839: INFO: update-demo-nautilus-jjxds is verified up and running
STEP: using delete to clean up resources
Dec 31 12:57:33.839: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-305'
Dec 31 12:57:33.978: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 31 12:57:33.978: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Dec 31 12:57:33.978: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-305'
Dec 31 12:57:34.158: INFO: stderr: "No resources found.\n"
Dec 31 12:57:34.158: INFO: stdout: ""
Dec 31 12:57:34.158: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-305 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 31 12:57:34.334: INFO: stderr: ""
Dec 31 12:57:34.335: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 12:57:34.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-305" for this suite.
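The `kubectl get pods ... -o template` checks in this test emit `true` only when the named container reports a `running` entry in its `containerStatuses`; until then stdout is empty and the pod is logged as "created but not running". The same predicate, rewritten over a pod's JSON in plain Python for illustration (the dict shapes mirror the Pod API object; this is a sketch, not kubectl's template engine):

```python
def container_running(pod, container_name):
    """Mirror the go-template check: True only if the named container
    has a containerStatus whose state map contains a 'running' entry."""
    for status in pod.get("status", {}).get("containerStatuses", []):
        if status.get("name") == container_name and "running" in status.get("state", {}):
            return True
    return False

# A pod with no containerStatuses yet ("created but not running") ...
pending = {"status": {}}
# ... versus one whose update-demo container reports a running state.
running = {"status": {"containerStatuses": [
    {"name": "update-demo", "state": {"running": {"startedAt": "2019-12-31T12:57:00Z"}}},
]}}
```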
Dec 31 12:57:56.415: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 12:57:56.510: INFO: namespace kubectl-305 deletion completed in 22.158311864s
• [SLOW TEST:42.375 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should create and stop a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 12:57:56.511: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 12:58:06.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-7350" for this suite.
Dec 31 12:58:48.813: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 12:58:48.936: INFO: namespace kubelet-test-7350 deletion completed in 42.148331595s
• [SLOW TEST:52.426 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
when scheduling a busybox Pod with hostAliases
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 12:58:48.937: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create a job from an image, then delete the job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: executing a command with run --rm and attach with stdin
Dec 31 12:58:49.187: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9643 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Dec 31 12:58:59.911: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\n"
Dec 31 12:58:59.911: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 12:59:01.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9643" for this suite.
Dec 31 12:59:07.967: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 12:59:08.077: INFO: namespace kubectl-9643 deletion completed in 6.147675358s
• [SLOW TEST:19.140 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl run --rm job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should create a job from an image, then delete the job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 12:59:08.077: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-b4b94d0d-5379-4163-b681-706bc47a0639
STEP: Creating a pod to test consume configMaps
Dec 31 12:59:08.210: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b03e97cd-946c-4843-973a-b9d39d440f20" in namespace "projected-2225" to be "success or failure"
Dec 31 12:59:08.220: INFO: Pod "pod-projected-configmaps-b03e97cd-946c-4843-973a-b9d39d440f20": Phase="Pending", Reason="", readiness=false. Elapsed: 9.796287ms
Dec 31 12:59:10.227: INFO: Pod "pod-projected-configmaps-b03e97cd-946c-4843-973a-b9d39d440f20": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016559851s
Dec 31 12:59:12.232: INFO: Pod "pod-projected-configmaps-b03e97cd-946c-4843-973a-b9d39d440f20": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022032436s
Dec 31 12:59:14.253: INFO: Pod "pod-projected-configmaps-b03e97cd-946c-4843-973a-b9d39d440f20": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042592905s
Dec 31 12:59:16.259: INFO: Pod "pod-projected-configmaps-b03e97cd-946c-4843-973a-b9d39d440f20": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.048512741s
STEP: Saw pod success
Dec 31 12:59:16.259: INFO: Pod "pod-projected-configmaps-b03e97cd-946c-4843-973a-b9d39d440f20" satisfied condition "success or failure"
Dec 31 12:59:16.262: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-b03e97cd-946c-4843-973a-b9d39d440f20 container projected-configmap-volume-test:
STEP: delete the pod
Dec 31 12:59:16.413: INFO: Waiting for pod pod-projected-configmaps-b03e97cd-946c-4843-973a-b9d39d440f20 to disappear
Dec 31 12:59:16.451: INFO: Pod pod-projected-configmaps-b03e97cd-946c-4843-973a-b9d39d440f20 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 12:59:16.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2225" for this suite.
Dec 31 12:59:22.492: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 12:59:22.644: INFO: namespace projected-2225 deletion completed in 6.18730632s
• [SLOW TEST:14.567 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 12:59:22.644: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-secret-z4bw
STEP: Creating a pod to test atomic-volume-subpath
Dec 31 12:59:22.906: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-z4bw" in namespace "subpath-3821" to be "success or failure"
Dec 31 12:59:22.914: INFO: Pod "pod-subpath-test-secret-z4bw": Phase="Pending", Reason="", readiness=false. Elapsed: 7.48583ms
Dec 31 12:59:24.924: INFO: Pod "pod-subpath-test-secret-z4bw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017601664s
Dec 31 12:59:26.939: INFO: Pod "pod-subpath-test-secret-z4bw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032866839s
Dec 31 12:59:28.952: INFO: Pod "pod-subpath-test-secret-z4bw": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046177157s
Dec 31 12:59:30.968: INFO: Pod "pod-subpath-test-secret-z4bw": Phase="Running", Reason="", readiness=true. Elapsed: 8.06138711s
Dec 31 12:59:32.982: INFO: Pod "pod-subpath-test-secret-z4bw": Phase="Running", Reason="", readiness=true. Elapsed: 10.075843569s
Dec 31 12:59:34.988: INFO: Pod "pod-subpath-test-secret-z4bw": Phase="Running", Reason="", readiness=true. Elapsed: 12.081438274s
Dec 31 12:59:36.995: INFO: Pod "pod-subpath-test-secret-z4bw": Phase="Running", Reason="", readiness=true. Elapsed: 14.089218413s
Dec 31 12:59:39.004: INFO: Pod "pod-subpath-test-secret-z4bw": Phase="Running", Reason="", readiness=true. Elapsed: 16.097938819s
Dec 31 12:59:41.015: INFO: Pod "pod-subpath-test-secret-z4bw": Phase="Running", Reason="", readiness=true. Elapsed: 18.108251583s
Dec 31 12:59:43.028: INFO: Pod "pod-subpath-test-secret-z4bw": Phase="Running", Reason="", readiness=true. Elapsed: 20.12138731s
Dec 31 12:59:45.037: INFO: Pod "pod-subpath-test-secret-z4bw": Phase="Running", Reason="", readiness=true. Elapsed: 22.130435833s
Dec 31 12:59:47.046: INFO: Pod "pod-subpath-test-secret-z4bw": Phase="Running", Reason="", readiness=true. Elapsed: 24.139245952s
Dec 31 12:59:49.055: INFO: Pod "pod-subpath-test-secret-z4bw": Phase="Running", Reason="", readiness=true. Elapsed: 26.148864346s
Dec 31 12:59:51.063: INFO: Pod "pod-subpath-test-secret-z4bw": Phase="Running", Reason="", readiness=true. Elapsed: 28.157092639s
Dec 31 12:59:53.073: INFO: Pod "pod-subpath-test-secret-z4bw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.166789545s
STEP: Saw pod success
Dec 31 12:59:53.073: INFO: Pod "pod-subpath-test-secret-z4bw" satisfied condition "success or failure"
Dec 31 12:59:53.078: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-secret-z4bw container test-container-subpath-secret-z4bw:
STEP: delete the pod
Dec 31 12:59:53.226: INFO: Waiting for pod pod-subpath-test-secret-z4bw to disappear
Dec 31 12:59:53.249: INFO: Pod pod-subpath-test-secret-z4bw no longer exists
STEP: Deleting pod pod-subpath-test-secret-z4bw
Dec 31 12:59:53.249: INFO: Deleting pod "pod-subpath-test-secret-z4bw" in namespace "subpath-3821"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 12:59:53.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-3821" for this suite.
Dec 31 12:59:59.369: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 31 12:59:59.564: INFO: namespace subpath-3821 deletion completed in 6.307385498s • [SLOW TEST:36.920 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 31 12:59:59.564: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's command Dec 31 12:59:59.637: INFO: Waiting up to 5m0s for pod "var-expansion-5b42ddfc-4042-42e2-b788-b33a65bc5a27" in namespace "var-expansion-7666" to be "success or failure" Dec 31 12:59:59.647: INFO: Pod "var-expansion-5b42ddfc-4042-42e2-b788-b33a65bc5a27": Phase="Pending", Reason="", readiness=false. 
Elapsed: 9.002921ms Dec 31 13:00:01.659: INFO: Pod "var-expansion-5b42ddfc-4042-42e2-b788-b33a65bc5a27": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021732471s Dec 31 13:00:03.668: INFO: Pod "var-expansion-5b42ddfc-4042-42e2-b788-b33a65bc5a27": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030374226s Dec 31 13:00:05.674: INFO: Pod "var-expansion-5b42ddfc-4042-42e2-b788-b33a65bc5a27": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036645192s Dec 31 13:00:07.680: INFO: Pod "var-expansion-5b42ddfc-4042-42e2-b788-b33a65bc5a27": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.041978459s STEP: Saw pod success Dec 31 13:00:07.680: INFO: Pod "var-expansion-5b42ddfc-4042-42e2-b788-b33a65bc5a27" satisfied condition "success or failure" Dec 31 13:00:07.683: INFO: Trying to get logs from node iruya-node pod var-expansion-5b42ddfc-4042-42e2-b788-b33a65bc5a27 container dapi-container: STEP: delete the pod Dec 31 13:00:07.811: INFO: Waiting for pod var-expansion-5b42ddfc-4042-42e2-b788-b33a65bc5a27 to disappear Dec 31 13:00:07.819: INFO: Pod var-expansion-5b42ddfc-4042-42e2-b788-b33a65bc5a27 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 31 13:00:07.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7666" for this suite. 
Dec 31 13:00:13.864: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 31 13:00:13.960: INFO: namespace var-expansion-7666 deletion completed in 6.131933885s • [SLOW TEST:14.396 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 31 13:00:13.961: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Dec 31 13:00:30.246: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Dec 31 13:00:30.251: INFO: Pod pod-with-prestop-http-hook still exists Dec 31 13:00:32.251: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Dec 31 13:00:32.277: INFO: Pod pod-with-prestop-http-hook still exists Dec 31 13:00:34.251: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Dec 31 13:00:34.259: INFO: Pod pod-with-prestop-http-hook still exists Dec 31 13:00:36.251: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Dec 31 13:00:36.271: INFO: Pod pod-with-prestop-http-hook still exists Dec 31 13:00:38.251: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Dec 31 13:00:38.260: INFO: Pod pod-with-prestop-http-hook still exists Dec 31 13:00:40.252: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Dec 31 13:00:40.263: INFO: Pod pod-with-prestop-http-hook still exists Dec 31 13:00:42.251: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Dec 31 13:00:42.261: INFO: Pod pod-with-prestop-http-hook still exists Dec 31 13:00:44.251: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Dec 31 13:00:44.259: INFO: Pod pod-with-prestop-http-hook still exists Dec 31 13:00:46.251: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Dec 31 13:00:46.259: INFO: Pod pod-with-prestop-http-hook still exists Dec 31 13:00:48.252: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Dec 31 13:00:48.261: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 31 13:00:48.286: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-7016" for this suite. Dec 31 13:01:10.323: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 31 13:01:10.445: INFO: namespace container-lifecycle-hook-7016 deletion completed in 22.154128373s • [SLOW TEST:56.485 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 31 13:01:10.446: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Dec 31 13:01:10.540: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 31 13:01:19.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3729" for this suite. Dec 31 13:02:01.050: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 31 13:02:01.157: INFO: namespace pods-3729 deletion completed in 42.131453506s • [SLOW TEST:50.711 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 31 13:02:01.157: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210 STEP: creating the pod Dec 31 13:02:01.265: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5977' Dec 31 13:02:01.818: INFO: stderr: "" Dec 31 13:02:01.819: INFO: stdout: "pod/pause created\n" Dec 31 13:02:01.819: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Dec 31 
13:02:01.819: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-5977" to be "running and ready" Dec 31 13:02:01.938: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 119.437384ms Dec 31 13:02:03.955: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.135822311s Dec 31 13:02:05.963: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.144302364s Dec 31 13:02:08.009: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.190360412s Dec 31 13:02:10.024: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 8.205196911s Dec 31 13:02:10.024: INFO: Pod "pause" satisfied condition "running and ready" Dec 31 13:02:10.024: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: adding the label testing-label with value testing-label-value to a pod Dec 31 13:02:10.024: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-5977' Dec 31 13:02:10.162: INFO: stderr: "" Dec 31 13:02:10.162: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Dec 31 13:02:10.163: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-5977' Dec 31 13:02:10.316: INFO: stderr: "" Dec 31 13:02:10.316: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 9s testing-label-value\n" STEP: removing the label testing-label of a pod Dec 31 13:02:10.317: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-5977' Dec 31 13:02:10.430: INFO: stderr: "" Dec 31 13:02:10.430: INFO: stdout: "pod/pause 
labeled\n" STEP: verifying the pod doesn't have the label testing-label Dec 31 13:02:10.430: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-5977' Dec 31 13:02:10.677: INFO: stderr: "" Dec 31 13:02:10.677: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 9s \n" [AfterEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217 STEP: using delete to clean up resources Dec 31 13:02:10.677: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5977' Dec 31 13:02:10.844: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Dec 31 13:02:10.844: INFO: stdout: "pod \"pause\" force deleted\n" Dec 31 13:02:10.845: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-5977' Dec 31 13:02:11.086: INFO: stderr: "No resources found.\n" Dec 31 13:02:11.086: INFO: stdout: "" Dec 31 13:02:11.087: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-5977 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Dec 31 13:02:11.215: INFO: stderr: "" Dec 31 13:02:11.215: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 31 13:02:11.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5977" for this suite. 
Dec 31 13:02:17.243: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 31 13:02:17.977: INFO: namespace kubectl-5977 deletion completed in 6.756492117s • [SLOW TEST:16.820 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 31 13:02:17.977: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-9e5df1a9-50bc-4e75-8fd7-006be5a57f79 STEP: Creating a pod to test consume configMaps Dec 31 13:02:18.101: INFO: Waiting up to 5m0s for pod "pod-configmaps-93efa94a-04ff-458b-8c64-4c00cb879965" in namespace "configmap-5055" to be "success or failure" Dec 31 13:02:18.110: INFO: Pod "pod-configmaps-93efa94a-04ff-458b-8c64-4c00cb879965": Phase="Pending", Reason="", readiness=false. 
Elapsed: 9.027585ms Dec 31 13:02:20.118: INFO: Pod "pod-configmaps-93efa94a-04ff-458b-8c64-4c00cb879965": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017124878s Dec 31 13:02:22.140: INFO: Pod "pod-configmaps-93efa94a-04ff-458b-8c64-4c00cb879965": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039159413s Dec 31 13:02:24.154: INFO: Pod "pod-configmaps-93efa94a-04ff-458b-8c64-4c00cb879965": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053193376s Dec 31 13:02:26.163: INFO: Pod "pod-configmaps-93efa94a-04ff-458b-8c64-4c00cb879965": Phase="Pending", Reason="", readiness=false. Elapsed: 8.061874317s Dec 31 13:02:28.189: INFO: Pod "pod-configmaps-93efa94a-04ff-458b-8c64-4c00cb879965": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.088176304s STEP: Saw pod success Dec 31 13:02:28.189: INFO: Pod "pod-configmaps-93efa94a-04ff-458b-8c64-4c00cb879965" satisfied condition "success or failure" Dec 31 13:02:28.196: INFO: Trying to get logs from node iruya-node pod pod-configmaps-93efa94a-04ff-458b-8c64-4c00cb879965 container configmap-volume-test: STEP: delete the pod Dec 31 13:02:28.467: INFO: Waiting for pod pod-configmaps-93efa94a-04ff-458b-8c64-4c00cb879965 to disappear Dec 31 13:02:28.478: INFO: Pod pod-configmaps-93efa94a-04ff-458b-8c64-4c00cb879965 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 31 13:02:28.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5055" for this suite. 
Dec 31 13:02:34.597: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 31 13:02:34.713: INFO: namespace configmap-5055 deletion completed in 6.226563185s • [SLOW TEST:16.736 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 31 13:02:34.714: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on node default medium Dec 31 13:02:34.889: INFO: Waiting up to 5m0s for pod "pod-0590be51-1754-4a76-926f-fb1fe8550336" in namespace "emptydir-727" to be "success or failure" Dec 31 13:02:34.916: INFO: Pod "pod-0590be51-1754-4a76-926f-fb1fe8550336": Phase="Pending", Reason="", readiness=false. Elapsed: 27.486828ms Dec 31 13:02:36.928: INFO: Pod "pod-0590be51-1754-4a76-926f-fb1fe8550336": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.038895871s Dec 31 13:02:38.935: INFO: Pod "pod-0590be51-1754-4a76-926f-fb1fe8550336": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046230026s Dec 31 13:02:40.947: INFO: Pod "pod-0590be51-1754-4a76-926f-fb1fe8550336": Phase="Pending", Reason="", readiness=false. Elapsed: 6.058531984s Dec 31 13:02:42.953: INFO: Pod "pod-0590be51-1754-4a76-926f-fb1fe8550336": Phase="Pending", Reason="", readiness=false. Elapsed: 8.064285879s Dec 31 13:02:44.959: INFO: Pod "pod-0590be51-1754-4a76-926f-fb1fe8550336": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.070576702s STEP: Saw pod success Dec 31 13:02:44.959: INFO: Pod "pod-0590be51-1754-4a76-926f-fb1fe8550336" satisfied condition "success or failure" Dec 31 13:02:44.962: INFO: Trying to get logs from node iruya-node pod pod-0590be51-1754-4a76-926f-fb1fe8550336 container test-container: STEP: delete the pod Dec 31 13:02:45.019: INFO: Waiting for pod pod-0590be51-1754-4a76-926f-fb1fe8550336 to disappear Dec 31 13:02:45.100: INFO: Pod pod-0590be51-1754-4a76-926f-fb1fe8550336 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 31 13:02:45.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-727" for this suite. 
Dec 31 13:02:51.130: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 31 13:02:51.223: INFO: namespace emptydir-727 deletion completed in 6.114033228s • [SLOW TEST:16.510 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 31 13:02:51.224: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes Dec 31 13:03:01.507: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Dec 31 13:03:21.678: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] 
[k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 31 13:03:21.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4106" for this suite. Dec 31 13:03:27.717: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 31 13:03:27.838: INFO: namespace pods-4106 deletion completed in 6.151185239s • [SLOW TEST:36.614 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 31 13:03:27.839: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs Dec 31 13:03:27.995: INFO: Waiting up to 5m0s for pod "pod-449a63d7-d3f4-465d-8f94-90982d0b00ee" in namespace "emptydir-4784" to be "success or failure" Dec 31 13:03:28.007: INFO: Pod 
"pod-449a63d7-d3f4-465d-8f94-90982d0b00ee": Phase="Pending", Reason="", readiness=false. Elapsed: 11.882355ms Dec 31 13:03:30.026: INFO: Pod "pod-449a63d7-d3f4-465d-8f94-90982d0b00ee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031128794s Dec 31 13:03:32.058: INFO: Pod "pod-449a63d7-d3f4-465d-8f94-90982d0b00ee": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062989205s Dec 31 13:03:34.063: INFO: Pod "pod-449a63d7-d3f4-465d-8f94-90982d0b00ee": Phase="Pending", Reason="", readiness=false. Elapsed: 6.068425477s Dec 31 13:03:36.287: INFO: Pod "pod-449a63d7-d3f4-465d-8f94-90982d0b00ee": Phase="Pending", Reason="", readiness=false. Elapsed: 8.292295049s Dec 31 13:03:38.337: INFO: Pod "pod-449a63d7-d3f4-465d-8f94-90982d0b00ee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.342210254s STEP: Saw pod success Dec 31 13:03:38.337: INFO: Pod "pod-449a63d7-d3f4-465d-8f94-90982d0b00ee" satisfied condition "success or failure" Dec 31 13:03:38.369: INFO: Trying to get logs from node iruya-node pod pod-449a63d7-d3f4-465d-8f94-90982d0b00ee container test-container: STEP: delete the pod Dec 31 13:03:38.557: INFO: Waiting for pod pod-449a63d7-d3f4-465d-8f94-90982d0b00ee to disappear Dec 31 13:03:38.635: INFO: Pod pod-449a63d7-d3f4-465d-8f94-90982d0b00ee no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 31 13:03:38.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4784" for this suite. 
Dec 31 13:03:44.666: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 31 13:03:44.779: INFO: namespace emptydir-4784 deletion completed in 6.134823063s • [SLOW TEST:16.940 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 31 13:03:44.780: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Dec 31 13:03:44.961: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Dec 31 13:03:48.694: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 13:03:48.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-7812" for this suite.
Dec 31 13:03:58.400: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:03:58.530: INFO: namespace replication-controller-7812 deletion completed in 9.769409605s

• [SLOW TEST:13.750 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 13:03:58.531: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Dec 31 13:04:00.307: INFO: Waiting up to 5m0s for pod "pod-b5157360-5f70-4a1e-83ad-5007161bcf67" in namespace "emptydir-1630" to be "success or failure"
Dec 31 13:04:00.337: INFO: Pod "pod-b5157360-5f70-4a1e-83ad-5007161bcf67": Phase="Pending", Reason="", readiness=false. Elapsed: 29.283632ms
Dec 31 13:04:02.346: INFO: Pod "pod-b5157360-5f70-4a1e-83ad-5007161bcf67": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038593012s
Dec 31 13:04:04.352: INFO: Pod "pod-b5157360-5f70-4a1e-83ad-5007161bcf67": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04497062s
Dec 31 13:04:06.359: INFO: Pod "pod-b5157360-5f70-4a1e-83ad-5007161bcf67": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05195464s
Dec 31 13:04:08.367: INFO: Pod "pod-b5157360-5f70-4a1e-83ad-5007161bcf67": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.059834362s
STEP: Saw pod success
Dec 31 13:04:08.367: INFO: Pod "pod-b5157360-5f70-4a1e-83ad-5007161bcf67" satisfied condition "success or failure"
Dec 31 13:04:08.371: INFO: Trying to get logs from node iruya-node pod pod-b5157360-5f70-4a1e-83ad-5007161bcf67 container test-container:
STEP: delete the pod
Dec 31 13:04:08.417: INFO: Waiting for pod pod-b5157360-5f70-4a1e-83ad-5007161bcf67 to disappear
Dec 31 13:04:08.546: INFO: Pod pod-b5157360-5f70-4a1e-83ad-5007161bcf67 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 13:04:08.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1630" for this suite.
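The emptydir test above follows a fixed pattern: create a pod whose container inspects an emptyDir mount's permissions, wait for it to reach "success or failure", then read its logs. A minimal sketch of the kind of manifest the test generates (the name, image, and command here are illustrative, not the generated values from this run):

```yaml
# Hypothetical reconstruction of the test pod: an emptyDir on the node's
# default medium, mounted by a non-root container that reports permissions.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo     # the real run uses a generated name, e.g. pod-b5157360-...
spec:
  securityContext:
    runAsUser: 1001            # non-root, per the (non-root,0666,default) variant
  restartPolicy: Never         # pod terminates, so Phase can reach Succeeded
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "ls -ld /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}               # default medium (node storage); medium: Memory would be tmpfs
```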
Dec 31 13:04:14.595: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:04:14.707: INFO: namespace emptydir-1630 deletion completed in 6.149582346s

• [SLOW TEST:16.177 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 13:04:14.708: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-a227b4a3-1788-408e-92f4-8cee9e103d6a
STEP: Creating a pod to test consume secrets
Dec 31 13:04:14.833: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5ab2829c-49ec-45f3-81ab-91ca0f529984" in namespace "projected-1640" to be "success or failure"
Dec 31 13:04:14.845: INFO: Pod "pod-projected-secrets-5ab2829c-49ec-45f3-81ab-91ca0f529984": Phase="Pending", Reason="", readiness=false. Elapsed: 11.385808ms
Dec 31 13:04:16.855: INFO: Pod "pod-projected-secrets-5ab2829c-49ec-45f3-81ab-91ca0f529984": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021383261s
Dec 31 13:04:18.866: INFO: Pod "pod-projected-secrets-5ab2829c-49ec-45f3-81ab-91ca0f529984": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032834779s
Dec 31 13:04:20.876: INFO: Pod "pod-projected-secrets-5ab2829c-49ec-45f3-81ab-91ca0f529984": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042879356s
Dec 31 13:04:22.893: INFO: Pod "pod-projected-secrets-5ab2829c-49ec-45f3-81ab-91ca0f529984": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.059930718s
STEP: Saw pod success
Dec 31 13:04:22.893: INFO: Pod "pod-projected-secrets-5ab2829c-49ec-45f3-81ab-91ca0f529984" satisfied condition "success or failure"
Dec 31 13:04:22.899: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-5ab2829c-49ec-45f3-81ab-91ca0f529984 container projected-secret-volume-test:
STEP: delete the pod
Dec 31 13:04:22.961: INFO: Waiting for pod pod-projected-secrets-5ab2829c-49ec-45f3-81ab-91ca0f529984 to disappear
Dec 31 13:04:23.034: INFO: Pod pod-projected-secrets-5ab2829c-49ec-45f3-81ab-91ca0f529984 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 13:04:23.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1640" for this suite.
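The "mappings and Item Mode set" wording above refers to two features of projected secret volumes: an `items` entry that remaps a secret key to a different file path, and a per-item `mode` overriding the default file permissions. A sketch of a manifest exercising both (names and paths are illustrative, not the generated ones from this run):

```yaml
# Hypothetical sketch of a projected-secret volume with a key-to-path
# mapping and an explicit per-item mode.
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/projected && cat /etc/projected/new-path"]
    volumeMounts:
    - name: projected-secret
      mountPath: /etc/projected
      readOnly: true
  volumes:
  - name: projected-secret
    projected:
      sources:
      - secret:
          name: my-secret      # the run creates projected-secret-test-map-<uuid>
          items:
          - key: data-1        # secret key...
            path: new-path     # ...remapped to this filename (the "mapping")
            mode: 0400         # per-item mode (the "Item Mode set" part)
```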
Dec 31 13:04:29.064: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:04:29.179: INFO: namespace projected-1640 deletion completed in 6.137459427s

• [SLOW TEST:14.472 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 13:04:29.180: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Dec 31 13:04:40.527: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 13:04:41.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-1086" for this suite.
Dec 31 13:05:05.632: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:05:05.751: INFO: namespace replicaset-1086 deletion completed in 24.145030977s

• [SLOW TEST:36.571 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 13:05:05.752: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-b0c6b5ba-e3b7-4dc9-ab27-4dfbcbe7213d
STEP: Creating a pod to test consume secrets
Dec 31 13:05:05.964: INFO: Waiting up to 5m0s for pod "pod-secrets-2a468e9c-2fd8-4c24-bbc1-d97e514532ec" in namespace "secrets-3785" to be "success or failure"
Dec 31 13:05:06.108: INFO: Pod "pod-secrets-2a468e9c-2fd8-4c24-bbc1-d97e514532ec": Phase="Pending", Reason="", readiness=false. Elapsed: 143.100922ms
Dec 31 13:05:08.119: INFO: Pod "pod-secrets-2a468e9c-2fd8-4c24-bbc1-d97e514532ec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.154425113s
Dec 31 13:05:10.244: INFO: Pod "pod-secrets-2a468e9c-2fd8-4c24-bbc1-d97e514532ec": Phase="Pending", Reason="", readiness=false. Elapsed: 4.279009953s
Dec 31 13:05:12.251: INFO: Pod "pod-secrets-2a468e9c-2fd8-4c24-bbc1-d97e514532ec": Phase="Pending", Reason="", readiness=false. Elapsed: 6.28669632s
Dec 31 13:05:14.258: INFO: Pod "pod-secrets-2a468e9c-2fd8-4c24-bbc1-d97e514532ec": Phase="Pending", Reason="", readiness=false. Elapsed: 8.292909149s
Dec 31 13:05:16.267: INFO: Pod "pod-secrets-2a468e9c-2fd8-4c24-bbc1-d97e514532ec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.302456986s
STEP: Saw pod success
Dec 31 13:05:16.267: INFO: Pod "pod-secrets-2a468e9c-2fd8-4c24-bbc1-d97e514532ec" satisfied condition "success or failure"
Dec 31 13:05:16.273: INFO: Trying to get logs from node iruya-node pod pod-secrets-2a468e9c-2fd8-4c24-bbc1-d97e514532ec container secret-env-test:
STEP: delete the pod
Dec 31 13:05:16.349: INFO: Waiting for pod pod-secrets-2a468e9c-2fd8-4c24-bbc1-d97e514532ec to disappear
Dec 31 13:05:16.354: INFO: Pod pod-secrets-2a468e9c-2fd8-4c24-bbc1-d97e514532ec no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 13:05:16.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3785" for this suite.
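The Secrets-in-env-vars case above validates the `secretKeyRef` mechanism: a secret key is injected into a container's environment, and the container echoes it so the test can verify the value from the pod logs. A minimal sketch of that pattern (names are illustrative, not the generated ones from this run):

```yaml
# Hypothetical sketch of consuming a Secret as an environment variable,
# the pattern the secret-env-test container exercises.
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test
    image: busybox
    command: ["sh", "-c", "echo \"SECRET_DATA=$SECRET_DATA\""]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: my-secret   # the run creates secret-test-<uuid>
          key: data-1       # which key of the secret to inject
```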
Dec 31 13:05:22.602: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:05:22.708: INFO: namespace secrets-3785 deletion completed in 6.346117372s

• [SLOW TEST:16.956 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 13:05:22.709: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Dec 31 13:05:22.834: INFO: Waiting up to 5m0s for pod "pod-e8437a8b-5f3a-4060-bd74-1ed27eedc2f8" in namespace "emptydir-2534" to be "success or failure"
Dec 31 13:05:22.856: INFO: Pod "pod-e8437a8b-5f3a-4060-bd74-1ed27eedc2f8": Phase="Pending", Reason="", readiness=false. Elapsed: 21.294905ms
Dec 31 13:05:24.875: INFO: Pod "pod-e8437a8b-5f3a-4060-bd74-1ed27eedc2f8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041076127s
Dec 31 13:05:26.902: INFO: Pod "pod-e8437a8b-5f3a-4060-bd74-1ed27eedc2f8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067261859s
Dec 31 13:05:29.016: INFO: Pod "pod-e8437a8b-5f3a-4060-bd74-1ed27eedc2f8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.182096376s
Dec 31 13:05:31.028: INFO: Pod "pod-e8437a8b-5f3a-4060-bd74-1ed27eedc2f8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.193271751s
Dec 31 13:05:33.048: INFO: Pod "pod-e8437a8b-5f3a-4060-bd74-1ed27eedc2f8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.213270341s
STEP: Saw pod success
Dec 31 13:05:33.048: INFO: Pod "pod-e8437a8b-5f3a-4060-bd74-1ed27eedc2f8" satisfied condition "success or failure"
Dec 31 13:05:33.053: INFO: Trying to get logs from node iruya-node pod pod-e8437a8b-5f3a-4060-bd74-1ed27eedc2f8 container test-container:
STEP: delete the pod
Dec 31 13:05:33.193: INFO: Waiting for pod pod-e8437a8b-5f3a-4060-bd74-1ed27eedc2f8 to disappear
Dec 31 13:05:33.214: INFO: Pod pod-e8437a8b-5f3a-4060-bd74-1ed27eedc2f8 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 13:05:33.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2534" for this suite.
Dec 31 13:05:39.251: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:05:39.445: INFO: namespace emptydir-2534 deletion completed in 6.226680153s

• [SLOW TEST:16.736 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 13:05:39.446: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Dec 31 13:05:39.580: INFO: Waiting up to 5m0s for pod "pod-6ba18876-f835-4873-876c-4d3589e69f7c" in namespace "emptydir-7825" to be "success or failure"
Dec 31 13:05:39.595: INFO: Pod "pod-6ba18876-f835-4873-876c-4d3589e69f7c": Phase="Pending", Reason="", readiness=false. Elapsed: 14.713734ms
Dec 31 13:05:41.606: INFO: Pod "pod-6ba18876-f835-4873-876c-4d3589e69f7c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026060733s
Dec 31 13:05:43.616: INFO: Pod "pod-6ba18876-f835-4873-876c-4d3589e69f7c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036065352s
Dec 31 13:05:45.664: INFO: Pod "pod-6ba18876-f835-4873-876c-4d3589e69f7c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.083096808s
Dec 31 13:05:47.674: INFO: Pod "pod-6ba18876-f835-4873-876c-4d3589e69f7c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.093341245s
STEP: Saw pod success
Dec 31 13:05:47.674: INFO: Pod "pod-6ba18876-f835-4873-876c-4d3589e69f7c" satisfied condition "success or failure"
Dec 31 13:05:47.678: INFO: Trying to get logs from node iruya-node pod pod-6ba18876-f835-4873-876c-4d3589e69f7c container test-container:
STEP: delete the pod
Dec 31 13:05:47.766: INFO: Waiting for pod pod-6ba18876-f835-4873-876c-4d3589e69f7c to disappear
Dec 31 13:05:47.781: INFO: Pod pod-6ba18876-f835-4873-876c-4d3589e69f7c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 13:05:47.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7825" for this suite.
Dec 31 13:05:53.816: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:05:53.969: INFO: namespace emptydir-7825 deletion completed in 6.181641789s

• [SLOW TEST:14.523 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[k8s.io] Pods
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 13:05:53.970: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 31 13:05:54.088: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 13:06:04.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7540" for this suite.
Dec 31 13:06:48.235: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:06:48.361: INFO: namespace pods-7540 deletion completed in 44.152433497s

• [SLOW TEST:54.392 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions
  should check if v1 is in available api versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 13:06:48.362: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if v1 is in available api versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating api versions
Dec 31 13:06:48.444: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Dec 31 13:06:48.651: INFO: stderr: ""
Dec 31 13:06:48.651: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 13:06:48.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5303" for this suite.
Dec 31 13:06:54.697: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:06:54.816: INFO: namespace kubectl-5303 deletion completed in 6.156970998s

• [SLOW TEST:6.455 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if v1 is in available api versions [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 13:06:54.817: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-4340
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Dec 31 13:06:55.000: INFO: Found 0 stateful pods, waiting for 3
Dec 31 13:07:05.158: INFO: Found 2 stateful pods, waiting for 3
Dec 31 13:07:15.010: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 31 13:07:15.010: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 31 13:07:15.010: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 31 13:07:25.013: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 31 13:07:25.013: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 31 13:07:25.013: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Dec 31 13:07:25.058: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Dec 31 13:07:35.123: INFO: Updating stateful set ss2
Dec 31 13:07:35.186: INFO: Waiting for Pod statefulset-4340/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 31 13:07:45.216: INFO: Waiting for Pod statefulset-4340/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Dec 31 13:07:55.459: INFO: Found 2 stateful pods, waiting for 3
Dec 31 13:08:05.469: INFO: Found 2 stateful pods, waiting for 3
Dec 31 13:08:15.469: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 31 13:08:15.469: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 31 13:08:15.469: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 31 13:08:25.468: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 31 13:08:25.468: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 31 13:08:25.468: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Dec 31 13:08:25.503: INFO: Updating stateful set ss2
Dec 31 13:08:25.555: INFO: Waiting for Pod statefulset-4340/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 31 13:08:35.572: INFO: Waiting for Pod statefulset-4340/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 31 13:08:45.969: INFO: Updating stateful set ss2
Dec 31 13:08:46.233: INFO: Waiting for StatefulSet statefulset-4340/ss2 to complete update
Dec 31 13:08:46.233: INFO: Waiting for Pod statefulset-4340/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 31 13:08:56.252: INFO: Waiting for StatefulSet statefulset-4340/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Dec 31 13:09:06.248: INFO: Deleting all statefulset in ns statefulset-4340
Dec 31 13:09:06.251: INFO: Scaling statefulset ss2 to 0
Dec 31 13:09:46.284: INFO: Waiting for statefulset status.replicas updated to 0
Dec 31 13:09:46.292: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 13:09:46.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-4340" for this suite.
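The canary and phased phases above are driven by the StatefulSet RollingUpdate `partition` field: only pods with an ordinal greater than or equal to the partition receive the new template, so setting it to `replicas - 1` updates a single canary pod, and lowering it step by step phases the rollout. A sketch of the relevant spec (only the image and names `ss2`/`test` come from this log; the rest is an illustrative reconstruction):

```yaml
# Hypothetical sketch of the update strategy behind the canary/phased
# rollout: pods with ordinal >= partition get the new template.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  serviceName: test        # headless service created in the test namespace
  replicas: 3
  selector:
    matchLabels:
      app: ss2
  template:
    metadata:
      labels:
        app: ss2
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.15-alpine   # updated image from the log
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 2         # canary: only ss2-2 updates; lower to 0 for the full rollout
```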
Dec 31 13:09:54.453: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:09:54.613: INFO: namespace statefulset-4340 deletion completed in 8.205566849s

• [SLOW TEST:179.796 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 13:09:54.614: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-33316381-a0dd-40cd-8c05-df9cc7f5c8b3
STEP: Creating a pod to test consume secrets
Dec 31 13:09:54.748: INFO: Waiting up to 5m0s for pod "pod-secrets-c5ee49f4-6823-4884-b3ed-38186b8d3c64" in namespace "secrets-5650" to be "success or failure"
Dec 31 13:09:54.754: INFO: Pod "pod-secrets-c5ee49f4-6823-4884-b3ed-38186b8d3c64": Phase="Pending", Reason="", readiness=false. Elapsed: 5.92796ms
Dec 31 13:09:56.763: INFO: Pod "pod-secrets-c5ee49f4-6823-4884-b3ed-38186b8d3c64": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015722991s
Dec 31 13:09:58.774: INFO: Pod "pod-secrets-c5ee49f4-6823-4884-b3ed-38186b8d3c64": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0265002s
Dec 31 13:10:00.794: INFO: Pod "pod-secrets-c5ee49f4-6823-4884-b3ed-38186b8d3c64": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046532919s
Dec 31 13:10:02.807: INFO: Pod "pod-secrets-c5ee49f4-6823-4884-b3ed-38186b8d3c64": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.05954537s
STEP: Saw pod success
Dec 31 13:10:02.807: INFO: Pod "pod-secrets-c5ee49f4-6823-4884-b3ed-38186b8d3c64" satisfied condition "success or failure"
Dec 31 13:10:02.814: INFO: Trying to get logs from node iruya-node pod pod-secrets-c5ee49f4-6823-4884-b3ed-38186b8d3c64 container secret-volume-test:
STEP: delete the pod
Dec 31 13:10:02.923: INFO: Waiting for pod pod-secrets-c5ee49f4-6823-4884-b3ed-38186b8d3c64 to disappear
Dec 31 13:10:02.930: INFO: Pod pod-secrets-c5ee49f4-6823-4884-b3ed-38186b8d3c64 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 13:10:02.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5650" for this suite.
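The "defaultMode and fsGroup" variant above combines a volume-level `defaultMode` on the secret with a pod-level `fsGroup`, so the projected files carry both the requested permission bits and the supplemental group ownership the non-root container can read. A sketch of that combination (names and values illustrative, not the generated ones from this run):

```yaml
# Hypothetical sketch of a secret volume with defaultMode plus a
# pod-level fsGroup, matching the (non-root, defaultMode, fsGroup) case.
apiVersion: v1
kind: Pod
metadata:
  name: secret-fsgroup-demo
spec:
  securityContext:
    runAsUser: 1000        # non-root
    fsGroup: 1001          # group ownership applied to the mounted files
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -ln /etc/secret-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: my-secret   # the run creates secret-test-<uuid>
      defaultMode: 0440       # mode applied to every projected key
```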
Dec 31 13:10:09.043: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:10:09.121: INFO: namespace secrets-5650 deletion completed in 6.185479052s

• [SLOW TEST:14.508 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 13:10:09.122: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 13:11:03.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-3699" for this suite.
Dec 31 13:11:09.134: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:11:09.218: INFO: namespace container-runtime-3699 deletion completed in 6.107314513s

• [SLOW TEST:60.097 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected combined
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 13:11:09.219: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-projected-all-test-volume-440e0824-870e-4c89-a11a-d9bf2c9c4fee
STEP: Creating secret with name secret-projected-all-test-volume-351d89ca-344f-4045-ac13-824ae46daac0
STEP: Creating a pod to test Check all projections for projected volume plugin
Dec 31 13:11:09.412: INFO: Waiting up to 5m0s for pod "projected-volume-451f22de-d497-439c-b3cf-240ace82779f" in namespace "projected-3478" to be "success or failure"
Dec 31 13:11:09.425: INFO: Pod "projected-volume-451f22de-d497-439c-b3cf-240ace82779f": Phase="Pending", Reason="", readiness=false. Elapsed: 13.301449ms
Dec 31 13:11:11.434: INFO: Pod "projected-volume-451f22de-d497-439c-b3cf-240ace82779f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022295257s
Dec 31 13:11:13.471: INFO: Pod "projected-volume-451f22de-d497-439c-b3cf-240ace82779f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058924087s
Dec 31 13:11:15.500: INFO: Pod "projected-volume-451f22de-d497-439c-b3cf-240ace82779f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.088374952s
Dec 31 13:11:17.515: INFO: Pod "projected-volume-451f22de-d497-439c-b3cf-240ace82779f": Phase="Running", Reason="", readiness=true. Elapsed: 8.102788457s
Dec 31 13:11:19.527: INFO: Pod "projected-volume-451f22de-d497-439c-b3cf-240ace82779f": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 10.11473575s STEP: Saw pod success Dec 31 13:11:19.527: INFO: Pod "projected-volume-451f22de-d497-439c-b3cf-240ace82779f" satisfied condition "success or failure" Dec 31 13:11:19.533: INFO: Trying to get logs from node iruya-node pod projected-volume-451f22de-d497-439c-b3cf-240ace82779f container projected-all-volume-test: STEP: delete the pod Dec 31 13:11:19.621: INFO: Waiting for pod projected-volume-451f22de-d497-439c-b3cf-240ace82779f to disappear Dec 31 13:11:19.632: INFO: Pod projected-volume-451f22de-d497-439c-b3cf-240ace82779f no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 31 13:11:19.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3478" for this suite. Dec 31 13:11:25.689: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 31 13:11:25.869: INFO: namespace projected-3478 deletion completed in 6.230753593s • [SLOW TEST:16.650 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 31 13:11:25.871: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename 
downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Dec 31 13:11:26.072: INFO: Waiting up to 5m0s for pod "downward-api-ab1b35ac-15d5-4826-9703-62512da4e5b1" in namespace "downward-api-2830" to be "success or failure" Dec 31 13:11:26.096: INFO: Pod "downward-api-ab1b35ac-15d5-4826-9703-62512da4e5b1": Phase="Pending", Reason="", readiness=false. Elapsed: 24.004976ms Dec 31 13:11:28.107: INFO: Pod "downward-api-ab1b35ac-15d5-4826-9703-62512da4e5b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03482273s Dec 31 13:11:30.116: INFO: Pod "downward-api-ab1b35ac-15d5-4826-9703-62512da4e5b1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043316847s Dec 31 13:11:32.134: INFO: Pod "downward-api-ab1b35ac-15d5-4826-9703-62512da4e5b1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.061648821s Dec 31 13:11:34.142: INFO: Pod "downward-api-ab1b35ac-15d5-4826-9703-62512da4e5b1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.069782054s STEP: Saw pod success Dec 31 13:11:34.142: INFO: Pod "downward-api-ab1b35ac-15d5-4826-9703-62512da4e5b1" satisfied condition "success or failure" Dec 31 13:11:34.145: INFO: Trying to get logs from node iruya-node pod downward-api-ab1b35ac-15d5-4826-9703-62512da4e5b1 container dapi-container: STEP: delete the pod Dec 31 13:11:34.223: INFO: Waiting for pod downward-api-ab1b35ac-15d5-4826-9703-62512da4e5b1 to disappear Dec 31 13:11:34.242: INFO: Pod downward-api-ab1b35ac-15d5-4826-9703-62512da4e5b1 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 31 13:11:34.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2830" for this suite. Dec 31 13:11:40.312: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 31 13:11:40.422: INFO: namespace downward-api-2830 deletion completed in 6.175439333s • [SLOW TEST:14.552 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 31 13:11:40.423: INFO: >>> kubeConfig: /root/.kube/config STEP: 
Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Dec 31 13:11:49.720: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 31 13:11:49.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9679" for this suite. Dec 31 13:11:55.908: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 31 13:11:56.015: INFO: namespace container-runtime-9679 deletion completed in 6.127563602s • [SLOW TEST:15.592 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS 
------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 31 13:11:56.016: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Dec 31 13:11:56.183: INFO: Pod name pod-release: Found 0 pods out of 1 Dec 31 13:12:01.194: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 31 13:12:02.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-3941" for this suite. 
Dec 31 13:12:08.261: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 31 13:12:08.366: INFO: namespace replication-controller-3941 deletion completed in 6.125261823s • [SLOW TEST:12.349 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 31 13:12:08.366: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Dec 31 13:12:08.528: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b4441fd5-f0e8-4ba4-9903-d493e782a368" in namespace "projected-8489" to be "success or failure" Dec 31 13:12:08.632: INFO: Pod "downwardapi-volume-b4441fd5-f0e8-4ba4-9903-d493e782a368": Phase="Pending", Reason="", readiness=false. 
Elapsed: 103.347913ms Dec 31 13:12:10.645: INFO: Pod "downwardapi-volume-b4441fd5-f0e8-4ba4-9903-d493e782a368": Phase="Pending", Reason="", readiness=false. Elapsed: 2.116394924s Dec 31 13:12:12.651: INFO: Pod "downwardapi-volume-b4441fd5-f0e8-4ba4-9903-d493e782a368": Phase="Pending", Reason="", readiness=false. Elapsed: 4.122776836s Dec 31 13:12:14.662: INFO: Pod "downwardapi-volume-b4441fd5-f0e8-4ba4-9903-d493e782a368": Phase="Pending", Reason="", readiness=false. Elapsed: 6.134235943s Dec 31 13:12:16.671: INFO: Pod "downwardapi-volume-b4441fd5-f0e8-4ba4-9903-d493e782a368": Phase="Pending", Reason="", readiness=false. Elapsed: 8.142603371s Dec 31 13:12:18.688: INFO: Pod "downwardapi-volume-b4441fd5-f0e8-4ba4-9903-d493e782a368": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.159722029s STEP: Saw pod success Dec 31 13:12:18.688: INFO: Pod "downwardapi-volume-b4441fd5-f0e8-4ba4-9903-d493e782a368" satisfied condition "success or failure" Dec 31 13:12:18.694: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-b4441fd5-f0e8-4ba4-9903-d493e782a368 container client-container: STEP: delete the pod Dec 31 13:12:18.879: INFO: Waiting for pod downwardapi-volume-b4441fd5-f0e8-4ba4-9903-d493e782a368 to disappear Dec 31 13:12:18.892: INFO: Pod downwardapi-volume-b4441fd5-f0e8-4ba4-9903-d493e782a368 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 31 13:12:18.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8489" for this suite. 
Dec 31 13:12:24.925: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 31 13:12:25.051: INFO: namespace projected-8489 deletion completed in 6.15053778s • [SLOW TEST:16.685 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 31 13:12:25.051: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test hostPath mode Dec 31 13:12:25.164: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-3030" to be "success or failure" Dec 31 13:12:25.175: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 11.099249ms Dec 31 13:12:27.201: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.037455347s Dec 31 13:12:29.210: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045691965s Dec 31 13:12:31.219: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055418201s Dec 31 13:12:33.227: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.06335294s Dec 31 13:12:35.238: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.073896979s STEP: Saw pod success Dec 31 13:12:35.238: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Dec 31 13:12:35.241: INFO: Trying to get logs from node iruya-node pod pod-host-path-test container test-container-1: STEP: delete the pod Dec 31 13:12:35.310: INFO: Waiting for pod pod-host-path-test to disappear Dec 31 13:12:35.317: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 31 13:12:35.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-3030" for this suite. 
Dec 31 13:12:41.372: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 31 13:12:41.509: INFO: namespace hostpath-3030 deletion completed in 6.183091273s • [SLOW TEST:16.458 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 31 13:12:41.511: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Dec 31 13:12:49.730: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 31 13:12:49.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9819" for this suite. Dec 31 13:12:55.907: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 31 13:12:56.039: INFO: namespace container-runtime-9819 deletion completed in 6.239233193s • [SLOW TEST:14.529 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 31 13:12:56.040: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name 
configmap-test-volume-93c524de-74a5-4675-a924-0bba6d54e9c4 STEP: Creating a pod to test consume configMaps Dec 31 13:12:56.238: INFO: Waiting up to 5m0s for pod "pod-configmaps-59f8d18f-fb47-4200-bd87-5d04510b013e" in namespace "configmap-7414" to be "success or failure" Dec 31 13:12:56.245: INFO: Pod "pod-configmaps-59f8d18f-fb47-4200-bd87-5d04510b013e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.83261ms Dec 31 13:12:58.256: INFO: Pod "pod-configmaps-59f8d18f-fb47-4200-bd87-5d04510b013e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017964447s Dec 31 13:13:00.265: INFO: Pod "pod-configmaps-59f8d18f-fb47-4200-bd87-5d04510b013e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027382831s Dec 31 13:13:02.304: INFO: Pod "pod-configmaps-59f8d18f-fb47-4200-bd87-5d04510b013e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.066055422s Dec 31 13:13:04.312: INFO: Pod "pod-configmaps-59f8d18f-fb47-4200-bd87-5d04510b013e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.074616318s STEP: Saw pod success Dec 31 13:13:04.312: INFO: Pod "pod-configmaps-59f8d18f-fb47-4200-bd87-5d04510b013e" satisfied condition "success or failure" Dec 31 13:13:04.316: INFO: Trying to get logs from node iruya-node pod pod-configmaps-59f8d18f-fb47-4200-bd87-5d04510b013e container configmap-volume-test: STEP: delete the pod Dec 31 13:13:04.374: INFO: Waiting for pod pod-configmaps-59f8d18f-fb47-4200-bd87-5d04510b013e to disappear Dec 31 13:13:04.383: INFO: Pod pod-configmaps-59f8d18f-fb47-4200-bd87-5d04510b013e no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 31 13:13:04.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7414" for this suite. 
Dec 31 13:13:10.410: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 31 13:13:10.541: INFO: namespace configmap-7414 deletion completed in 6.149720203s • [SLOW TEST:14.501 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 31 13:13:10.541: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the initial replication controller Dec 31 13:13:10.680: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1762' Dec 31 13:13:13.648: INFO: stderr: "" Dec 31 13:13:13.648: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo 
pods to come up. Dec 31 13:13:13.649: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1762' Dec 31 13:13:13.975: INFO: stderr: "" Dec 31 13:13:13.975: INFO: stdout: "update-demo-nautilus-66s6h update-demo-nautilus-qwdz7 " Dec 31 13:13:13.975: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-66s6h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1762' Dec 31 13:13:14.210: INFO: stderr: "" Dec 31 13:13:14.210: INFO: stdout: "" Dec 31 13:13:14.210: INFO: update-demo-nautilus-66s6h is created but not running Dec 31 13:13:19.211: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1762' Dec 31 13:13:19.448: INFO: stderr: "" Dec 31 13:13:19.448: INFO: stdout: "update-demo-nautilus-66s6h update-demo-nautilus-qwdz7 " Dec 31 13:13:19.448: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-66s6h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1762' Dec 31 13:13:19.595: INFO: stderr: "" Dec 31 13:13:19.595: INFO: stdout: "" Dec 31 13:13:19.595: INFO: update-demo-nautilus-66s6h is created but not running Dec 31 13:13:24.596: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1762' Dec 31 13:13:24.852: INFO: stderr: "" Dec 31 13:13:24.852: INFO: stdout: "update-demo-nautilus-66s6h update-demo-nautilus-qwdz7 " Dec 31 13:13:24.852: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-66s6h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1762' Dec 31 13:13:24.973: INFO: stderr: "" Dec 31 13:13:24.973: INFO: stdout: "true" Dec 31 13:13:24.973: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-66s6h -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1762' Dec 31 13:13:25.080: INFO: stderr: "" Dec 31 13:13:25.080: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Dec 31 13:13:25.080: INFO: validating pod update-demo-nautilus-66s6h Dec 31 13:13:25.096: INFO: got data: { "image": "nautilus.jpg" } Dec 31 13:13:25.096: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Dec 31 13:13:25.096: INFO: update-demo-nautilus-66s6h is verified up and running Dec 31 13:13:25.096: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qwdz7 -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1762' Dec 31 13:13:25.213: INFO: stderr: "" Dec 31 13:13:25.213: INFO: stdout: "true" Dec 31 13:13:25.213: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qwdz7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1762' Dec 31 13:13:25.302: INFO: stderr: "" Dec 31 13:13:25.302: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Dec 31 13:13:25.302: INFO: validating pod update-demo-nautilus-qwdz7 Dec 31 13:13:25.307: INFO: got data: { "image": "nautilus.jpg" } Dec 31 13:13:25.307: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Dec 31 13:13:25.307: INFO: update-demo-nautilus-qwdz7 is verified up and running STEP: rolling-update to new replication controller Dec 31 13:13:25.309: INFO: scanned /root for discovery docs: Dec 31 13:13:25.309: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-1762' Dec 31 13:13:56.973: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Dec 31 13:13:56.973: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Dec 31 13:13:56.973: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1762' Dec 31 13:13:57.233: INFO: stderr: "" Dec 31 13:13:57.233: INFO: stdout: "update-demo-kitten-vhn4q update-demo-kitten-xv8n9 update-demo-nautilus-qwdz7 " STEP: Replicas for name=update-demo: expected=2 actual=3 Dec 31 13:14:02.234: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1762' Dec 31 13:14:02.319: INFO: stderr: "" Dec 31 13:14:02.319: INFO: stdout: "update-demo-kitten-vhn4q update-demo-kitten-xv8n9 " Dec 31 13:14:02.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-vhn4q -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1762' Dec 31 13:14:02.415: INFO: stderr: "" Dec 31 13:14:02.415: INFO: stdout: "true" Dec 31 13:14:02.416: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-vhn4q -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1762' Dec 31 13:14:02.595: INFO: stderr: "" Dec 31 13:14:02.595: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Dec 31 13:14:02.595: INFO: validating pod update-demo-kitten-vhn4q Dec 31 13:14:02.617: INFO: got data: { "image": "kitten.jpg" } Dec 31 13:14:02.617: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . 
Dec 31 13:14:02.617: INFO: update-demo-kitten-vhn4q is verified up and running Dec 31 13:14:02.617: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-xv8n9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1762' Dec 31 13:14:02.731: INFO: stderr: "" Dec 31 13:14:02.731: INFO: stdout: "true" Dec 31 13:14:02.731: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-xv8n9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1762' Dec 31 13:14:02.836: INFO: stderr: "" Dec 31 13:14:02.836: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Dec 31 13:14:02.836: INFO: validating pod update-demo-kitten-xv8n9 Dec 31 13:14:02.878: INFO: got data: { "image": "kitten.jpg" } Dec 31 13:14:02.878: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Dec 31 13:14:02.878: INFO: update-demo-kitten-xv8n9 is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 31 13:14:02.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1762" for this suite. 
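All of the Update Demo verification above reduces to two Go templates plus a 5-second polling loop: list pods by label until the count settles, then per pod check that the container is running and that it runs the expected image. A shell rendering of the same checks (pod and namespace names are from this run; `exists` is a helper kubectl's `-o template` printer provides; the helper names `pods_match`/`list_pods` are mine, not the framework's):

```shell
# Prints "true" iff container "update-demo" reports state.running.
RUNNING_TMPL='{{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'

# Prints the image of container "update-demo".
IMAGE_TMPL='{{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'

# With a live cluster, the framework's per-pod checks are:
# kubectl get pods "$POD" -n kubectl-1762 -o template --template="$RUNNING_TMPL"
# kubectl get pods "$POD" -n kubectl-1762 -o template --template="$IMAGE_TMPL"

# The enclosing loop re-lists pods by label every 5s until the count matches.
pods_match() {      # usage: pods_match <expected> <cmd...>
    want=$1; shift
    [ "$("$@" | wc -w)" -eq "$want" ]
}

# Stand-in for: kubectl get pods -l name=update-demo -o template \
#   --template='{{range .items}}{{.metadata.name}} {{end}}'
list_pods() { echo "update-demo-kitten-vhn4q update-demo-kitten-xv8n9"; }

pods_match 2 list_pods && echo "replica count settled at 2"
```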
Dec 31 13:14:30.951: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 31 13:14:31.061: INFO: namespace kubectl-1762 deletion completed in 28.174428616s • [SLOW TEST:80.519 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 31 13:14:31.061: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-9008 [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating stateful set ss in namespace statefulset-9008 STEP: Waiting until all stateful set ss replicas 
will be running in namespace statefulset-9008 Dec 31 13:14:31.193: INFO: Found 0 stateful pods, waiting for 1 Dec 31 13:14:41.199: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Dec 31 13:14:41.202: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9008 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Dec 31 13:14:41.942: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n" Dec 31 13:14:41.942: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Dec 31 13:14:41.942: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Dec 31 13:14:41.948: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Dec 31 13:14:51.962: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Dec 31 13:14:51.962: INFO: Waiting for statefulset status.replicas updated to 0 Dec 31 13:14:52.111: INFO: POD NODE PHASE GRACE CONDITIONS Dec 31 13:14:52.111: INFO: ss-0 iruya-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:14:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:14:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:14:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:14:31 +0000 UTC }] Dec 31 13:14:52.111: INFO: ss-1 Pending [] Dec 31 13:14:52.111: INFO: Dec 31 13:14:52.111: INFO: StatefulSet ss has not reached scale 3, at 2 Dec 31 13:14:53.868: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.968610185s Dec 31 13:14:55.117: INFO: Verifying statefulset ss doesn't 
scale past 3 for another 7.211241241s Dec 31 13:14:56.135: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.962107261s Dec 31 13:14:58.514: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.944633491s Dec 31 13:14:59.525: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.564806185s Dec 31 13:15:00.535: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.553653205s Dec 31 13:15:01.552: INFO: Verifying statefulset ss doesn't scale past 3 for another 544.344272ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-9008 Dec 31 13:15:02.572: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9008 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 31 13:15:03.090: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n" Dec 31 13:15:03.091: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Dec 31 13:15:03.091: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Dec 31 13:15:03.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9008 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 31 13:15:03.502: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" Dec 31 13:15:03.502: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Dec 31 13:15:03.502: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Dec 31 13:15:03.503: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9008 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' 
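The repeated `mv` execs in this test are its health toggle: the nginx container's readiness probe serves `/usr/share/nginx/html/index.html`, so moving the file to `/tmp` leaves a pod Running but Ready=false, and moving it back restores Ready=true. The trailing `|| true` keeps `kubectl exec` exiting 0 even when the file is already gone, which is why ss-1 and ss-2 log a harmless "can't rename" on stderr above. A sketch of the two directions (helper names are mine; a live cluster is required, so the commands are only printed here):

```shell
NS=statefulset-9008                # namespace from this run
WEBROOT=/usr/share/nginx/html

# Print the exec the test runs to flip a pod unhealthy / healthy again.
break_readiness()   { echo kubectl exec -n "$NS" "$1" -- /bin/sh -c "mv -v $WEBROOT/index.html /tmp/ || true"; }
restore_readiness() { echo kubectl exec -n "$NS" "$1" -- /bin/sh -c "mv -v /tmp/index.html $WEBROOT/ || true"; }

break_readiness ss-0
restore_readiness ss-0
```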
Dec 31 13:15:04.253: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" Dec 31 13:15:04.253: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Dec 31 13:15:04.253: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Dec 31 13:15:04.261: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Dec 31 13:15:04.261: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Dec 31 13:15:04.261: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Dec 31 13:15:04.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9008 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Dec 31 13:15:04.896: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n" Dec 31 13:15:04.896: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Dec 31 13:15:04.896: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Dec 31 13:15:04.896: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9008 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Dec 31 13:15:05.200: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n" Dec 31 13:15:05.200: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Dec 31 13:15:05.200: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Dec 31 13:15:05.200: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9008 ss-2 
-- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Dec 31 13:15:05.688: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n" Dec 31 13:15:05.688: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Dec 31 13:15:05.688: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Dec 31 13:15:05.688: INFO: Waiting for statefulset status.replicas updated to 0 Dec 31 13:15:05.762: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Dec 31 13:15:15.780: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Dec 31 13:15:15.780: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Dec 31 13:15:15.780: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Dec 31 13:15:15.845: INFO: POD NODE PHASE GRACE CONDITIONS Dec 31 13:15:15.845: INFO: ss-0 iruya-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:14:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:15:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:15:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:14:31 +0000 UTC }] Dec 31 13:15:15.845: INFO: ss-1 iruya-server-sfge57q7djm7 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:14:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:15:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:15:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:14:52 +0000 UTC }] Dec 31 13:15:15.845: 
INFO: ss-2 iruya-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:14:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:15:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:15:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:14:52 +0000 UTC }] Dec 31 13:15:15.845: INFO: Dec 31 13:15:15.845: INFO: StatefulSet ss has not reached scale 0, at 3 Dec 31 13:15:17.680: INFO: POD NODE PHASE GRACE CONDITIONS Dec 31 13:15:17.680: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:14:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:15:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:15:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:14:31 +0000 UTC }] Dec 31 13:15:17.680: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:14:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:15:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:15:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:14:52 +0000 UTC }] Dec 31 13:15:17.680: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:14:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:15:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:15:06 +0000 UTC 
ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:14:52 +0000 UTC }] Dec 31 13:15:17.680: INFO: Dec 31 13:15:17.680: INFO: StatefulSet ss has not reached scale 0, at 3 Dec 31 13:15:18.712: INFO: POD NODE PHASE GRACE CONDITIONS Dec 31 13:15:18.713: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:14:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:15:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:15:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:14:31 +0000 UTC }] Dec 31 13:15:18.713: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:14:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:15:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:15:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:14:52 +0000 UTC }] Dec 31 13:15:18.713: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:14:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:15:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:15:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:14:52 +0000 UTC }] Dec 31 13:15:18.713: INFO: Dec 31 13:15:18.713: INFO: StatefulSet ss has not reached scale 0, at 3 Dec 31 13:15:19.726: INFO: POD NODE PHASE GRACE CONDITIONS Dec 31 13:15:19.726: INFO: ss-0 iruya-node 
Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:14:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:15:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:15:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:14:31 +0000 UTC }] Dec 31 13:15:19.726: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:14:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:15:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:15:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:14:52 +0000 UTC }] Dec 31 13:15:19.726: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:14:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:15:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:15:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:14:52 +0000 UTC }] Dec 31 13:15:19.726: INFO: Dec 31 13:15:19.726: INFO: StatefulSet ss has not reached scale 0, at 3 Dec 31 13:15:20.736: INFO: POD NODE PHASE GRACE CONDITIONS Dec 31 13:15:20.736: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:14:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:15:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:15:05 +0000 UTC ContainersNotReady containers with 
unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:14:31 +0000 UTC }] Dec 31 13:15:20.736: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:14:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:15:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:15:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:14:52 +0000 UTC }] Dec 31 13:15:20.736: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:14:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:15:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:15:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:14:52 +0000 UTC }] Dec 31 13:15:20.736: INFO: Dec 31 13:15:20.736: INFO: StatefulSet ss has not reached scale 0, at 3 Dec 31 13:15:21.747: INFO: POD NODE PHASE GRACE CONDITIONS Dec 31 13:15:21.747: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:14:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:15:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:15:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:14:31 +0000 UTC }] Dec 31 13:15:21.747: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:14:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:15:05 +0000 UTC 
ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:15:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:14:52 +0000 UTC }] Dec 31 13:15:21.747: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:14:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:15:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:15:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:14:52 +0000 UTC }] Dec 31 13:15:21.747: INFO: Dec 31 13:15:21.747: INFO: StatefulSet ss has not reached scale 0, at 3 Dec 31 13:15:22.764: INFO: POD NODE PHASE GRACE CONDITIONS Dec 31 13:15:22.764: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:14:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:15:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:15:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:14:31 +0000 UTC }] Dec 31 13:15:22.764: INFO: ss-1 iruya-server-sfge57q7djm7 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:14:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:15:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:15:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:14:52 +0000 UTC }] Dec 31 13:15:22.764: INFO: ss-2 iruya-node Pending 30s 
[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:14:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:15:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:15:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:14:52 +0000 UTC }] Dec 31 13:15:22.764: INFO: Dec 31 13:15:22.764: INFO: StatefulSet ss has not reached scale 0, at 3 Dec 31 13:15:23.780: INFO: POD NODE PHASE GRACE CONDITIONS Dec 31 13:15:23.781: INFO: ss-0 iruya-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:14:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:15:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:15:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:14:31 +0000 UTC }] Dec 31 13:15:23.781: INFO: ss-1 iruya-server-sfge57q7djm7 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:14:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:15:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:15:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:14:52 +0000 UTC }] Dec 31 13:15:23.781: INFO: Dec 31 13:15:23.781: INFO: StatefulSet ss has not reached scale 0, at 2 Dec 31 13:15:24.809: INFO: POD NODE PHASE GRACE CONDITIONS Dec 31 13:15:24.809: INFO: ss-0 iruya-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:14:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:15:05 +0000 UTC ContainersNotReady containers 
with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:15:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:14:31 +0000 UTC }] Dec 31 13:15:24.809: INFO: ss-1 iruya-server-sfge57q7djm7 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:14:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:15:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:15:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:14:52 +0000 UTC }] Dec 31 13:15:24.809: INFO: Dec 31 13:15:24.809: INFO: StatefulSet ss has not reached scale 0, at 2 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods run in namespace statefulset-9008 Dec 31 13:15:25.826: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9008 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 31 13:15:26.133: INFO: rc: 1 Dec 31 13:15:26.133: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9008 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc002497f50 exit status 1 true [0xc000010d48 0xc000010d80 0xc000010dc0] [0xc000010d48 0xc000010d80 0xc000010dc0] [0xc000010d70 0xc000010da0] [0xba6c50 0xba6c50] 0xc00244df80 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 Dec 31 13:15:36.133: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9008 ss-0 -- /bin/sh -x -c mv -v
/tmp/index.html /usr/share/nginx/html/ || true' Dec 31 13:15:36.268: INFO: rc: 1 Dec 31 13:15:36.269: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9008 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00141e030 exit status 1 true [0xc000010dd8 0xc000010e40 0xc000010e68] [0xc000010dd8 0xc000010e40 0xc000010e68] [0xc000010e38 0xc000010e50] [0xba6c50 0xba6c50] 0xc003044420 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 31 13:15:46.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9008 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 31 13:15:46.509: INFO: rc: 1 Dec 31 13:15:46.509: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9008 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00141e0f0 exit status 1 true [0xc000010e98 0xc000010f08 0xc000010f28] [0xc000010e98 0xc000010f08 0xc000010f28] [0xc000010ed8 0xc000010f20] [0xba6c50 0xba6c50] 0xc003044720 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 31 13:15:56.511: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9008 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 31 13:15:56.745: INFO: rc: 1 Dec 31 13:15:56.745: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9008 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from 
server (NotFound): pods "ss-0" not found [] 0xc002d4e0c0 exit status 1 true [0xc002060000 0xc002060018 0xc002060030] [0xc002060000 0xc002060018 0xc002060030] [0xc002060010 0xc002060028] [0xba6c50 0xba6c50] 0xc002c165a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Dec 31 13:16:06.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9008 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Dec 31 13:16:06.904: INFO: rc: 1 Dec 31 13:16:06.904: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9008 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00141e240 exit status 1 true [0xc000010f40 0xc000010f78 0xc000010ff0] [0xc000010f40 0xc000010f78 0xc000010ff0] [0xc000010f70 0xc000010fc0] [0xba6c50 0xba6c50] 0xc003044c00 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1
[identical RunHostCmd retry cycles from 13:16:16 through 13:20:21, one every 10s, each exiting rc: 1 with 'Error from server (NotFound): pods "ss-0" not found', elided]
Dec 31 13:20:31.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9008 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html
/usr/share/nginx/html/ || true' Dec 31 13:20:31.847: INFO: rc: 1 Dec 31 13:20:31.847: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: Dec 31 13:20:31.847: INFO: Scaling statefulset ss to 0 Dec 31 13:20:31.876: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Dec 31 13:20:31.881: INFO: Deleting all statefulset in ns statefulset-9008 Dec 31 13:20:31.885: INFO: Scaling statefulset ss to 0 Dec 31 13:20:31.895: INFO: Waiting for statefulset status.replicas updated to 0 Dec 31 13:20:31.899: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 31 13:20:31.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9008" for this suite. Dec 31 13:20:39.962: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 31 13:20:40.066: INFO: namespace statefulset-9008 deletion completed in 8.143999026s • [SLOW TEST:369.005 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 31 13:20:40.067: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Dec 31 13:20:40.137: INFO: Creating ReplicaSet my-hostname-basic-c02f5651-3664-4110-9fbf-9f2ce6f1319d Dec 31 13:20:40.212: INFO: Pod name my-hostname-basic-c02f5651-3664-4110-9fbf-9f2ce6f1319d: Found 0 pods out of 1 Dec 31 13:20:45.220: INFO: Pod name my-hostname-basic-c02f5651-3664-4110-9fbf-9f2ce6f1319d: Found 1 pods out of 1 Dec 31 13:20:45.220: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-c02f5651-3664-4110-9fbf-9f2ce6f1319d" is running Dec 31 13:20:47.231: INFO: Pod "my-hostname-basic-c02f5651-3664-4110-9fbf-9f2ce6f1319d-88gs7" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-31 13:20:40 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-31 13:20:40 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-c02f5651-3664-4110-9fbf-9f2ce6f1319d]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-31 13:20:40 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-c02f5651-3664-4110-9fbf-9f2ce6f1319d]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-31 13:20:40 +0000 UTC Reason: Message:}]) Dec 
31 13:20:47.231: INFO: Trying to dial the pod Dec 31 13:20:52.284: INFO: Controller my-hostname-basic-c02f5651-3664-4110-9fbf-9f2ce6f1319d: Got expected result from replica 1 [my-hostname-basic-c02f5651-3664-4110-9fbf-9f2ce6f1319d-88gs7]: "my-hostname-basic-c02f5651-3664-4110-9fbf-9f2ce6f1319d-88gs7", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 31 13:20:52.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-1929" for this suite. Dec 31 13:20:58.627: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 31 13:20:58.761: INFO: namespace replicaset-1929 deletion completed in 6.472348069s • [SLOW TEST:18.695 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 31 13:20:58.762: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating 
configMap with name cm-test-opt-del-8a7973d0-d846-4802-8eab-2a656d1901f5 STEP: Creating configMap with name cm-test-opt-upd-48a579ae-b17e-4f58-a5ad-0d9709cb5e9c STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-8a7973d0-d846-4802-8eab-2a656d1901f5 STEP: Updating configmap cm-test-opt-upd-48a579ae-b17e-4f58-a5ad-0d9709cb5e9c STEP: Creating configMap with name cm-test-opt-create-45a07dad-e530-42ee-9e85-4d7d1110a7de STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 31 13:21:17.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9394" for this suite. Dec 31 13:21:39.448: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 31 13:21:39.543: INFO: namespace projected-9394 deletion completed in 22.212542594s • [SLOW TEST:40.781 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 31 13:21:39.544: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide 
host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Dec 31 13:21:39.662: INFO: Waiting up to 5m0s for pod "downward-api-1be3ed21-5a94-4b19-814b-7cbd0c8fc223" in namespace "downward-api-636" to be "success or failure" Dec 31 13:21:39.669: INFO: Pod "downward-api-1be3ed21-5a94-4b19-814b-7cbd0c8fc223": Phase="Pending", Reason="", readiness=false. Elapsed: 6.106068ms Dec 31 13:21:41.679: INFO: Pod "downward-api-1be3ed21-5a94-4b19-814b-7cbd0c8fc223": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01637073s Dec 31 13:21:43.688: INFO: Pod "downward-api-1be3ed21-5a94-4b19-814b-7cbd0c8fc223": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025348162s Dec 31 13:21:45.697: INFO: Pod "downward-api-1be3ed21-5a94-4b19-814b-7cbd0c8fc223": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03395185s Dec 31 13:21:47.704: INFO: Pod "downward-api-1be3ed21-5a94-4b19-814b-7cbd0c8fc223": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.041126079s STEP: Saw pod success Dec 31 13:21:47.704: INFO: Pod "downward-api-1be3ed21-5a94-4b19-814b-7cbd0c8fc223" satisfied condition "success or failure" Dec 31 13:21:47.709: INFO: Trying to get logs from node iruya-node pod downward-api-1be3ed21-5a94-4b19-814b-7cbd0c8fc223 container dapi-container: STEP: delete the pod Dec 31 13:21:47.803: INFO: Waiting for pod downward-api-1be3ed21-5a94-4b19-814b-7cbd0c8fc223 to disappear Dec 31 13:21:47.956: INFO: Pod downward-api-1be3ed21-5a94-4b19-814b-7cbd0c8fc223 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 31 13:21:47.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-636" for this suite. 
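The downward-api test above verifies that a pod can read the node's IP from an environment variable populated via the downward API (`fieldRef: status.hostIP`). A minimal standalone sketch of such a pod — the pod name, image, and echo command are illustrative assumptions, not taken from the test binary (only the container name `dapi-container` appears in the log):

```shell
# Hypothetical equivalent of the pod this test creates: HOST_IP is
# injected from the downward API field status.hostIP at pod creation.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: dapi-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    command: ["sh", "-c", "echo HOST_IP=$HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP
EOF
```

Once the pod completes, `kubectl logs dapi-demo` should show the node's IP, which is what the test asserts on before deleting the pod.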
Dec 31 13:21:54.012: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 31 13:21:54.273: INFO: namespace downward-api-636 deletion completed in 6.307590257s • [SLOW TEST:14.730 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 31 13:21:54.274: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test env composition Dec 31 13:21:54.414: INFO: Waiting up to 5m0s for pod "var-expansion-54aac94a-9fde-47dc-b33b-ebe2d8485adf" in namespace "var-expansion-6518" to be "success or failure" Dec 31 13:21:54.420: INFO: Pod "var-expansion-54aac94a-9fde-47dc-b33b-ebe2d8485adf": Phase="Pending", Reason="", readiness=false. Elapsed: 5.342172ms Dec 31 13:21:56.439: INFO: Pod "var-expansion-54aac94a-9fde-47dc-b33b-ebe2d8485adf": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.024278166s Dec 31 13:21:58.454: INFO: Pod "var-expansion-54aac94a-9fde-47dc-b33b-ebe2d8485adf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039157327s Dec 31 13:22:00.465: INFO: Pod "var-expansion-54aac94a-9fde-47dc-b33b-ebe2d8485adf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050332338s Dec 31 13:22:02.476: INFO: Pod "var-expansion-54aac94a-9fde-47dc-b33b-ebe2d8485adf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.06115376s STEP: Saw pod success Dec 31 13:22:02.476: INFO: Pod "var-expansion-54aac94a-9fde-47dc-b33b-ebe2d8485adf" satisfied condition "success or failure" Dec 31 13:22:02.481: INFO: Trying to get logs from node iruya-node pod var-expansion-54aac94a-9fde-47dc-b33b-ebe2d8485adf container dapi-container: STEP: delete the pod Dec 31 13:22:02.642: INFO: Waiting for pod var-expansion-54aac94a-9fde-47dc-b33b-ebe2d8485adf to disappear Dec 31 13:22:02.649: INFO: Pod var-expansion-54aac94a-9fde-47dc-b33b-ebe2d8485adf no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 31 13:22:02.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6518" for this suite. 
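The var-expansion test above exercises Kubernetes' `$(VAR)` syntax, which lets one env entry reference another declared earlier in the same container; expansion happens when the pod is created, not in the container's shell. A minimal sketch, with hypothetical names and values:

```shell
# Hypothetical manifest: COMPOSED is built from FOO via Kubernetes'
# $(VAR) dependent-env-var expansion, so the container sees the
# already-expanded value "prefix-foo-value-suffix".
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    command: ["sh", "-c", "echo $COMPOSED"]
    env:
    - name: FOO
      value: foo-value
    - name: COMPOSED
      value: "prefix-$(FOO)-suffix"
EOF
```

Note that ordering matters: `$(FOO)` only resolves because `FOO` is declared before `COMPOSED` in the `env` list; an unresolvable reference is passed through literally.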
Dec 31 13:22:08.682: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:22:08.830: INFO: namespace var-expansion-6518 deletion completed in 6.175214411s
• [SLOW TEST:14.556 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 13:22:08.831: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run default
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420
[It] should create an rc or deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 31 13:22:08.943: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-5851'
Dec 31 13:22:09.128: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 31 13:22:09.128: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426
Dec 31 13:22:09.201: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-5851'
Dec 31 13:22:09.395: INFO: stderr: ""
Dec 31 13:22:09.396: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 13:22:09.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5851" for this suite.
Dec 31 13:22:15.422: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:22:15.551: INFO: namespace kubectl-5851 deletion completed in 6.149031438s
• [SLOW TEST:6.720 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc or deployment from an image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 13:22:15.551: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Dec 31 13:22:15.668: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1828,SelfLink:/api/v1/namespaces/watch-1828/configmaps/e2e-watch-test-label-changed,UID:c71c8dd7-f1a9-4113-81f9-0f8b34403040,ResourceVersion:18769462,Generation:0,CreationTimestamp:2019-12-31 13:22:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 31 13:22:15.668: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1828,SelfLink:/api/v1/namespaces/watch-1828/configmaps/e2e-watch-test-label-changed,UID:c71c8dd7-f1a9-4113-81f9-0f8b34403040,ResourceVersion:18769463,Generation:0,CreationTimestamp:2019-12-31 13:22:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Dec 31 13:22:15.668: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1828,SelfLink:/api/v1/namespaces/watch-1828/configmaps/e2e-watch-test-label-changed,UID:c71c8dd7-f1a9-4113-81f9-0f8b34403040,ResourceVersion:18769464,Generation:0,CreationTimestamp:2019-12-31 13:22:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Dec 31 13:22:25.755: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1828,SelfLink:/api/v1/namespaces/watch-1828/configmaps/e2e-watch-test-label-changed,UID:c71c8dd7-f1a9-4113-81f9-0f8b34403040,ResourceVersion:18769479,Generation:0,CreationTimestamp:2019-12-31 13:22:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 31 13:22:25.756: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1828,SelfLink:/api/v1/namespaces/watch-1828/configmaps/e2e-watch-test-label-changed,UID:c71c8dd7-f1a9-4113-81f9-0f8b34403040,ResourceVersion:18769480,Generation:0,CreationTimestamp:2019-12-31 13:22:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Dec 31 13:22:25.756: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1828,SelfLink:/api/v1/namespaces/watch-1828/configmaps/e2e-watch-test-label-changed,UID:c71c8dd7-f1a9-4113-81f9-0f8b34403040,ResourceVersion:18769481,Generation:0,CreationTimestamp:2019-12-31 13:22:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 13:22:25.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-1828" for this suite.
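The Watchers spec above establishes a watch filtered by a label selector, so relabeling the object out of the selector surfaces as a DELETED event, and restoring the label surfaces as a fresh ADDED event even though the object never left the cluster. A sketch of the watched object as the log describes it (name and label value match the log; the data key is per the "mutation" values shown):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-label-changed
  labels:
    watch-this-configmap: label-changed-and-restored   # the watch is filtered on this label
data:
  mutation: "1"   # the test bumps this value on each modification
```

With such an object in place, `kubectl get configmap -l watch-this-configmap=label-changed-and-restored --watch` would surface the same ADDED/MODIFIED/DELETED sequence the test asserts on.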
Dec 31 13:22:31.794: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:22:31.961: INFO: namespace watch-1828 deletion completed in 6.196804001s
• [SLOW TEST:16.410 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 13:22:31.962: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Dec 31 13:22:40.661: INFO: Successfully updated pod "annotationupdate21f1c1f0-b6cb-4892-bf39-923754da9227"
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 13:22:42.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5381" for this suite.
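The Downward API spec above verifies that a downwardAPI volume reflects annotation updates on a running pod: the kubelet periodically rewrites the projected file when pod metadata changes. A minimal manifest sketch; the pod name, image, file path, and annotation key are illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-demo     # hypothetical; the real test pod carries a UID suffix
  annotations:
    build: "one"                  # the test mutates an annotation like this after the pod starts
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: annotations
        fieldRef:
          fieldPath: metadata.annotations   # file is refreshed when annotations change
```

Unlike env-var-based downward API, only the volume form picks up post-start metadata changes, which is the behavior this conformance test exercises.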
Dec 31 13:23:04.856: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:23:04.994: INFO: namespace downward-api-5381 deletion completed in 22.222226075s
• [SLOW TEST:33.033 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 13:23:04.995: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Dec 31 13:23:13.645: INFO: Successfully updated pod "pod-update-activedeadlineseconds-2b2198b5-1272-4740-8360-7afe63332310"
Dec 31 13:23:13.646: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-2b2198b5-1272-4740-8360-7afe63332310" in namespace "pods-9372" to be "terminated due to deadline exceeded"
Dec 31 13:23:13.657: INFO: Pod "pod-update-activedeadlineseconds-2b2198b5-1272-4740-8360-7afe63332310": Phase="Running", Reason="", readiness=true. Elapsed: 11.627858ms
Dec 31 13:23:15.672: INFO: Pod "pod-update-activedeadlineseconds-2b2198b5-1272-4740-8360-7afe63332310": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.02629441s
Dec 31 13:23:15.672: INFO: Pod "pod-update-activedeadlineseconds-2b2198b5-1272-4740-8360-7afe63332310" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 13:23:15.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9372" for this suite.
Dec 31 13:23:21.724: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:23:21.856: INFO: namespace pods-9372 deletion completed in 6.177746392s
• [SLOW TEST:16.861 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 13:23:21.857: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W1231 13:23:36.842764 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 31 13:23:36.842: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 13:23:36.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4183" for this suite.
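The garbage-collector spec above gives half of the pods a second ownerReference, and because the GC deletes a dependent only once every listed owner is gone, those pods survive the deletion of `simpletest-rc-to-be-deleted`. An illustrative ownerReferences stanza for such a doubly-owned pod; the UIDs are placeholders, not values from the log:

```yaml
metadata:
  ownerReferences:
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc-to-be-deleted
    uid: 11111111-1111-1111-1111-111111111111   # placeholder UID
    blockOwnerDeletion: true                    # holds up foreground deletion of this owner
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc-to-stay
    uid: 22222222-2222-2222-2222-222222222222   # placeholder UID
```

Deleting `simpletest-rc-to-be-deleted` removes only that entry from each dependent; the remaining valid owner keeps the pod alive, which is exactly what the test asserts.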
Dec 31 13:23:46.061: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:23:46.466: INFO: namespace gc-4183 deletion completed in 9.617689208s
• [SLOW TEST:24.610 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 13:23:46.468: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on tmpfs
Dec 31 13:23:46.903: INFO: Waiting up to 5m0s for pod "pod-4600984d-76df-4182-a4e8-c55df7f8afec" in namespace "emptydir-5067" to be "success or failure"
Dec 31 13:23:46.937: INFO: Pod "pod-4600984d-76df-4182-a4e8-c55df7f8afec": Phase="Pending", Reason="", readiness=false. Elapsed: 34.404354ms
Dec 31 13:23:49.159: INFO: Pod "pod-4600984d-76df-4182-a4e8-c55df7f8afec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.255709653s
Dec 31 13:23:51.184: INFO: Pod "pod-4600984d-76df-4182-a4e8-c55df7f8afec": Phase="Pending", Reason="", readiness=false. Elapsed: 4.280860545s
Dec 31 13:23:53.192: INFO: Pod "pod-4600984d-76df-4182-a4e8-c55df7f8afec": Phase="Pending", Reason="", readiness=false. Elapsed: 6.288804923s
Dec 31 13:23:55.257: INFO: Pod "pod-4600984d-76df-4182-a4e8-c55df7f8afec": Phase="Pending", Reason="", readiness=false. Elapsed: 8.354388361s
Dec 31 13:23:57.276: INFO: Pod "pod-4600984d-76df-4182-a4e8-c55df7f8afec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.372556238s
STEP: Saw pod success
Dec 31 13:23:57.276: INFO: Pod "pod-4600984d-76df-4182-a4e8-c55df7f8afec" satisfied condition "success or failure"
Dec 31 13:23:57.280: INFO: Trying to get logs from node iruya-node pod pod-4600984d-76df-4182-a4e8-c55df7f8afec container test-container:
STEP: delete the pod
Dec 31 13:23:57.440: INFO: Waiting for pod pod-4600984d-76df-4182-a4e8-c55df7f8afec to disappear
Dec 31 13:23:57.453: INFO: Pod pod-4600984d-76df-4182-a4e8-c55df7f8afec no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 13:23:57.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5067" for this suite.
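The EmptyDir spec above runs a pod with a memory-backed emptyDir and checks the mount's mode from inside the container. A hedged manifest sketch of such a pod; the pod name, mount path, and command are illustrative, and the conformance test's own verification differs in detail:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "mount | grep /test-volume && stat -c '%a' /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory               # backs the volume with tmpfs instead of node disk
```

With `medium: Memory` the mount shows up as tmpfs, so data counts against the container's memory limit and is lost on node reboot as well as on pod deletion.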
Dec 31 13:24:03.492: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:24:03.625: INFO: namespace emptydir-5067 deletion completed in 6.165595101s
• [SLOW TEST:17.157 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 13:24:03.626: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 31 13:24:03.728: INFO: Creating deployment "nginx-deployment"
Dec 31 13:24:03.736: INFO: Waiting for observed generation 1
Dec 31 13:24:06.436: INFO: Waiting for all required pods to come up
Dec 31 13:24:06.466: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Dec 31 13:24:33.296: INFO: Waiting for deployment "nginx-deployment" to complete
Dec 31 13:24:33.342: INFO: Updating deployment "nginx-deployment" with a non-existent image
Dec 31 13:24:33.356: INFO: Updating deployment nginx-deployment
Dec 31 13:24:33.357: INFO: Waiting for observed generation 2
Dec 31 13:24:37.664: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Dec 31 13:24:37.923: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Dec 31 13:24:37.992: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Dec 31 13:24:38.161: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Dec 31 13:24:38.161: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Dec 31 13:24:38.163: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Dec 31 13:24:38.169: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Dec 31 13:24:38.169: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Dec 31 13:24:38.179: INFO: Updating deployment nginx-deployment
Dec 31 13:24:38.179: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Dec 31 13:24:38.345: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Dec 31 13:24:39.651: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Dec 31 13:24:41.984: INFO: Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-9044,SelfLink:/apis/apps/v1/namespaces/deployment-9044/deployments/nginx-deployment,UID:6c0deec1-6e8d-4876-8647-c8195d345d77,ResourceVersion:18770061,Generation:3,CreationTimestamp:2019-12-31 13:24:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name:
nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2019-12-31 13:24:36 +0000 UTC 2019-12-31 13:24:03 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.} {Available False 2019-12-31 13:24:39 +0000 UTC 2019-12-31 13:24:39 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},} Dec 31 13:24:44.889: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-9044,SelfLink:/apis/apps/v1/namespaces/deployment-9044/replicasets/nginx-deployment-55fb7cb77f,UID:0aa1d9c7-47f8-4a31-a2f0-94b431796a48,ResourceVersion:18770040,Generation:3,CreationTimestamp:2019-12-31 13:24:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 6c0deec1-6e8d-4876-8647-c8195d345d77 0xc002a6db07 0xc002a6db08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Dec 31 13:24:44.889: INFO: All old ReplicaSets of Deployment "nginx-deployment": Dec 31 13:24:44.889: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-9044,SelfLink:/apis/apps/v1/namespaces/deployment-9044/replicasets/nginx-deployment-7b8c6f4498,UID:802b701c-865c-4db8-9407-d79e78520323,ResourceVersion:18770086,Generation:3,CreationTimestamp:2019-12-31 13:24:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 6c0deec1-6e8d-4876-8647-c8195d345d77 0xc002a6dbd7 0xc002a6dbd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Dec 31 13:24:46.581: INFO: Pod "nginx-deployment-55fb7cb77f-2jxxf" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-2jxxf,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9044,SelfLink:/api/v1/namespaces/deployment-9044/pods/nginx-deployment-55fb7cb77f-2jxxf,UID:d956833f-c1a2-49cd-9f99-15f92fd32697,ResourceVersion:18770067,Generation:0,CreationTimestamp:2019-12-31 13:24:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 0aa1d9c7-47f8-4a31-a2f0-94b431796a48 0xc001b61487 0xc001b61488}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7hc2c {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7hc2c,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-7hc2c true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001b614f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001b61510}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:24:39 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 31 13:24:46.582: INFO: Pod "nginx-deployment-55fb7cb77f-2l4rb" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-2l4rb,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9044,SelfLink:/api/v1/namespaces/deployment-9044/pods/nginx-deployment-55fb7cb77f-2l4rb,UID:55d8547f-00c4-4c5a-83ed-94036bacc9bb,ResourceVersion:18770069,Generation:0,CreationTimestamp:2019-12-31 13:24:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 0aa1d9c7-47f8-4a31-a2f0-94b431796a48 0xc001b615a7 
0xc001b615a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7hc2c {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7hc2c,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-7hc2c true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001b61620} {node.kubernetes.io/unreachable Exists NoExecute 0xc001b61640}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:24:40 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 31 13:24:46.582: INFO: Pod "nginx-deployment-55fb7cb77f-5q4x5" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-5q4x5,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9044,SelfLink:/api/v1/namespaces/deployment-9044/pods/nginx-deployment-55fb7cb77f-5q4x5,UID:16bcbcef-ecba-456f-827f-939e581174cc,ResourceVersion:18770029,Generation:0,CreationTimestamp:2019-12-31 13:24:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 0aa1d9c7-47f8-4a31-a2f0-94b431796a48 0xc001b616c7 0xc001b616c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7hc2c {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7hc2c,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-7hc2c true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001b61730} {node.kubernetes.io/unreachable Exists NoExecute 0xc001b61750}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:24:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:24:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:24:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:24:33 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2019-12-31 13:24:33 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 31 13:24:46.583: INFO: Pod "nginx-deployment-55fb7cb77f-6sm8k" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-6sm8k,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9044,SelfLink:/api/v1/namespaces/deployment-9044/pods/nginx-deployment-55fb7cb77f-6sm8k,UID:5ff39f82-4084-4377-8750-b5f9817c4665,ResourceVersion:18770097,Generation:0,CreationTimestamp:2019-12-31 13:24:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 0aa1d9c7-47f8-4a31-a2f0-94b431796a48 0xc001b61837 0xc001b61838}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7hc2c {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7hc2c,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-7hc2c true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc001b618c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001b618e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:24:43 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 31 13:24:46.583: INFO: Pod "nginx-deployment-55fb7cb77f-8cqkt" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-8cqkt,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9044,SelfLink:/api/v1/namespaces/deployment-9044/pods/nginx-deployment-55fb7cb77f-8cqkt,UID:1274b9b7-2aa2-4dd5-9d99-1111932818ae,ResourceVersion:18770021,Generation:0,CreationTimestamp:2019-12-31 13:24:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 0aa1d9c7-47f8-4a31-a2f0-94b431796a48 0xc001b61967 0xc001b61968}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7hc2c {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7hc2c,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-7hc2c true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001b619e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001b61a00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:24:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:24:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:24:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:24:33 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2019-12-31 13:24:33 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 31 13:24:46.583: INFO: Pod "nginx-deployment-55fb7cb77f-9jdt5" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-9jdt5,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9044,SelfLink:/api/v1/namespaces/deployment-9044/pods/nginx-deployment-55fb7cb77f-9jdt5,UID:21b0c0a2-b6eb-4f6a-a551-08d71c2ca4a6,ResourceVersion:18770092,Generation:0,CreationTimestamp:2019-12-31 13:24:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 0aa1d9c7-47f8-4a31-a2f0-94b431796a48 0xc001b61ad7 0xc001b61ad8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7hc2c {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7hc2c,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-7hc2c true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc001b61b50} {node.kubernetes.io/unreachable Exists NoExecute 0xc001b61b70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:24:41 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 31 13:24:46.583: INFO: Pod "nginx-deployment-55fb7cb77f-jbs9w" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-jbs9w,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9044,SelfLink:/api/v1/namespaces/deployment-9044/pods/nginx-deployment-55fb7cb77f-jbs9w,UID:17dcee72-e739-462b-8e9b-616ee00761a7,ResourceVersion:18770079,Generation:0,CreationTimestamp:2019-12-31 13:24:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 0aa1d9c7-47f8-4a31-a2f0-94b431796a48 0xc001b61bf7 0xc001b61bf8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7hc2c {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7hc2c,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-7hc2c true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001b61c70} {node.kubernetes.io/unreachable Exists NoExecute 0xc001b61c90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:24:40 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 31 13:24:46.584: INFO: Pod "nginx-deployment-55fb7cb77f-jrzrk" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-jrzrk,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9044,SelfLink:/api/v1/namespaces/deployment-9044/pods/nginx-deployment-55fb7cb77f-jrzrk,UID:8bf57148-11d7-4e66-a091-541deedcf7e3,ResourceVersion:18770031,Generation:0,CreationTimestamp:2019-12-31 13:24:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 0aa1d9c7-47f8-4a31-a2f0-94b431796a48 0xc001b61d17 0xc001b61d18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7hc2c {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-7hc2c,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-7hc2c true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001b61d90} {node.kubernetes.io/unreachable Exists NoExecute 0xc001b61dc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:24:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:24:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:24:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:24:34 +0000 UTC 
}],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2019-12-31 13:24:34 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 31 13:24:46.584: INFO: Pod "nginx-deployment-55fb7cb77f-n7nd2" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-n7nd2,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9044,SelfLink:/api/v1/namespaces/deployment-9044/pods/nginx-deployment-55fb7cb77f-n7nd2,UID:fefcf68a-00d6-47a5-bcd0-e6ea10a86ae1,ResourceVersion:18770088,Generation:0,CreationTimestamp:2019-12-31 13:24:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 0aa1d9c7-47f8-4a31-a2f0-94b431796a48 0xc001b61ea7 0xc001b61ea8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7hc2c {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7hc2c,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-7hc2c true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001b61f10} {node.kubernetes.io/unreachable Exists NoExecute 0xc001b61f30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:24:41 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 31 13:24:46.584: INFO: Pod "nginx-deployment-55fb7cb77f-q4n5f" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-q4n5f,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9044,SelfLink:/api/v1/namespaces/deployment-9044/pods/nginx-deployment-55fb7cb77f-q4n5f,UID:54f15540-411e-4044-a7af-19a9de3397fa,ResourceVersion:18770089,Generation:0,CreationTimestamp:2019-12-31 13:24:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 0aa1d9c7-47f8-4a31-a2f0-94b431796a48 0xc001b61fb7 
0xc001b61fb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7hc2c {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7hc2c,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-7hc2c true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00280e020} {node.kubernetes.io/unreachable Exists NoExecute 0xc00280e040}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:24:41 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 31 13:24:46.584: INFO: Pod "nginx-deployment-55fb7cb77f-qzqfb" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-qzqfb,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9044,SelfLink:/api/v1/namespaces/deployment-9044/pods/nginx-deployment-55fb7cb77f-qzqfb,UID:9f2d1a6d-f2a1-4fe2-b75e-c679c6f51f5c,ResourceVersion:18770087,Generation:0,CreationTimestamp:2019-12-31 13:24:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 0aa1d9c7-47f8-4a31-a2f0-94b431796a48 0xc00280e0c7 0xc00280e0c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7hc2c {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7hc2c,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-7hc2c true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00280e140} {node.kubernetes.io/unreachable Exists NoExecute 0xc00280e170}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:24:41 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 31 13:24:46.585: INFO: Pod "nginx-deployment-55fb7cb77f-rxqv4" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-rxqv4,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9044,SelfLink:/api/v1/namespaces/deployment-9044/pods/nginx-deployment-55fb7cb77f-rxqv4,UID:828c06a3-717f-41f2-89af-998993c055ac,ResourceVersion:18770011,Generation:0,CreationTimestamp:2019-12-31 13:24:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 0aa1d9c7-47f8-4a31-a2f0-94b431796a48 0xc00280e1f7 
0xc00280e1f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7hc2c {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7hc2c,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-7hc2c true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00280e270} {node.kubernetes.io/unreachable Exists NoExecute 0xc00280e290}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:24:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:24:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:24:33 +0000 UTC ContainersNotReady containers with unready 
status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:24:33 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2019-12-31 13:24:33 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 31 13:24:46.585: INFO: Pod "nginx-deployment-55fb7cb77f-x2x4k" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-x2x4k,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9044,SelfLink:/api/v1/namespaces/deployment-9044/pods/nginx-deployment-55fb7cb77f-x2x4k,UID:1b604343-4f7f-4aa8-a53f-08fb4b6f7258,ResourceVersion:18770035,Generation:0,CreationTimestamp:2019-12-31 13:24:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 0aa1d9c7-47f8-4a31-a2f0-94b431796a48 0xc00280e377 0xc00280e378}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7hc2c {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7hc2c,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-7hc2c true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00280e3e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00280e400}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:24:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:24:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:24:34 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:24:33 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2019-12-31 13:24:34 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 31 13:24:46.585: INFO: Pod "nginx-deployment-7b8c6f4498-2jnpv" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-2jnpv,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9044,SelfLink:/api/v1/namespaces/deployment-9044/pods/nginx-deployment-7b8c6f4498-2jnpv,UID:e377d7dc-9eb8-42b8-8f21-f3711178e31c,ResourceVersion:18770065,Generation:0,CreationTimestamp:2019-12-31 13:24:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 802b701c-865c-4db8-9407-d79e78520323 0xc00280e4d7 0xc00280e4d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7hc2c {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7hc2c,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-7hc2c true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00280e540} {node.kubernetes.io/unreachable Exists NoExecute 0xc00280e560}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:24:39 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 31 13:24:46.585: INFO: Pod "nginx-deployment-7b8c6f4498-8mf7z" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-8mf7z,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9044,SelfLink:/api/v1/namespaces/deployment-9044/pods/nginx-deployment-7b8c6f4498-8mf7z,UID:ea646036-635d-4df1-9fa7-927413b51e55,ResourceVersion:18769941,Generation:0,CreationTimestamp:2019-12-31 13:24:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 802b701c-865c-4db8-9407-d79e78520323 0xc00280e5f7 
0xc00280e5f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7hc2c {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7hc2c,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-7hc2c true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00280e670} {node.kubernetes.io/unreachable Exists NoExecute 0xc00280e690}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:24:04 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:24:29 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:24:29 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:24:04 +0000 UTC 
}],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.4,StartTime:2019-12-31 13:24:04 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-31 13:24:29 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://ed7812534b2ff8449d106d20a7723c2adb0fbc957df02b53d51dd4653e7226b0}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 31 13:24:46.586: INFO: Pod "nginx-deployment-7b8c6f4498-96p6m" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-96p6m,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9044,SelfLink:/api/v1/namespaces/deployment-9044/pods/nginx-deployment-7b8c6f4498-96p6m,UID:e3ae8e03-9f5a-48fa-9b7a-6c97647da961,ResourceVersion:18770085,Generation:0,CreationTimestamp:2019-12-31 13:24:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 802b701c-865c-4db8-9407-d79e78520323 0xc00280e767 0xc00280e768}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7hc2c {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7hc2c,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-7hc2c true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00280e7e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00280e800}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:24:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:24:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:24:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:24:39 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2019-12-31 13:24:39 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 31 13:24:46.586: INFO: Pod "nginx-deployment-7b8c6f4498-98wfk" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-98wfk,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9044,SelfLink:/api/v1/namespaces/deployment-9044/pods/nginx-deployment-7b8c6f4498-98wfk,UID:b6b91f75-141d-4681-8ca0-26f9efc75b6e,ResourceVersion:18770072,Generation:0,CreationTimestamp:2019-12-31 13:24:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 802b701c-865c-4db8-9407-d79e78520323 0xc00280e8d7 0xc00280e8d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7hc2c {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7hc2c,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-7hc2c true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00280e970} {node.kubernetes.io/unreachable Exists NoExecute 0xc00280e990}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:24:40 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 31 13:24:46.586: INFO: Pod "nginx-deployment-7b8c6f4498-9hvsk" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-9hvsk,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9044,SelfLink:/api/v1/namespaces/deployment-9044/pods/nginx-deployment-7b8c6f4498-9hvsk,UID:24ab43dd-e939-4b94-a52a-95cfc345b764,ResourceVersion:18769970,Generation:0,CreationTimestamp:2019-12-31 13:24:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 802b701c-865c-4db8-9407-d79e78520323 0xc00280ea17 0xc00280ea18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7hc2c {nil nil 
nil nil nil SecretVolumeSource{SecretName:default-token-7hc2c,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-7hc2c true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00280ea80} {node.kubernetes.io/unreachable Exists NoExecute 0xc00280eab0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:24:04 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:24:31 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:24:31 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:24:03 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.8,StartTime:2019-12-31 13:24:04 +0000 
UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-31 13:24:30 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://98c677c12e88a8ce967fdfa257da17d0e8aba0345e61b0bb1f531867a9950005}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 31 13:24:46.587: INFO: Pod "nginx-deployment-7b8c6f4498-bjw74" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-bjw74,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9044,SelfLink:/api/v1/namespaces/deployment-9044/pods/nginx-deployment-7b8c6f4498-bjw74,UID:5feddedb-ee18-491a-b5ec-61b8a0419181,ResourceVersion:18770066,Generation:0,CreationTimestamp:2019-12-31 13:24:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 802b701c-865c-4db8-9407-d79e78520323 0xc00280eb87 0xc00280eb88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7hc2c {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7hc2c,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-7hc2c true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00280ebf0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00280ec10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:24:39 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 31 13:24:46.588: INFO: Pod "nginx-deployment-7b8c6f4498-bpl7w" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-bpl7w,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9044,SelfLink:/api/v1/namespaces/deployment-9044/pods/nginx-deployment-7b8c6f4498-bpl7w,UID:3ddadb54-53ab-40c7-813d-1c0aaa09d240,ResourceVersion:18770068,Generation:0,CreationTimestamp:2019-12-31 13:24:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 802b701c-865c-4db8-9407-d79e78520323 0xc00280ec97 
0xc00280ec98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7hc2c {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7hc2c,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-7hc2c true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00280ed10} {node.kubernetes.io/unreachable Exists NoExecute 0xc00280ed30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:24:39 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 31 13:24:46.588: INFO: Pod "nginx-deployment-7b8c6f4498-bscn8" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-bscn8,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9044,SelfLink:/api/v1/namespaces/deployment-9044/pods/nginx-deployment-7b8c6f4498-bscn8,UID:ff5a2873-ae1c-43dd-a6fd-e0247151ee05,ResourceVersion:18769949,Generation:0,CreationTimestamp:2019-12-31 13:24:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 802b701c-865c-4db8-9407-d79e78520323 0xc00280edb7 0xc00280edb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7hc2c {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7hc2c,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-7hc2c true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00280ee40} {node.kubernetes.io/unreachable Exists NoExecute 0xc00280ee60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:24:03 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:24:30 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:24:30 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:24:03 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2019-12-31 13:24:03 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-31 13:24:28 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://da9af5793a860f42e1ca141e858ccde0425ce9634162cf21ffd543c36b762fb2}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 31 13:24:46.588: INFO: Pod "nginx-deployment-7b8c6f4498-clggl" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-clggl,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9044,SelfLink:/api/v1/namespaces/deployment-9044/pods/nginx-deployment-7b8c6f4498-clggl,UID:0c4fc75a-cea2-49c2-afe7-0b1d0696736c,ResourceVersion:18769938,Generation:0,CreationTimestamp:2019-12-31 13:24:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 802b701c-865c-4db8-9407-d79e78520323 0xc00280ef37 0xc00280ef38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7hc2c {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7hc2c,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-7hc2c true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00280efb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00280efd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:24:04 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:24:29 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:24:29 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:24:03 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2019-12-31 13:24:04 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-31 13:24:28 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://e48eee6a526c0cb8783e1ca48f7df5a649a2708c787530db269492477a911983}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 31 13:24:46.588: INFO: Pod "nginx-deployment-7b8c6f4498-dhhkn" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-dhhkn,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9044,SelfLink:/api/v1/namespaces/deployment-9044/pods/nginx-deployment-7b8c6f4498-dhhkn,UID:901fc2d6-7c0a-458b-beb6-a3f5e364c406,ResourceVersion:18769964,Generation:0,CreationTimestamp:2019-12-31 13:24:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 802b701c-865c-4db8-9407-d79e78520323 0xc00280f0b7 0xc00280f0b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7hc2c {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7hc2c,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-7hc2c true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00280f120} {node.kubernetes.io/unreachable Exists NoExecute 0xc00280f140}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:24:04 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:24:31 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:24:31 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:24:04 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.4,StartTime:2019-12-31 13:24:04 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-31 13:24:27 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://1cbadfac03fdf1eba1a39fc1905b1b14e5aec3a18f455495cbd9cc0a2dfe9cd6}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 31 13:24:46.589: INFO: Pod "nginx-deployment-7b8c6f4498-fwfsl" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-fwfsl,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9044,SelfLink:/api/v1/namespaces/deployment-9044/pods/nginx-deployment-7b8c6f4498-fwfsl,UID:3267cd7a-19fa-4b55-83d3-28130a206dda,ResourceVersion:18770103,Generation:0,CreationTimestamp:2019-12-31 13:24:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 802b701c-865c-4db8-9407-d79e78520323 0xc00280f217 0xc00280f218}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7hc2c {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7hc2c,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-7hc2c true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00280f290} {node.kubernetes.io/unreachable Exists NoExecute 0xc00280f2b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:24:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:24:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:24:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:24:39 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2019-12-31 13:24:41 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 31 13:24:46.589: INFO: Pod "nginx-deployment-7b8c6f4498-hhwm8" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-hhwm8,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9044,SelfLink:/api/v1/namespaces/deployment-9044/pods/nginx-deployment-7b8c6f4498-hhwm8,UID:d1859550-7a88-4812-a3b1-22ba4da4f960,ResourceVersion:18770083,Generation:0,CreationTimestamp:2019-12-31 13:24:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 802b701c-865c-4db8-9407-d79e78520323 0xc00280f377 0xc00280f378}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7hc2c {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7hc2c,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-7hc2c true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00280f3e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00280f400}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:24:40 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 31 13:24:46.589: INFO: Pod "nginx-deployment-7b8c6f4498-j2pmc" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-j2pmc,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9044,SelfLink:/api/v1/namespaces/deployment-9044/pods/nginx-deployment-7b8c6f4498-j2pmc,UID:c2ebad6c-24e1-4ecf-8d0e-807040a6b0b4,ResourceVersion:18770104,Generation:0,CreationTimestamp:2019-12-31 13:24:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 802b701c-865c-4db8-9407-d79e78520323 0xc00280f497 
0xc00280f498}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7hc2c {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7hc2c,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-7hc2c true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00280f500} {node.kubernetes.io/unreachable Exists NoExecute 0xc00280f520}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:24:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:24:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:24:41 +0000 UTC 
ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:24:39 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2019-12-31 13:24:41 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 31 13:24:46.589: INFO: Pod "nginx-deployment-7b8c6f4498-mjshn" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-mjshn,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9044,SelfLink:/api/v1/namespaces/deployment-9044/pods/nginx-deployment-7b8c6f4498-mjshn,UID:c57cc190-30e2-45a0-8777-4492998324ba,ResourceVersion:18770081,Generation:0,CreationTimestamp:2019-12-31 13:24:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 802b701c-865c-4db8-9407-d79e78520323 0xc00280f607 0xc00280f608}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7hc2c {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7hc2c,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-7hc2c true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00280f670} {node.kubernetes.io/unreachable Exists NoExecute 0xc00280f690}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:24:40 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 31 13:24:46.590: INFO: Pod "nginx-deployment-7b8c6f4498-nk7cx" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-nk7cx,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9044,SelfLink:/api/v1/namespaces/deployment-9044/pods/nginx-deployment-7b8c6f4498-nk7cx,UID:ae425eb5-e454-4d39-9999-7d0b147897d7,ResourceVersion:18770080,Generation:0,CreationTimestamp:2019-12-31 13:24:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 802b701c-865c-4db8-9407-d79e78520323 0xc00280f717 
0xc00280f718}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7hc2c {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7hc2c,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-7hc2c true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00280f790} {node.kubernetes.io/unreachable Exists NoExecute 0xc00280f7b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:24:40 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 31 13:24:46.590: INFO: Pod "nginx-deployment-7b8c6f4498-p7kwn" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-p7kwn,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9044,SelfLink:/api/v1/namespaces/deployment-9044/pods/nginx-deployment-7b8c6f4498-p7kwn,UID:56613bfd-49f8-46c4-97bd-d7c9f05c29e0,ResourceVersion:18769958,Generation:0,CreationTimestamp:2019-12-31 13:24:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 802b701c-865c-4db8-9407-d79e78520323 0xc00280f847 0xc00280f848}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7hc2c {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7hc2c,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-7hc2c true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00280f8b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00280f8d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:24:04 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:24:31 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:24:31 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:24:03 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.6,StartTime:2019-12-31 13:24:04 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-31 13:24:29 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://ba2ef2659257e41afe9727711f6902d2923e14e7a84c68540230c3f9d62f70f7}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 31 13:24:46.591: INFO: Pod "nginx-deployment-7b8c6f4498-pnjwg" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-pnjwg,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9044,SelfLink:/api/v1/namespaces/deployment-9044/pods/nginx-deployment-7b8c6f4498-pnjwg,UID:d148c122-303d-4da8-85a6-de3ca96fa351,ResourceVersion:18769952,Generation:0,CreationTimestamp:2019-12-31 13:24:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 802b701c-865c-4db8-9407-d79e78520323 0xc00280f9a7 0xc00280f9a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7hc2c {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7hc2c,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-7hc2c true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00280fa20} {node.kubernetes.io/unreachable Exists NoExecute 0xc00280fa40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:24:04 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:24:30 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:24:30 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:24:03 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.3,StartTime:2019-12-31 13:24:04 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-31 13:24:29 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://8373acbafc0f81a7951ae54fcbf0a36dee9919d02393261f1d5d2f583029013f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 31 13:24:46.591: INFO: Pod "nginx-deployment-7b8c6f4498-pqn4s" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-pqn4s,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9044,SelfLink:/api/v1/namespaces/deployment-9044/pods/nginx-deployment-7b8c6f4498-pqn4s,UID:d588ac64-9e4e-4a4d-ac71-9bf3b78b71c3,ResourceVersion:18770082,Generation:0,CreationTimestamp:2019-12-31 13:24:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 802b701c-865c-4db8-9407-d79e78520323 0xc00280fb17 0xc00280fb18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7hc2c {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7hc2c,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-7hc2c true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00280fb90} {node.kubernetes.io/unreachable Exists NoExecute 0xc00280fbb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:24:40 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 31 13:24:46.591: INFO: Pod "nginx-deployment-7b8c6f4498-xmnbf" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-xmnbf,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9044,SelfLink:/api/v1/namespaces/deployment-9044/pods/nginx-deployment-7b8c6f4498-xmnbf,UID:85bbd10f-78ca-47e3-bea3-000c418dbb8d,ResourceVersion:18770084,Generation:0,CreationTimestamp:2019-12-31 13:24:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 802b701c-865c-4db8-9407-d79e78520323 0xc00280fc37 0xc00280fc38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7hc2c {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-7hc2c,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-7hc2c true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00280fca0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00280fcc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:24:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:24:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:24:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:24:38 
+0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2019-12-31 13:24:40 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Dec 31 13:24:46.592: INFO: Pod "nginx-deployment-7b8c6f4498-z9tg2" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-z9tg2,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9044,SelfLink:/api/v1/namespaces/deployment-9044/pods/nginx-deployment-7b8c6f4498-z9tg2,UID:79ce27cb-1fa7-4012-a9f6-b9e9d6f932dd,ResourceVersion:18769955,Generation:0,CreationTimestamp:2019-12-31 13:24:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 802b701c-865c-4db8-9407-d79e78520323 0xc00280fd87 0xc00280fd88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7hc2c {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7hc2c,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-7hc2c true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00280fe00} {node.kubernetes.io/unreachable Exists NoExecute 0xc00280fe20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:24:03 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:24:30 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:24:30 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:24:03 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.5,StartTime:2019-12-31 13:24:03 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-12-31 13:24:29 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://23a80267b3d1c55adb62fa0142b75244a99f4b8c1c3378c95735b73111fa0f12}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 31 13:24:46.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9044" for this suite. 
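For readers following the Pod dumps above: they all come from one Deployment in namespace `deployment-9044` whose template can be inferred from the repeated spec fields (image `docker.io/library/nginx:1.14-alpine`, label `name: nginx`, `TerminationGracePeriodSeconds:*0`). A hedged reconstruction of that Deployment is sketched below; the replica count is an assumption, since the proportional-scaling test changes it during the run.

```yaml
# Hedged reconstruction of the Deployment exercised by this test.
# The replica count is assumed -- the test scales the Deployment mid-rollout.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: deployment-9044
spec:
  replicas: 13              # assumption: not stated directly in the log
  selector:
    matchLabels:
      name: nginx
  template:
    metadata:
      labels:
        name: nginx
    spec:
      terminationGracePeriodSeconds: 0   # matches TerminationGracePeriodSeconds:*0 in the dumps
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
```

The mix of `available` and `not available` pods in the dumps is exactly what proportional scaling produces: during a rollout, replicas are distributed across the old and new ReplicaSets in proportion to their sizes, so some pods are still `Pending`/`ContainerCreating` while others are `Running`.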
Dec 31 13:26:24.494: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 31 13:26:24.633: INFO: namespace deployment-9044 deletion completed in 1m35.040395055s • [SLOW TEST:141.007 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 31 13:26:24.634: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Dec 31 13:26:47.070: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Dec 31 13:26:47.083: INFO: Pod pod-with-prestop-exec-hook still exists Dec 31 13:26:49.083: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Dec 31 13:26:49.096: INFO: Pod pod-with-prestop-exec-hook still exists Dec 31 13:26:51.083: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Dec 31 13:26:51.095: INFO: Pod pod-with-prestop-exec-hook still exists Dec 31 13:26:53.083: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Dec 31 13:26:53.091: INFO: Pod pod-with-prestop-exec-hook still exists Dec 31 13:26:55.083: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Dec 31 13:26:55.090: INFO: Pod pod-with-prestop-exec-hook still exists Dec 31 13:26:57.083: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Dec 31 13:26:57.090: INFO: Pod pod-with-prestop-exec-hook still exists Dec 31 13:26:59.083: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Dec 31 13:26:59.170: INFO: Pod pod-with-prestop-exec-hook still exists Dec 31 13:27:01.083: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Dec 31 13:27:01.088: INFO: Pod pod-with-prestop-exec-hook still exists Dec 31 13:27:03.083: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Dec 31 13:27:03.089: INFO: Pod pod-with-prestop-exec-hook still exists Dec 31 13:27:05.083: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Dec 31 13:27:05.099: INFO: Pod pod-with-prestop-exec-hook still exists Dec 31 13:27:07.083: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Dec 31 13:27:07.092: INFO: Pod pod-with-prestop-exec-hook still exists Dec 31 13:27:09.083: INFO: Waiting for pod 
pod-with-prestop-exec-hook to disappear Dec 31 13:27:09.090: INFO: Pod pod-with-prestop-exec-hook still exists Dec 31 13:27:11.083: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Dec 31 13:27:11.090: INFO: Pod pod-with-prestop-exec-hook still exists Dec 31 13:27:13.083: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Dec 31 13:27:13.093: INFO: Pod pod-with-prestop-exec-hook still exists Dec 31 13:27:15.083: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Dec 31 13:27:15.096: INFO: Pod pod-with-prestop-exec-hook still exists Dec 31 13:27:17.083: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Dec 31 13:27:17.090: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 31 13:27:17.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-1816" for this suite. 
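The polling loop above waits for `pod-with-prestop-exec-hook` to be deleted so the test can verify the preStop hook fired. A minimal sketch of such a pod follows; the image and the hook command are assumptions (the real e2e fixture notifies a separate handler pod created earlier in the test), shown only to illustrate the `lifecycle.preStop.exec` shape.

```yaml
# Hedged sketch of a pod with a preStop exec hook, matching the
# "pod-with-prestop-exec-hook" naming in the log above.
# Image and hook command are assumptions, not taken from the e2e source.
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-exec-hook
spec:
  terminationGracePeriodSeconds: 30
  containers:
  - name: pod-with-prestop-exec-hook
    image: busybox                      # assumed image
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      preStop:
        exec:
          # Assumed command: ping the hook-handler pod before termination.
          # HANDLER_POD_IP is a hypothetical placeholder.
          command: ["sh", "-c", "wget -qO- http://HANDLER_POD_IP:8080/echo?msg=prestop"]
```

When the pod is deleted, the kubelet runs the exec hook inside the container before sending SIGTERM, which is why the pod "still exists" for several poll intervals above: deletion is delayed until the hook completes (or the grace period expires).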
Dec 31 13:27:39.156: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 31 13:27:39.243: INFO: namespace container-lifecycle-hook-1816 deletion completed in 22.112394756s • [SLOW TEST:74.609 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 31 13:27:39.244: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller Dec 31 13:27:39.325: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7109' Dec 31 13:27:41.519: INFO: stderr: "" Dec 31 13:27:41.519: INFO: stdout: 
"replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Dec 31 13:27:41.519: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7109' Dec 31 13:27:41.775: INFO: stderr: "" Dec 31 13:27:41.776: INFO: stdout: "update-demo-nautilus-7s845 update-demo-nautilus-sf6qc " Dec 31 13:27:41.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7s845 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7109' Dec 31 13:27:42.022: INFO: stderr: "" Dec 31 13:27:42.022: INFO: stdout: "" Dec 31 13:27:42.022: INFO: update-demo-nautilus-7s845 is created but not running Dec 31 13:27:47.022: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7109' Dec 31 13:27:47.774: INFO: stderr: "" Dec 31 13:27:47.774: INFO: stdout: "update-demo-nautilus-7s845 update-demo-nautilus-sf6qc " Dec 31 13:27:47.774: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7s845 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7109' Dec 31 13:27:48.293: INFO: stderr: "" Dec 31 13:27:48.293: INFO: stdout: "" Dec 31 13:27:48.293: INFO: update-demo-nautilus-7s845 is created but not running Dec 31 13:27:53.293: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7109' Dec 31 13:27:53.488: INFO: stderr: "" Dec 31 13:27:53.488: INFO: stdout: "update-demo-nautilus-7s845 update-demo-nautilus-sf6qc " Dec 31 13:27:53.488: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7s845 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7109' Dec 31 13:27:53.614: INFO: stderr: "" Dec 31 13:27:53.614: INFO: stdout: "true" Dec 31 13:27:53.615: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7s845 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7109' Dec 31 13:27:53.716: INFO: stderr: "" Dec 31 13:27:53.716: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Dec 31 13:27:53.716: INFO: validating pod update-demo-nautilus-7s845 Dec 31 13:27:53.758: INFO: got data: { "image": "nautilus.jpg" } Dec 31 13:27:53.758: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Dec 31 13:27:53.758: INFO: update-demo-nautilus-7s845 is verified up and running Dec 31 13:27:53.758: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sf6qc -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7109' Dec 31 13:27:53.899: INFO: stderr: "" Dec 31 13:27:53.899: INFO: stdout: "true" Dec 31 13:27:53.899: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sf6qc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7109' Dec 31 13:27:54.086: INFO: stderr: "" Dec 31 13:27:54.086: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Dec 31 13:27:54.087: INFO: validating pod update-demo-nautilus-sf6qc Dec 31 13:27:54.099: INFO: got data: { "image": "nautilus.jpg" } Dec 31 13:27:54.099: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Dec 31 13:27:54.099: INFO: update-demo-nautilus-sf6qc is verified up and running STEP: scaling down the replication controller Dec 31 13:27:54.103: INFO: scanned /root for discovery docs: Dec 31 13:27:54.103: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-7109' Dec 31 13:27:55.259: INFO: stderr: "" Dec 31 13:27:55.259: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Dec 31 13:27:55.260: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7109' Dec 31 13:27:55.423: INFO: stderr: "" Dec 31 13:27:55.423: INFO: stdout: "update-demo-nautilus-7s845 update-demo-nautilus-sf6qc " STEP: Replicas for name=update-demo: expected=1 actual=2 Dec 31 13:28:00.424: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7109' Dec 31 13:28:00.594: INFO: stderr: "" Dec 31 13:28:00.594: INFO: stdout: "update-demo-nautilus-sf6qc " Dec 31 13:28:00.595: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sf6qc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7109' Dec 31 13:28:00.714: INFO: stderr: "" Dec 31 13:28:00.714: INFO: stdout: "true" Dec 31 13:28:00.714: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sf6qc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7109' Dec 31 13:28:00.796: INFO: stderr: "" Dec 31 13:28:00.796: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Dec 31 13:28:00.796: INFO: validating pod update-demo-nautilus-sf6qc Dec 31 13:28:00.800: INFO: got data: { "image": "nautilus.jpg" } Dec 31 13:28:00.800: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Dec 31 13:28:00.800: INFO: update-demo-nautilus-sf6qc is verified up and running STEP: scaling up the replication controller Dec 31 13:28:00.801: INFO: scanned /root for discovery docs: Dec 31 13:28:00.801: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-7109' Dec 31 13:28:01.942: INFO: stderr: "" Dec 31 13:28:01.943: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Dec 31 13:28:01.943: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7109' Dec 31 13:28:02.241: INFO: stderr: "" Dec 31 13:28:02.241: INFO: stdout: "update-demo-nautilus-7sqzq update-demo-nautilus-sf6qc " Dec 31 13:28:02.242: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7sqzq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7109' Dec 31 13:28:02.507: INFO: stderr: "" Dec 31 13:28:02.507: INFO: stdout: "" Dec 31 13:28:02.507: INFO: update-demo-nautilus-7sqzq is created but not running Dec 31 13:28:07.508: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7109' Dec 31 13:28:07.704: INFO: stderr: "" Dec 31 13:28:07.704: INFO: stdout: "update-demo-nautilus-7sqzq update-demo-nautilus-sf6qc " Dec 31 13:28:07.705: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7sqzq -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7109' Dec 31 13:28:07.866: INFO: stderr: "" Dec 31 13:28:07.866: INFO: stdout: "" Dec 31 13:28:07.866: INFO: update-demo-nautilus-7sqzq is created but not running Dec 31 13:28:12.866: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7109' Dec 31 13:28:12.990: INFO: stderr: "" Dec 31 13:28:12.991: INFO: stdout: "update-demo-nautilus-7sqzq update-demo-nautilus-sf6qc " Dec 31 13:28:12.991: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7sqzq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7109' Dec 31 13:28:13.134: INFO: stderr: "" Dec 31 13:28:13.134: INFO: stdout: "true" Dec 31 13:28:13.134: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7sqzq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7109' Dec 31 13:28:13.292: INFO: stderr: "" Dec 31 13:28:13.292: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Dec 31 13:28:13.292: INFO: validating pod update-demo-nautilus-7sqzq Dec 31 13:28:13.307: INFO: got data: { "image": "nautilus.jpg" } Dec 31 13:28:13.307: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Dec 31 13:28:13.307: INFO: update-demo-nautilus-7sqzq is verified up and running Dec 31 13:28:13.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sf6qc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7109' Dec 31 13:28:13.391: INFO: stderr: "" Dec 31 13:28:13.391: INFO: stdout: "true" Dec 31 13:28:13.391: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sf6qc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7109' Dec 31 13:28:13.519: INFO: stderr: "" Dec 31 13:28:13.519: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Dec 31 13:28:13.519: INFO: validating pod update-demo-nautilus-sf6qc Dec 31 13:28:13.526: INFO: got data: { "image": "nautilus.jpg" } Dec 31 13:28:13.526: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Dec 31 13:28:13.526: INFO: update-demo-nautilus-sf6qc is verified up and running STEP: using delete to clean up resources Dec 31 13:28:13.526: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7109' Dec 31 13:28:13.725: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Dec 31 13:28:13.725: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Dec 31 13:28:13.725: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-7109' Dec 31 13:28:13.848: INFO: stderr: "No resources found.\n" Dec 31 13:28:13.848: INFO: stdout: "" Dec 31 13:28:13.849: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-7109 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Dec 31 13:28:14.008: INFO: stderr: "" Dec 31 13:28:14.008: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 31 13:28:14.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7109" for this suite. 
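Editor's note: the readiness check repeated throughout the scale test above is a single kubectl go-template that prints "true" only when the update-demo container reports a running state. Pulled out of the log for readability (this is a command sketch — it needs a live cluster and the kubectl-7109 namespace from this run):

```shell
kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sf6qc \
  --namespace=kubectl-7109 \
  -o template --template='{{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
```

The test treats empty stdout as "created but not running" and retries; stdout "true" moves it on to the image and data checks.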
Dec 31 13:28:36.075: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 31 13:28:36.215: INFO: namespace kubectl-7109 deletion completed in 22.199078495s • [SLOW TEST:56.971 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 31 13:28:36.216: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating pod Dec 31 13:28:44.564: INFO: Pod pod-hostip-e858ec62-844e-41d4-ac8d-0756bda7feb1 has hostIP: 10.96.3.65 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 31 13:28:44.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3749" for this suite. 
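Editor's note: the host IP assertion above reduces to reading `.status.hostIP` once the pod is scheduled. An equivalent one-liner (cluster required; the pod name is the generated one from this run):

```shell
kubectl --kubeconfig=/root/.kube/config -n pods-3749 \
  get pod pod-hostip-e858ec62-844e-41d4-ac8d-0756bda7feb1 \
  -o jsonpath='{.status.hostIP}'
# the test passes once this prints a non-empty node IP (10.96.3.65 in this run)
```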
Dec 31 13:29:06.605: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 31 13:29:06.723: INFO: namespace pods-3749 deletion completed in 22.149969868s • [SLOW TEST:30.507 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 31 13:29:06.724: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-9383.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9383.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Dec 31 13:29:18.975: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod dns-9383/dns-test-266cd0f3-1a32-4044-90cf-c3f5bb82c1ea: the server could not find the requested resource (get pods dns-test-266cd0f3-1a32-4044-90cf-c3f5bb82c1ea) Dec 31 13:29:18.984: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-9383/dns-test-266cd0f3-1a32-4044-90cf-c3f5bb82c1ea: the server could not find the requested resource (get pods dns-test-266cd0f3-1a32-4044-90cf-c3f5bb82c1ea) Dec 31 13:29:18.989: INFO: Unable to read wheezy_udp@PodARecord from pod dns-9383/dns-test-266cd0f3-1a32-4044-90cf-c3f5bb82c1ea: the server could not find the requested resource (get pods 
dns-test-266cd0f3-1a32-4044-90cf-c3f5bb82c1ea) Dec 31 13:29:18.992: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-9383/dns-test-266cd0f3-1a32-4044-90cf-c3f5bb82c1ea: the server could not find the requested resource (get pods dns-test-266cd0f3-1a32-4044-90cf-c3f5bb82c1ea) Dec 31 13:29:18.996: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod dns-9383/dns-test-266cd0f3-1a32-4044-90cf-c3f5bb82c1ea: the server could not find the requested resource (get pods dns-test-266cd0f3-1a32-4044-90cf-c3f5bb82c1ea) Dec 31 13:29:19.073: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod dns-9383/dns-test-266cd0f3-1a32-4044-90cf-c3f5bb82c1ea: the server could not find the requested resource (get pods dns-test-266cd0f3-1a32-4044-90cf-c3f5bb82c1ea) Dec 31 13:29:19.084: INFO: Unable to read jessie_udp@PodARecord from pod dns-9383/dns-test-266cd0f3-1a32-4044-90cf-c3f5bb82c1ea: the server could not find the requested resource (get pods dns-test-266cd0f3-1a32-4044-90cf-c3f5bb82c1ea) Dec 31 13:29:19.091: INFO: Unable to read jessie_tcp@PodARecord from pod dns-9383/dns-test-266cd0f3-1a32-4044-90cf-c3f5bb82c1ea: the server could not find the requested resource (get pods dns-test-266cd0f3-1a32-4044-90cf-c3f5bb82c1ea) Dec 31 13:29:19.091: INFO: Lookups using dns-9383/dns-test-266cd0f3-1a32-4044-90cf-c3f5bb82c1ea failed for: [wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord] Dec 31 13:29:24.166: INFO: DNS probes using dns-9383/dns-test-266cd0f3-1a32-4044-90cf-c3f5bb82c1ea succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 31 13:29:24.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be 
ready STEP: Destroying namespace "dns-9383" for this suite. Dec 31 13:29:30.351: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 31 13:29:30.529: INFO: namespace dns-9383 deletion completed in 6.218370307s • [SLOW TEST:23.806 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 31 13:29:30.530: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. 
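Editor's note: the `awk -F.` pipeline in the DNS probe scripts above converts a pod IP into the dashed pod A-record name that cluster DNS serves (the doubled `$$` in the log appears to be escaping applied before the script reaches the probe pod's shell). The name construction can be run in isolation with a stand-in IP — the real probe feeds it `hostname -i`:

```shell
# Build the pod A-record name the DNS probe queries:
# 10.44.0.1 -> 10-44-0-1.dns-9383.pod.cluster.local
ip="10.44.0.1"   # stand-in for the probe pod's IP
rec=$(echo "$ip" | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-9383.pod.cluster.local"}')
echo "$rec"
```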
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Dec 31 13:29:46.803: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Dec 31 13:29:46.833: INFO: Pod pod-with-poststart-http-hook still exists Dec 31 13:29:48.834: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Dec 31 13:29:48.849: INFO: Pod pod-with-poststart-http-hook still exists Dec 31 13:29:50.834: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Dec 31 13:29:50.852: INFO: Pod pod-with-poststart-http-hook still exists Dec 31 13:29:52.834: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Dec 31 13:29:52.856: INFO: Pod pod-with-poststart-http-hook still exists Dec 31 13:29:54.834: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Dec 31 13:29:54.850: INFO: Pod pod-with-poststart-http-hook still exists Dec 31 13:29:56.834: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Dec 31 13:29:56.850: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 31 13:29:56.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-7498" for this suite. 
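Editor's note: a pod with a postStart HTTP hook, as exercised in the test above, looks roughly like this. The image, port, path, and handler address are illustrative assumptions — the test points the hook at the handler pod it created in BeforeEach:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook   # name from the log
spec:
  containers:
  - name: pod-with-poststart-http-hook
    image: k8s.gcr.io/pause:3.1        # assumption: any long-running image works
    lifecycle:
      postStart:
        httpGet:
          path: /echo?msg=poststart    # hypothetical handler path
          port: 8080                   # hypothetical handler port
          host: 10.32.0.4              # hypothetical handler pod IP
```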
Dec 31 13:30:18.899: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 31 13:30:19.044: INFO: namespace container-lifecycle-hook-7498 deletion completed in 22.184586105s • [SLOW TEST:48.514 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 31 13:30:19.045: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Dec 31 13:30:19.181: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2db9a3c7-0da8-45f2-a56a-a6f4962119e3" in 
namespace "downward-api-7725" to be "success or failure" Dec 31 13:30:19.189: INFO: Pod "downwardapi-volume-2db9a3c7-0da8-45f2-a56a-a6f4962119e3": Phase="Pending", Reason="", readiness=false. Elapsed: 7.866257ms Dec 31 13:30:21.198: INFO: Pod "downwardapi-volume-2db9a3c7-0da8-45f2-a56a-a6f4962119e3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016177491s Dec 31 13:30:23.205: INFO: Pod "downwardapi-volume-2db9a3c7-0da8-45f2-a56a-a6f4962119e3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023372336s Dec 31 13:30:25.211: INFO: Pod "downwardapi-volume-2db9a3c7-0da8-45f2-a56a-a6f4962119e3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.029574281s Dec 31 13:30:27.220: INFO: Pod "downwardapi-volume-2db9a3c7-0da8-45f2-a56a-a6f4962119e3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.038612302s STEP: Saw pod success Dec 31 13:30:27.220: INFO: Pod "downwardapi-volume-2db9a3c7-0da8-45f2-a56a-a6f4962119e3" satisfied condition "success or failure" Dec 31 13:30:27.228: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-2db9a3c7-0da8-45f2-a56a-a6f4962119e3 container client-container: STEP: delete the pod Dec 31 13:30:27.323: INFO: Waiting for pod downwardapi-volume-2db9a3c7-0da8-45f2-a56a-a6f4962119e3 to disappear Dec 31 13:30:27.371: INFO: Pod downwardapi-volume-2db9a3c7-0da8-45f2-a56a-a6f4962119e3 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 31 13:30:27.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7725" for this suite. 
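Editor's note: the downward API volume under test projects a container's memory limit into a file; when the container sets no `resources.limits.memory`, the kubelet reports node allocatable memory instead, which is what this test asserts. A sketch of such a pod (pod name, image, and paths are illustrative; `client-container` is the container name from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container             # container name from the log
    image: busybox                     # assumption; the suite uses its own image
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    # note: no resources.limits.memory, so the file reports node allocatable
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
```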
Dec 31 13:30:33.485: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 31 13:30:33.625: INFO: namespace downward-api-7725 deletion completed in 6.247440529s • [SLOW TEST:14.581 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 31 13:30:33.627: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-2265 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-2265 STEP: Creating statefulset with conflicting port in namespace statefulset-2265 STEP: Waiting until 
pod test-pod will start running in namespace statefulset-2265 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-2265 Dec 31 13:30:45.914: INFO: Observed stateful pod in namespace: statefulset-2265, name: ss-0, uid: b3741115-2ff1-41d8-834a-de9bc415523f, status phase: Pending. Waiting for statefulset controller to delete. Dec 31 13:30:46.494: INFO: Observed stateful pod in namespace: statefulset-2265, name: ss-0, uid: b3741115-2ff1-41d8-834a-de9bc415523f, status phase: Failed. Waiting for statefulset controller to delete. Dec 31 13:30:46.515: INFO: Observed stateful pod in namespace: statefulset-2265, name: ss-0, uid: b3741115-2ff1-41d8-834a-de9bc415523f, status phase: Failed. Waiting for statefulset controller to delete. Dec 31 13:30:46.543: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-2265 STEP: Removing pod with conflicting port in namespace statefulset-2265 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-2265 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Dec 31 13:30:56.867: INFO: Deleting all statefulset in ns statefulset-2265 Dec 31 13:30:56.881: INFO: Scaling statefulset ss to 0 Dec 31 13:31:06.929: INFO: Waiting for statefulset status.replicas updated to 0 Dec 31 13:31:06.934: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 31 13:31:06.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2265" for this suite. 
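Editor's note: the AfterEach teardown above — scale to 0, wait for `status.replicas` to drain, then delete — corresponds roughly to this kubectl sequence (cluster required; namespace and name are from this run):

```shell
kubectl -n statefulset-2265 scale statefulset ss --replicas=0
# poll until no replicas remain (the field may read 0 or empty once drained)
until [ "$(kubectl -n statefulset-2265 get statefulset ss \
      -o jsonpath='{.status.replicas}')" = "0" ]; do
  sleep 2
done
kubectl -n statefulset-2265 delete statefulset ss
```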
Dec 31 13:31:13.008: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 31 13:31:13.103: INFO: namespace statefulset-2265 deletion completed in 6.139476007s • [SLOW TEST:39.477 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 31 13:31:13.104: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's args Dec 31 13:31:13.179: INFO: Waiting up to 5m0s for pod "var-expansion-de549b54-1bde-4294-8c29-4f252389ee47" in namespace "var-expansion-4274" to be "success or failure" Dec 31 13:31:13.196: INFO: Pod "var-expansion-de549b54-1bde-4294-8c29-4f252389ee47": Phase="Pending", Reason="", readiness=false. 
Elapsed: 16.525308ms Dec 31 13:31:15.206: INFO: Pod "var-expansion-de549b54-1bde-4294-8c29-4f252389ee47": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026589022s Dec 31 13:31:17.212: INFO: Pod "var-expansion-de549b54-1bde-4294-8c29-4f252389ee47": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032153401s Dec 31 13:31:19.218: INFO: Pod "var-expansion-de549b54-1bde-4294-8c29-4f252389ee47": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038933665s Dec 31 13:31:21.228: INFO: Pod "var-expansion-de549b54-1bde-4294-8c29-4f252389ee47": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.048917854s STEP: Saw pod success Dec 31 13:31:21.228: INFO: Pod "var-expansion-de549b54-1bde-4294-8c29-4f252389ee47" satisfied condition "success or failure" Dec 31 13:31:21.232: INFO: Trying to get logs from node iruya-node pod var-expansion-de549b54-1bde-4294-8c29-4f252389ee47 container dapi-container: STEP: delete the pod Dec 31 13:31:21.279: INFO: Waiting for pod var-expansion-de549b54-1bde-4294-8c29-4f252389ee47 to disappear Dec 31 13:31:21.326: INFO: Pod var-expansion-de549b54-1bde-4294-8c29-4f252389ee47 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 31 13:31:21.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4274" for this suite. 
Dec 31 13:31:27.389: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:31:27.554: INFO: namespace var-expansion-4274 deletion completed in 6.217810684s

• [SLOW TEST:14.451 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should allow substituting values in a container's args [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 13:31:27.555: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 13:32:27.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1180" for this suite.
Dec 31 13:32:49.788: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:32:49.969: INFO: namespace container-probe-1180 deletion completed in 22.248705567s

• [SLOW TEST:82.415 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 13:32:49.970: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 31 13:32:50.272: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"80c5f593-c5a5-409c-914f-bbe4c1673629", Controller:(*bool)(0xc002a6c8fa), BlockOwnerDeletion:(*bool)(0xc002a6c8fb)}}
Dec 31 13:32:50.348: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"bf3d3c41-c5c8-4b91-8fcd-ccd2a7a38e33", Controller:(*bool)(0xc002659082), BlockOwnerDeletion:(*bool)(0xc002659083)}}
Dec 31 13:32:50.362: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"5c00764a-e53b-4c4d-84f9-b191d1454fd4", Controller:(*bool)(0xc002a6cb02), BlockOwnerDeletion:(*bool)(0xc002a6cb03)}}
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 13:32:55.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3358" for this suite.
Dec 31 13:33:01.500: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:33:01.634: INFO: namespace gc-3358 deletion completed in 6.214746746s

• [SLOW TEST:11.664 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should not be blocked by dependency circle [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 13:33:01.635: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 13:33:09.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9133" for this suite.
Dec 31 13:33:15.801: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:33:15.954: INFO: namespace kubelet-test-9133 deletion completed in 6.191168487s

• [SLOW TEST:14.319 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
when scheduling a busybox command that always fails in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
should have an terminated reason [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 13:33:15.954: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl logs
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292
STEP: creating an rc
Dec 31 13:33:16.047: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7549'
Dec 31 13:33:16.513: INFO: stderr: ""
Dec 31 13:33:16.513: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Waiting for Redis master to start.
Dec 31 13:33:17.523: INFO: Selector matched 1 pods for map[app:redis]
Dec 31 13:33:17.523: INFO: Found 0 / 1
Dec 31 13:33:18.530: INFO: Selector matched 1 pods for map[app:redis]
Dec 31 13:33:18.530: INFO: Found 0 / 1
Dec 31 13:33:19.524: INFO: Selector matched 1 pods for map[app:redis]
Dec 31 13:33:19.524: INFO: Found 0 / 1
Dec 31 13:33:20.529: INFO: Selector matched 1 pods for map[app:redis]
Dec 31 13:33:20.529: INFO: Found 0 / 1
Dec 31 13:33:21.535: INFO: Selector matched 1 pods for map[app:redis]
Dec 31 13:33:21.535: INFO: Found 0 / 1
Dec 31 13:33:22.525: INFO: Selector matched 1 pods for map[app:redis]
Dec 31 13:33:22.525: INFO: Found 0 / 1
Dec 31 13:33:23.522: INFO: Selector matched 1 pods for map[app:redis]
Dec 31 13:33:23.522: INFO: Found 1 / 1
Dec 31 13:33:23.522: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Dec 31 13:33:23.531: INFO: Selector matched 1 pods for map[app:redis]
Dec 31 13:33:23.531: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
STEP: checking for a matching strings
Dec 31 13:33:23.531: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-5xtgv redis-master --namespace=kubectl-7549'
Dec 31 13:33:23.778: INFO: stderr: ""
Dec 31 13:33:23.778: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 31 Dec 13:33:23.110 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 31 Dec 13:33:23.110 # Server started, Redis version 3.2.12\n1:M 31 Dec 13:33:23.110 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 31 Dec 13:33:23.110 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Dec 31 13:33:23.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-5xtgv redis-master --namespace=kubectl-7549 --tail=1'
Dec 31 13:33:23.952: INFO: stderr: ""
Dec 31 13:33:23.952: INFO: stdout: "1:M 31 Dec 13:33:23.110 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Dec 31 13:33:23.952: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-5xtgv redis-master --namespace=kubectl-7549 --limit-bytes=1'
Dec 31 13:33:24.159: INFO: stderr: ""
Dec 31 13:33:24.159: INFO: stdout: " "
STEP: exposing timestamps
Dec 31 13:33:24.159: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-5xtgv redis-master --namespace=kubectl-7549 --tail=1 --timestamps'
Dec 31 13:33:24.278: INFO: stderr: ""
Dec 31 13:33:24.278: INFO: stdout: "2019-12-31T13:33:23.111694628Z 1:M 31 Dec 13:33:23.110 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Dec 31 13:33:26.780: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-5xtgv redis-master --namespace=kubectl-7549 --since=1s'
Dec 31 13:33:27.190: INFO: stderr: ""
Dec 31 13:33:27.190: INFO: stdout: ""
Dec 31 13:33:27.190: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-5xtgv redis-master --namespace=kubectl-7549 --since=24h'
Dec 31 13:33:27.380: INFO: stderr: ""
Dec 31 13:33:27.381: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 31 Dec 13:33:23.110 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 31 Dec 13:33:23.110 # Server started, Redis version 3.2.12\n1:M 31 Dec 13:33:23.110 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 31 Dec 13:33:23.110 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
STEP: using delete to clean up resources
Dec 31 13:33:27.381: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7549'
Dec 31 13:33:27.486: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 31 13:33:27.486: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Dec 31 13:33:27.486: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-7549'
Dec 31 13:33:27.564: INFO: stderr: "No resources found.\n"
Dec 31 13:33:27.564: INFO: stdout: ""
Dec 31 13:33:27.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-7549 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 31 13:33:27.646: INFO: stderr: ""
Dec 31 13:33:27.646: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 13:33:27.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7549" for this suite.
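The `--tail` and `--limit-bytes` filters exercised in the kubectl logs test above have straightforward coreutils analogues (`tail -n`, `head -c`), which is a handy way to sanity-check what output shape to expect. A minimal local sketch under that assumption; the file path and sample lines are made up for illustration, and the real kubectl flags filter the container's log stream server-side rather than a local file:

```shell
#!/bin/sh
# Hypothetical sample log standing in for a container's log stream.
printf 'line one\nline two\nline three\n' > /tmp/sample.log

# Analogue of `kubectl logs --tail=1`: keep only the last line.
tail -n 1 /tmp/sample.log

# Analogue of `kubectl logs --limit-bytes=1`: keep only the first byte.
# (This is why the test above saw a one-character stdout of " " when the
# log happened to start with a space.)
head -c 1 /tmp/sample.log; echo
```

Running the sketch prints `line three` followed by `l`, the first byte of the sample file.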
Dec 31 13:33:49.675: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:33:49.846: INFO: namespace kubectl-7549 deletion completed in 22.196454095s

• [SLOW TEST:33.892 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl logs
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be able to retrieve and filter logs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 13:33:49.847: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 31 13:33:50.039: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bec0d196-8167-42be-9af1-c62aa09aad4c" in namespace "projected-2521" to be "success or failure"
Dec 31 13:33:50.043: INFO: Pod "downwardapi-volume-bec0d196-8167-42be-9af1-c62aa09aad4c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.997212ms
Dec 31 13:33:52.051: INFO: Pod "downwardapi-volume-bec0d196-8167-42be-9af1-c62aa09aad4c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011620966s
Dec 31 13:33:54.058: INFO: Pod "downwardapi-volume-bec0d196-8167-42be-9af1-c62aa09aad4c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018962784s
Dec 31 13:33:56.067: INFO: Pod "downwardapi-volume-bec0d196-8167-42be-9af1-c62aa09aad4c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.027878672s
Dec 31 13:33:58.075: INFO: Pod "downwardapi-volume-bec0d196-8167-42be-9af1-c62aa09aad4c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.036172713s
Dec 31 13:34:00.080: INFO: Pod "downwardapi-volume-bec0d196-8167-42be-9af1-c62aa09aad4c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.040614249s
STEP: Saw pod success
Dec 31 13:34:00.080: INFO: Pod "downwardapi-volume-bec0d196-8167-42be-9af1-c62aa09aad4c" satisfied condition "success or failure"
Dec 31 13:34:00.082: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-bec0d196-8167-42be-9af1-c62aa09aad4c container client-container:
STEP: delete the pod
Dec 31 13:34:00.246: INFO: Waiting for pod downwardapi-volume-bec0d196-8167-42be-9af1-c62aa09aad4c to disappear
Dec 31 13:34:00.251: INFO: Pod downwardapi-volume-bec0d196-8167-42be-9af1-c62aa09aad4c no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 13:34:00.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2521" for this suite.
Dec 31 13:34:06.291: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 31 13:34:06.428: INFO: namespace projected-2521 deletion completed in 6.171170367s • [SLOW TEST:16.581 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 31 13:34:06.429: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Dec 31 13:34:06.581: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-9820' Dec 31 13:34:06.752: INFO: 
stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Dec 31 13:34:06.752: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617 Dec 31 13:34:06.812: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-9820' Dec 31 13:34:06.962: INFO: stderr: "" Dec 31 13:34:06.962: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 31 13:34:06.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9820" for this suite. Dec 31 13:34:29.034: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 31 13:34:29.201: INFO: namespace kubectl-9820 deletion completed in 22.231960934s • [SLOW TEST:22.772 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 31 13:34:29.201: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-073514d1-bf1d-40b8-a95f-c1e6134fd616 STEP: Creating a pod to test consume configMaps Dec 31 13:34:29.326: INFO: Waiting up to 5m0s for pod "pod-configmaps-76c07dec-7d94-44b4-b9e2-c74e5eadc764" in namespace "configmap-1128" to be "success or failure" Dec 31 13:34:29.342: INFO: Pod "pod-configmaps-76c07dec-7d94-44b4-b9e2-c74e5eadc764": Phase="Pending", Reason="", readiness=false. Elapsed: 15.593153ms Dec 31 13:34:31.353: INFO: Pod "pod-configmaps-76c07dec-7d94-44b4-b9e2-c74e5eadc764": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026226812s Dec 31 13:34:33.361: INFO: Pod "pod-configmaps-76c07dec-7d94-44b4-b9e2-c74e5eadc764": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03430841s Dec 31 13:34:35.371: INFO: Pod "pod-configmaps-76c07dec-7d94-44b4-b9e2-c74e5eadc764": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045054848s Dec 31 13:34:37.381: INFO: Pod "pod-configmaps-76c07dec-7d94-44b4-b9e2-c74e5eadc764": Phase="Pending", Reason="", readiness=false. Elapsed: 8.054461683s Dec 31 13:34:39.389: INFO: Pod "pod-configmaps-76c07dec-7d94-44b4-b9e2-c74e5eadc764": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.062623591s STEP: Saw pod success Dec 31 13:34:39.389: INFO: Pod "pod-configmaps-76c07dec-7d94-44b4-b9e2-c74e5eadc764" satisfied condition "success or failure" Dec 31 13:34:39.393: INFO: Trying to get logs from node iruya-node pod pod-configmaps-76c07dec-7d94-44b4-b9e2-c74e5eadc764 container configmap-volume-test: STEP: delete the pod Dec 31 13:34:39.593: INFO: Waiting for pod pod-configmaps-76c07dec-7d94-44b4-b9e2-c74e5eadc764 to disappear Dec 31 13:34:39.597: INFO: Pod pod-configmaps-76c07dec-7d94-44b4-b9e2-c74e5eadc764 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Dec 31 13:34:39.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1128" for this suite. Dec 31 13:34:45.630: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 31 13:34:45.791: INFO: namespace configmap-1128 deletion completed in 6.189142029s • [SLOW TEST:16.590 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Dec 31 13:34:45.792: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in 
namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 31 13:34:45.950: INFO: Waiting up to 5m0s for pod "downwardapi-volume-579b81be-a76b-41d6-b022-5e4fdb8ec620" in namespace "downward-api-7995" to be "success or failure"
Dec 31 13:34:45.958: INFO: Pod "downwardapi-volume-579b81be-a76b-41d6-b022-5e4fdb8ec620": Phase="Pending", Reason="", readiness=false. Elapsed: 7.399023ms
Dec 31 13:34:47.978: INFO: Pod "downwardapi-volume-579b81be-a76b-41d6-b022-5e4fdb8ec620": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02762117s
Dec 31 13:34:49.988: INFO: Pod "downwardapi-volume-579b81be-a76b-41d6-b022-5e4fdb8ec620": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037229625s
Dec 31 13:34:51.998: INFO: Pod "downwardapi-volume-579b81be-a76b-41d6-b022-5e4fdb8ec620": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047927026s
Dec 31 13:34:54.026: INFO: Pod "downwardapi-volume-579b81be-a76b-41d6-b022-5e4fdb8ec620": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.075116438s
STEP: Saw pod success
Dec 31 13:34:54.026: INFO: Pod "downwardapi-volume-579b81be-a76b-41d6-b022-5e4fdb8ec620" satisfied condition "success or failure"
Dec 31 13:34:54.038: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-579b81be-a76b-41d6-b022-5e4fdb8ec620 container client-container:
STEP: delete the pod
Dec 31 13:34:54.723: INFO: Waiting for pod downwardapi-volume-579b81be-a76b-41d6-b022-5e4fdb8ec620 to disappear
Dec 31 13:34:54.733: INFO: Pod downwardapi-volume-579b81be-a76b-41d6-b022-5e4fdb8ec620 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 13:34:54.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7995" for this suite.
Dec 31 13:35:00.847: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:35:00.982: INFO: namespace downward-api-7995 deletion completed in 6.240654614s
• [SLOW TEST:15.190 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 13:35:00.982: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Dec 31 13:35:01.184: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 13:35:15.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-9861" for this suite.
Dec 31 13:35:21.437: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:35:21.592: INFO: namespace init-container-9861 deletion completed in 6.193040017s
• [SLOW TEST:20.610 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 13:35:21.593: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Dec 31 13:35:21.696: INFO: Waiting up to 5m0s for pod "pod-975e31f7-1bc0-4862-a9e5-119b422ea94c" in namespace "emptydir-5620" to be "success or failure"
Dec 31 13:35:21.710: INFO: Pod "pod-975e31f7-1bc0-4862-a9e5-119b422ea94c": Phase="Pending", Reason="", readiness=false. Elapsed: 14.30943ms
Dec 31 13:35:23.735: INFO: Pod "pod-975e31f7-1bc0-4862-a9e5-119b422ea94c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039385913s
Dec 31 13:35:25.743: INFO: Pod "pod-975e31f7-1bc0-4862-a9e5-119b422ea94c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046947279s
Dec 31 13:35:27.761: INFO: Pod "pod-975e31f7-1bc0-4862-a9e5-119b422ea94c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064974901s
Dec 31 13:35:29.769: INFO: Pod "pod-975e31f7-1bc0-4862-a9e5-119b422ea94c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.072798168s
STEP: Saw pod success
Dec 31 13:35:29.769: INFO: Pod "pod-975e31f7-1bc0-4862-a9e5-119b422ea94c" satisfied condition "success or failure"
Dec 31 13:35:29.771: INFO: Trying to get logs from node iruya-node pod pod-975e31f7-1bc0-4862-a9e5-119b422ea94c container test-container:
STEP: delete the pod
Dec 31 13:35:29.820: INFO: Waiting for pod pod-975e31f7-1bc0-4862-a9e5-119b422ea94c to disappear
Dec 31 13:35:29.858: INFO: Pod pod-975e31f7-1bc0-4862-a9e5-119b422ea94c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 13:35:29.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5620" for this suite.
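For anyone reproducing the emptydir test above by hand, a pod of roughly the shape it creates could look like the following sketch. The name, image, and command here are illustrative assumptions, not taken from the log; the e2e suite uses its own test images and writes/verifies the file from inside the container.

```yaml
# Hypothetical reconstruction of the emptydir (root,0644,default) test pod.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-test        # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                # stand-in for the suite's test image
    command: ["sh", "-c", "echo content > /test-volume/file && chmod 0644 /test-volume/file && stat -c '%a' /test-volume/file"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                  # default medium = node disk; medium: Memory would use tmpfs
```

The pod runs to completion and the suite reads its logs, which is why the log above shows the "success or failure" condition followed by a log fetch and pod deletion.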
Dec 31 13:35:35.907: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:35:36.039: INFO: namespace emptydir-5620 deletion completed in 6.172846988s
• [SLOW TEST:14.447 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 13:35:36.040: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 31 13:35:36.109: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 13:35:37.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-5162" for this suite.
Dec 31 13:35:43.240: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:35:43.391: INFO: namespace custom-resource-definition-5162 deletion completed in 6.186402353s
• [SLOW TEST:7.352 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 13:35:43.392: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 31 13:35:43.532: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d3284156-554a-4823-9dae-a10764406d05" in namespace "downward-api-3912" to be "success or failure"
Dec 31 13:35:43.546: INFO: Pod "downwardapi-volume-d3284156-554a-4823-9dae-a10764406d05": Phase="Pending", Reason="", readiness=false. Elapsed: 14.421419ms
Dec 31 13:35:45.554: INFO: Pod "downwardapi-volume-d3284156-554a-4823-9dae-a10764406d05": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021812204s
Dec 31 13:35:47.564: INFO: Pod "downwardapi-volume-d3284156-554a-4823-9dae-a10764406d05": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032429209s
Dec 31 13:35:49.579: INFO: Pod "downwardapi-volume-d3284156-554a-4823-9dae-a10764406d05": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047349847s
Dec 31 13:35:51.587: INFO: Pod "downwardapi-volume-d3284156-554a-4823-9dae-a10764406d05": Phase="Pending", Reason="", readiness=false. Elapsed: 8.055259366s
Dec 31 13:35:53.600: INFO: Pod "downwardapi-volume-d3284156-554a-4823-9dae-a10764406d05": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.068138823s
STEP: Saw pod success
Dec 31 13:35:53.600: INFO: Pod "downwardapi-volume-d3284156-554a-4823-9dae-a10764406d05" satisfied condition "success or failure"
Dec 31 13:35:53.606: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-d3284156-554a-4823-9dae-a10764406d05 container client-container:
STEP: delete the pod
Dec 31 13:35:53.691: INFO: Waiting for pod downwardapi-volume-d3284156-554a-4823-9dae-a10764406d05 to disappear
Dec 31 13:35:53.699: INFO: Pod downwardapi-volume-d3284156-554a-4823-9dae-a10764406d05 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 13:35:53.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3912" for this suite.
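The cpu-request test above exposes the container's CPU request to the container as a file via a downwardAPI volume. A minimal sketch of such a pod follows; the names, image, request value, and file path are illustrative assumptions, not taken from the log.

```yaml
# Sketch: expose requests.cpu through a downwardAPI volume item.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-request   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                # stand-in for the suite's test image
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m                 # illustrative request
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
          divisor: 1m             # report the request in millicores
```

The suite then reads the container's logs (the "Trying to get logs" line above) and checks that the file content matches the declared request.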
Dec 31 13:35:59.725: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:35:59.814: INFO: namespace downward-api-3912 deletion completed in 6.107381826s
• [SLOW TEST:16.422 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 13:35:59.814: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 13:36:30.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-4943" for this suite.
Dec 31 13:36:36.329: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:36:36.540: INFO: namespace namespaces-4943 deletion completed in 6.258934945s
STEP: Destroying namespace "nsdeletetest-9063" for this suite.
Dec 31 13:36:36.543: INFO: Namespace nsdeletetest-9063 was already deleted
STEP: Destroying namespace "nsdeletetest-9146" for this suite.
Dec 31 13:36:42.571: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:36:42.698: INFO: namespace nsdeletetest-9146 deletion completed in 6.15437282s
• [SLOW TEST:42.884 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 13:36:42.698: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should contain environment variables for services [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 31 13:36:52.894: INFO: Waiting up to 5m0s for pod "client-envvars-b3a8f3d4-a999-4fe2-a6a6-b45e7f5499e0" in namespace "pods-8258" to be "success or failure"
Dec 31 13:36:52.971: INFO: Pod "client-envvars-b3a8f3d4-a999-4fe2-a6a6-b45e7f5499e0": Phase="Pending", Reason="", readiness=false. Elapsed: 76.869523ms
Dec 31 13:36:54.978: INFO: Pod "client-envvars-b3a8f3d4-a999-4fe2-a6a6-b45e7f5499e0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.083610744s
Dec 31 13:36:56.985: INFO: Pod "client-envvars-b3a8f3d4-a999-4fe2-a6a6-b45e7f5499e0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.090650239s
Dec 31 13:36:59.135: INFO: Pod "client-envvars-b3a8f3d4-a999-4fe2-a6a6-b45e7f5499e0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.240189115s
Dec 31 13:37:01.162: INFO: Pod "client-envvars-b3a8f3d4-a999-4fe2-a6a6-b45e7f5499e0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.267480061s
STEP: Saw pod success
Dec 31 13:37:01.162: INFO: Pod "client-envvars-b3a8f3d4-a999-4fe2-a6a6-b45e7f5499e0" satisfied condition "success or failure"
Dec 31 13:37:01.172: INFO: Trying to get logs from node iruya-node pod client-envvars-b3a8f3d4-a999-4fe2-a6a6-b45e7f5499e0 container env3cont:
STEP: delete the pod
Dec 31 13:37:01.237: INFO: Waiting for pod client-envvars-b3a8f3d4-a999-4fe2-a6a6-b45e7f5499e0 to disappear
Dec 31 13:37:01.245: INFO: Pod client-envvars-b3a8f3d4-a999-4fe2-a6a6-b45e7f5499e0 no longer exists
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 13:37:01.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8258" for this suite.
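The env-vars test above relies on the kubelet injecting Docker-links-style environment variables for every Service that exists when a pod is created. For a Service named redis-master exposing TCP port 6379, a pod created afterwards in the same namespace would see variables of roughly this shape (the cluster IP value is illustrative):

```
REDIS_MASTER_SERVICE_HOST=10.0.0.11
REDIS_MASTER_SERVICE_PORT=6379
REDIS_MASTER_PORT=tcp://10.0.0.11:6379
REDIS_MASTER_PORT_6379_TCP=tcp://10.0.0.11:6379
REDIS_MASTER_PORT_6379_TCP_PROTO=tcp
REDIS_MASTER_PORT_6379_TCP_PORT=6379
REDIS_MASTER_PORT_6379_TCP_ADDR=10.0.0.11
```

Note the ordering requirement this implies: the Service must exist before the pod starts, which is why the test creates its service first and only then the client pod whose logs it inspects.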
Dec 31 13:37:47.393: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:37:47.547: INFO: namespace pods-8258 deletion completed in 46.2941612s
• [SLOW TEST:64.849 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 13:37:47.548: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 31 13:37:48.147: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b8e6bbb0-9548-46c9-9dca-37d238f7ee19" in namespace "downward-api-6147" to be "success or failure"
Dec 31 13:37:48.256: INFO: Pod "downwardapi-volume-b8e6bbb0-9548-46c9-9dca-37d238f7ee19": Phase="Pending", Reason="", readiness=false. Elapsed: 108.815837ms
Dec 31 13:37:50.267: INFO: Pod "downwardapi-volume-b8e6bbb0-9548-46c9-9dca-37d238f7ee19": Phase="Pending", Reason="", readiness=false. Elapsed: 2.119478795s
Dec 31 13:37:52.280: INFO: Pod "downwardapi-volume-b8e6bbb0-9548-46c9-9dca-37d238f7ee19": Phase="Pending", Reason="", readiness=false. Elapsed: 4.132052767s
Dec 31 13:37:54.294: INFO: Pod "downwardapi-volume-b8e6bbb0-9548-46c9-9dca-37d238f7ee19": Phase="Pending", Reason="", readiness=false. Elapsed: 6.146463979s
Dec 31 13:37:56.307: INFO: Pod "downwardapi-volume-b8e6bbb0-9548-46c9-9dca-37d238f7ee19": Phase="Pending", Reason="", readiness=false. Elapsed: 8.159754631s
Dec 31 13:37:58.317: INFO: Pod "downwardapi-volume-b8e6bbb0-9548-46c9-9dca-37d238f7ee19": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.169611499s
STEP: Saw pod success
Dec 31 13:37:58.317: INFO: Pod "downwardapi-volume-b8e6bbb0-9548-46c9-9dca-37d238f7ee19" satisfied condition "success or failure"
Dec 31 13:37:58.323: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-b8e6bbb0-9548-46c9-9dca-37d238f7ee19 container client-container:
STEP: delete the pod
Dec 31 13:37:58.427: INFO: Waiting for pod downwardapi-volume-b8e6bbb0-9548-46c9-9dca-37d238f7ee19 to disappear
Dec 31 13:37:58.504: INFO: Pod downwardapi-volume-b8e6bbb0-9548-46c9-9dca-37d238f7ee19 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 13:37:58.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6147" for this suite.
Dec 31 13:38:04.546: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:38:04.649: INFO: namespace downward-api-6147 deletion completed in 6.136715942s
• [SLOW TEST:17.102 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 13:38:04.650: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685
[It] should create a pod from an image when restart is Never [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 31 13:38:04.711: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-9977'
Dec 31 13:38:07.119: INFO: stderr: ""
Dec 31 13:38:07.119: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690
Dec 31 13:38:07.200: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-9977'
Dec 31 13:38:12.090: INFO: stderr: ""
Dec 31 13:38:12.090: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 13:38:12.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9977" for this suite.
Dec 31 13:38:18.111: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:38:18.229: INFO: namespace kubectl-9977 deletion completed in 6.13549818s
• [SLOW TEST:13.580 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a pod from an image when restart is Never [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 13:38:18.230: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-fdcb692d-dd68-426b-96da-6d3412ce8244
STEP: Creating a pod to test consume secrets
Dec 31 13:38:18.297: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e4263819-e216-4e28-9a0e-3be5c9c2ba26" in namespace "projected-3460" to be "success or failure"
Dec 31 13:38:18.301: INFO: Pod "pod-projected-secrets-e4263819-e216-4e28-9a0e-3be5c9c2ba26": Phase="Pending", Reason="", readiness=false. Elapsed: 3.983725ms
Dec 31 13:38:20.317: INFO: Pod "pod-projected-secrets-e4263819-e216-4e28-9a0e-3be5c9c2ba26": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020153954s
Dec 31 13:38:22.328: INFO: Pod "pod-projected-secrets-e4263819-e216-4e28-9a0e-3be5c9c2ba26": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031258437s
Dec 31 13:38:24.409: INFO: Pod "pod-projected-secrets-e4263819-e216-4e28-9a0e-3be5c9c2ba26": Phase="Pending", Reason="", readiness=false. Elapsed: 6.11228555s
Dec 31 13:38:26.418: INFO: Pod "pod-projected-secrets-e4263819-e216-4e28-9a0e-3be5c9c2ba26": Phase="Pending", Reason="", readiness=false. Elapsed: 8.12094121s
Dec 31 13:38:28.426: INFO: Pod "pod-projected-secrets-e4263819-e216-4e28-9a0e-3be5c9c2ba26": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.128790904s
STEP: Saw pod success
Dec 31 13:38:28.426: INFO: Pod "pod-projected-secrets-e4263819-e216-4e28-9a0e-3be5c9c2ba26" satisfied condition "success or failure"
Dec 31 13:38:28.430: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-e4263819-e216-4e28-9a0e-3be5c9c2ba26 container projected-secret-volume-test:
STEP: delete the pod
Dec 31 13:38:28.515: INFO: Waiting for pod pod-projected-secrets-e4263819-e216-4e28-9a0e-3be5c9c2ba26 to disappear
Dec 31 13:38:28.523: INFO: Pod pod-projected-secrets-e4263819-e216-4e28-9a0e-3be5c9c2ba26 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 13:38:28.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3460" for this suite.
Dec 31 13:38:34.567: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:38:34.695: INFO: namespace projected-3460 deletion completed in 6.165105995s
• [SLOW TEST:16.465 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 13:38:34.695: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should add annotations for pods in rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Dec 31 13:38:34.812: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2921'
Dec 31 13:38:35.359: INFO: stderr: ""
Dec 31 13:38:35.359: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Dec 31 13:38:36.371: INFO: Selector matched 1 pods for map[app:redis]
Dec 31 13:38:36.371: INFO: Found 0 / 1
Dec 31 13:38:37.409: INFO: Selector matched 1 pods for map[app:redis]
Dec 31 13:38:37.409: INFO: Found 0 / 1
Dec 31 13:38:38.378: INFO: Selector matched 1 pods for map[app:redis]
Dec 31 13:38:38.379: INFO: Found 0 / 1
Dec 31 13:38:39.370: INFO: Selector matched 1 pods for map[app:redis]
Dec 31 13:38:39.370: INFO: Found 0 / 1
Dec 31 13:38:40.374: INFO: Selector matched 1 pods for map[app:redis]
Dec 31 13:38:40.374: INFO: Found 0 / 1
Dec 31 13:38:41.373: INFO: Selector matched 1 pods for map[app:redis]
Dec 31 13:38:41.373: INFO: Found 0 / 1
Dec 31 13:38:42.374: INFO: Selector matched 1 pods for map[app:redis]
Dec 31 13:38:42.374: INFO: Found 0 / 1
Dec 31 13:38:43.372: INFO: Selector matched 1 pods for map[app:redis]
Dec 31 13:38:43.372: INFO: Found 0 / 1
Dec 31 13:38:44.402: INFO: Selector matched 1 pods for map[app:redis]
Dec 31 13:38:44.402: INFO: Found 1 / 1
Dec 31 13:38:44.402: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
STEP: patching all pods
Dec 31 13:38:44.408: INFO: Selector matched 1 pods for map[app:redis]
Dec 31 13:38:44.408: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
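The annotation patch this test sends is a strategic-merge patch. Since JSON is a subset of YAML, the same document can be written either way; a sketch of the patch body (the pod name comes from the log):

```yaml
# Strategic-merge patch: adds (or overwrites) the annotation x: "y"
# without touching any other field of the pod.
metadata:
  annotations:
    x: "y"
```

As the log shows, the suite passes the JSON form inline, i.e. kubectl patch pod redis-master-ksnd2 -p '{"metadata":{"annotations":{"x":"y"}}}'; newer kubectl versions can also read the YAML form from a file via --patch-file.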
Dec 31 13:38:44.408: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-ksnd2 --namespace=kubectl-2921 -p {"metadata":{"annotations":{"x":"y"}}}'
Dec 31 13:38:44.637: INFO: stderr: ""
Dec 31 13:38:44.637: INFO: stdout: "pod/redis-master-ksnd2 patched\n"
STEP: checking annotations
Dec 31 13:38:44.656: INFO: Selector matched 1 pods for map[app:redis]
Dec 31 13:38:44.656: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 13:38:44.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2921" for this suite.
Dec 31 13:39:08.703: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:39:08.811: INFO: namespace kubectl-2921 deletion completed in 24.150397662s
• [SLOW TEST:34.117 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should add annotations for pods in rc [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 13:39:08.812: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Dec 31 13:39:08.966: INFO: Waiting up to 5m0s for pod "downward-api-33319734-ae40-4b4c-8e91-f3384198297e" in namespace "downward-api-3031" to be "success or failure"
Dec 31 13:39:09.145: INFO: Pod "downward-api-33319734-ae40-4b4c-8e91-f3384198297e": Phase="Pending", Reason="", readiness=false. Elapsed: 178.473078ms
Dec 31 13:39:11.161: INFO: Pod "downward-api-33319734-ae40-4b4c-8e91-f3384198297e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.194563306s
Dec 31 13:39:13.168: INFO: Pod "downward-api-33319734-ae40-4b4c-8e91-f3384198297e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.201258796s
Dec 31 13:39:15.176: INFO: Pod "downward-api-33319734-ae40-4b4c-8e91-f3384198297e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.209820947s
Dec 31 13:39:17.184: INFO: Pod "downward-api-33319734-ae40-4b4c-8e91-f3384198297e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.217125199s
STEP: Saw pod success
Dec 31 13:39:17.184: INFO: Pod "downward-api-33319734-ae40-4b4c-8e91-f3384198297e" satisfied condition "success or failure"
Dec 31 13:39:17.187: INFO: Trying to get logs from node iruya-node pod downward-api-33319734-ae40-4b4c-8e91-f3384198297e container dapi-container:
STEP: delete the pod
Dec 31 13:39:17.276: INFO: Waiting for pod downward-api-33319734-ae40-4b4c-8e91-f3384198297e to disappear
Dec 31 13:39:17.285: INFO: Pod downward-api-33319734-ae40-4b4c-8e91-f3384198297e no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 13:39:17.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3031" for this suite.
Dec 31 13:39:23.306: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:39:23.440: INFO: namespace downward-api-3031 deletion completed in 6.150620047s
• [SLOW TEST:14.629 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 13:39:23.441: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 31 13:39:23.591: INFO: (0) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 17.331218ms)
Dec 31 13:39:23.599: INFO: (1) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 8.116674ms)
Dec 31 13:39:23.605: INFO: (2) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.63504ms)
Dec 31 13:39:23.610: INFO: (3) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.501336ms)
Dec 31 13:39:23.615: INFO: (4) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.288041ms)
Dec 31 13:39:23.620: INFO: (5) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.235709ms)
Dec 31 13:39:23.625: INFO: (6) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.234884ms)
Dec 31 13:39:23.630: INFO: (7) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.122696ms)
Dec 31 13:39:23.635: INFO: (8) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.27351ms)
Dec 31 13:39:23.640: INFO: (9) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.609631ms)
Dec 31 13:39:23.644: INFO: (10) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.444425ms)
Dec 31 13:39:23.650: INFO: (11) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.197621ms)
Dec 31 13:39:23.670: INFO: (12) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 20.05948ms)
Dec 31 13:39:23.716: INFO: (13) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 45.78218ms)
Dec 31 13:39:23.721: INFO: (14) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.244091ms)
Dec 31 13:39:23.726: INFO: (15) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.869325ms)
Dec 31 13:39:23.731: INFO: (16) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.91662ms)
Dec 31 13:39:23.739: INFO: (17) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.831364ms)
Dec 31 13:39:23.745: INFO: (18) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.247539ms)
Dec 31 13:39:23.754: INFO: (19) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.606847ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 13:39:23.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-858" for this suite.
Dec 31 13:39:29.809: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:39:29.989: INFO: namespace proxy-858 deletion completed in 6.228466043s

• [SLOW TEST:6.548 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 13:39:29.990: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-d57f3027-fbe6-454e-a00a-88081d26fe61
STEP: Creating a pod to test consume secrets
Dec 31 13:39:30.120: INFO: Waiting up to 5m0s for pod "pod-secrets-c7bd4b2e-06fb-4584-be9d-2c19da576a34" in namespace "secrets-3354" to be "success or failure"
Dec 31 13:39:30.135: INFO: Pod "pod-secrets-c7bd4b2e-06fb-4584-be9d-2c19da576a34": Phase="Pending", Reason="", readiness=false. Elapsed: 15.061223ms
Dec 31 13:39:32.148: INFO: Pod "pod-secrets-c7bd4b2e-06fb-4584-be9d-2c19da576a34": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027898523s
Dec 31 13:39:34.164: INFO: Pod "pod-secrets-c7bd4b2e-06fb-4584-be9d-2c19da576a34": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043869494s
Dec 31 13:39:36.174: INFO: Pod "pod-secrets-c7bd4b2e-06fb-4584-be9d-2c19da576a34": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054021292s
Dec 31 13:39:38.230: INFO: Pod "pod-secrets-c7bd4b2e-06fb-4584-be9d-2c19da576a34": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.109979972s
STEP: Saw pod success
Dec 31 13:39:38.230: INFO: Pod "pod-secrets-c7bd4b2e-06fb-4584-be9d-2c19da576a34" satisfied condition "success or failure"
Dec 31 13:39:38.236: INFO: Trying to get logs from node iruya-node pod pod-secrets-c7bd4b2e-06fb-4584-be9d-2c19da576a34 container secret-volume-test: 
STEP: delete the pod
Dec 31 13:39:38.378: INFO: Waiting for pod pod-secrets-c7bd4b2e-06fb-4584-be9d-2c19da576a34 to disappear
Dec 31 13:39:38.391: INFO: Pod pod-secrets-c7bd4b2e-06fb-4584-be9d-2c19da576a34 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 13:39:38.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3354" for this suite.
Dec 31 13:39:44.432: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:39:44.648: INFO: namespace secrets-3354 deletion completed in 6.246308879s

• [SLOW TEST:14.658 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 13:39:44.648: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-3264
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 31 13:39:44.717: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 31 13:40:22.952: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.44.0.1 8081 | grep -v '^\s*$'] Namespace:pod-network-test-3264 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 31 13:40:22.952: INFO: >>> kubeConfig: /root/.kube/config
Dec 31 13:40:24.350: INFO: Found all expected endpoints: [netserver-0]
Dec 31 13:40:24.362: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:pod-network-test-3264 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 31 13:40:24.362: INFO: >>> kubeConfig: /root/.kube/config
Dec 31 13:40:25.967: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 13:40:25.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-3264" for this suite.
Dec 31 13:40:52.008: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:40:52.133: INFO: namespace pod-network-test-3264 deletion completed in 26.153191711s

• [SLOW TEST:67.486 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 13:40:52.134: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting the proxy server
Dec 31 13:40:52.238: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 13:40:52.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6177" for this suite.
Dec 31 13:40:58.393: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:40:58.540: INFO: namespace kubectl-6177 deletion completed in 6.181098837s

• [SLOW TEST:6.407 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 13:40:58.541: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 31 13:40:58.687: INFO: (0) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 17.106379ms)
Dec 31 13:40:58.694: INFO: (1) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.186561ms)
Dec 31 13:40:58.699: INFO: (2) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.024802ms)
Dec 31 13:40:58.703: INFO: (3) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.456928ms)
Dec 31 13:40:58.710: INFO: (4) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.633413ms)
Dec 31 13:40:58.721: INFO: (5) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.825426ms)
Dec 31 13:40:58.728: INFO: (6) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.551046ms)
Dec 31 13:40:58.732: INFO: (7) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.554126ms)
Dec 31 13:40:58.737: INFO: (8) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.737233ms)
Dec 31 13:40:58.741: INFO: (9) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.228319ms)
Dec 31 13:40:58.747: INFO: (10) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.107099ms)
Dec 31 13:40:58.760: INFO: (11) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 13.612989ms)
Dec 31 13:40:58.779: INFO: (12) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 18.571566ms)
Dec 31 13:40:58.791: INFO: (13) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 12.355062ms)
Dec 31 13:40:58.797: INFO: (14) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.699661ms)
Dec 31 13:40:58.802: INFO: (15) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.476423ms)
Dec 31 13:40:58.806: INFO: (16) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.763541ms)
Dec 31 13:40:58.810: INFO: (17) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.304575ms)
Dec 31 13:40:58.814: INFO: (18) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.104948ms)
Dec 31 13:40:58.817: INFO: (19) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 2.718852ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 13:40:58.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-7621" for this suite.
Dec 31 13:41:04.842: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:41:04.931: INFO: namespace proxy-7621 deletion completed in 6.111554332s

• [SLOW TEST:6.390 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 13:41:04.931: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 31 13:41:05.070: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-5576'
Dec 31 13:41:05.203: INFO: stderr: ""
Dec 31 13:41:05.203: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Dec 31 13:41:15.256: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-5576 -o json'
Dec 31 13:41:15.433: INFO: stderr: ""
Dec 31 13:41:15.433: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2019-12-31T13:41:05Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"kubectl-5576\",\n        \"resourceVersion\": \"18772730\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-5576/pods/e2e-test-nginx-pod\",\n        \"uid\": \"05c4f64b-3752-43d8-b0e5-6ac23a40f915\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-d8rrh\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"iruya-node\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-d8rrh\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-d8rrh\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-31T13:41:05Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-31T13:41:13Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-31T13:41:13Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-31T13:41:05Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"docker://67da622d2ff4807c78c2fe2f707c91f485b4c78a72fa4f4aa8a0f1249a39d682\",\n                \"image\": \"nginx:1.14-alpine\",\n                \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2019-12-31T13:41:12Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.3.65\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.44.0.1\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2019-12-31T13:41:05Z\"\n    }\n}\n"
STEP: replace the image in the pod
Dec 31 13:41:15.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-5576'
Dec 31 13:41:16.017: INFO: stderr: ""
Dec 31 13:41:16.017: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726
Dec 31 13:41:16.047: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-5576'
Dec 31 13:41:22.512: INFO: stderr: ""
Dec 31 13:41:22.513: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 13:41:22.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5576" for this suite.
Dec 31 13:41:28.549: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:41:28.671: INFO: namespace kubectl-5576 deletion completed in 6.148421729s

• [SLOW TEST:23.739 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 13:41:28.671: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 31 13:41:28.808: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-3998'
Dec 31 13:41:29.009: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 31 13:41:29.010: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Dec 31 13:41:29.025: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Dec 31 13:41:29.072: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Dec 31 13:41:29.157: INFO: scanned /root for discovery docs: 
Dec 31 13:41:29.157: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-3998'
Dec 31 13:41:52.184: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Dec 31 13:41:52.184: INFO: stdout: "Created e2e-test-nginx-rc-dc8dc736beef570dbb0eff08e1c26669\nScaling up e2e-test-nginx-rc-dc8dc736beef570dbb0eff08e1c26669 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-dc8dc736beef570dbb0eff08e1c26669 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-dc8dc736beef570dbb0eff08e1c26669 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Dec 31 13:41:52.185: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-3998'
Dec 31 13:41:52.295: INFO: stderr: ""
Dec 31 13:41:52.295: INFO: stdout: "e2e-test-nginx-rc-dc8dc736beef570dbb0eff08e1c26669-qxnvm e2e-test-nginx-rc-jvw8w "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Dec 31 13:41:57.296: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-3998'
Dec 31 13:41:57.452: INFO: stderr: ""
Dec 31 13:41:57.452: INFO: stdout: "e2e-test-nginx-rc-dc8dc736beef570dbb0eff08e1c26669-qxnvm "
Dec 31 13:41:57.453: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-dc8dc736beef570dbb0eff08e1c26669-qxnvm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3998'
Dec 31 13:41:57.574: INFO: stderr: ""
Dec 31 13:41:57.574: INFO: stdout: "true"
Dec 31 13:41:57.575: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-dc8dc736beef570dbb0eff08e1c26669-qxnvm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3998'
Dec 31 13:41:57.681: INFO: stderr: ""
Dec 31 13:41:57.681: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Dec 31 13:41:57.681: INFO: e2e-test-nginx-rc-dc8dc736beef570dbb0eff08e1c26669-qxnvm is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522
Dec 31 13:41:57.681: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-3998'
Dec 31 13:41:57.813: INFO: stderr: ""
Dec 31 13:41:57.813: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 13:41:57.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3998" for this suite.
Dec 31 13:42:03.871: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:42:04.037: INFO: namespace kubectl-3998 deletion completed in 6.2093697s

• [SLOW TEST:35.366 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 13:42:04.038: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-20e2c1e7-6879-4200-939b-a1006a5d3154
STEP: Creating a pod to test consume secrets
Dec 31 13:42:04.174: INFO: Waiting up to 5m0s for pod "pod-secrets-64121139-aaa7-4a96-a11a-d262398d37bc" in namespace "secrets-5983" to be "success or failure"
Dec 31 13:42:04.211: INFO: Pod "pod-secrets-64121139-aaa7-4a96-a11a-d262398d37bc": Phase="Pending", Reason="", readiness=false. Elapsed: 36.152581ms
Dec 31 13:42:06.223: INFO: Pod "pod-secrets-64121139-aaa7-4a96-a11a-d262398d37bc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048370088s
Dec 31 13:42:08.230: INFO: Pod "pod-secrets-64121139-aaa7-4a96-a11a-d262398d37bc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055268015s
Dec 31 13:42:10.243: INFO: Pod "pod-secrets-64121139-aaa7-4a96-a11a-d262398d37bc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.068766738s
Dec 31 13:42:12.250: INFO: Pod "pod-secrets-64121139-aaa7-4a96-a11a-d262398d37bc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.075187638s
STEP: Saw pod success
Dec 31 13:42:12.250: INFO: Pod "pod-secrets-64121139-aaa7-4a96-a11a-d262398d37bc" satisfied condition "success or failure"
Dec 31 13:42:12.254: INFO: Trying to get logs from node iruya-node pod pod-secrets-64121139-aaa7-4a96-a11a-d262398d37bc container secret-volume-test: 
STEP: delete the pod
Dec 31 13:42:12.318: INFO: Waiting for pod pod-secrets-64121139-aaa7-4a96-a11a-d262398d37bc to disappear
Dec 31 13:42:12.328: INFO: Pod pod-secrets-64121139-aaa7-4a96-a11a-d262398d37bc no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 13:42:12.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5983" for this suite.
Dec 31 13:42:18.407: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:42:18.562: INFO: namespace secrets-5983 deletion completed in 6.200088988s

• [SLOW TEST:14.524 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
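The secret-volume test above amounts to creating a Secret, mounting it with an explicit `defaultMode`, and having the pod's container report the resulting file mode. A hedged sketch of the kind of manifest involved — the names, image, and mode value are illustrative, not the generated ones from the log:

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: secret-test
data:
  data-1: dmFsdWUtMQ==        # base64 for "value-1"
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    # print the octal mode of the mounted key; the test asserts on this
    command: ["sh", "-c", "stat -c '%a' /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test
      defaultMode: 0400        # the mode under test
EOF
```

The pod runs to `Succeeded`, the framework reads the container log, and the "success or failure" condition in the log above is satisfied when the printed mode matches.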
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 13:42:18.563: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W1231 13:42:28.795770       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 31 13:42:28.795: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 13:42:28.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8557" for this suite.
Dec 31 13:42:34.871: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:42:35.044: INFO: namespace gc-8557 deletion completed in 6.242910687s

• [SLOW TEST:16.481 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 13:42:35.044: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 31 13:42:35.198: INFO: Waiting up to 5m0s for pod "downwardapi-volume-46d31ffe-62d3-4bed-9369-48b7cddb3154" in namespace "projected-4027" to be "success or failure"
Dec 31 13:42:35.210: INFO: Pod "downwardapi-volume-46d31ffe-62d3-4bed-9369-48b7cddb3154": Phase="Pending", Reason="", readiness=false. Elapsed: 11.109269ms
Dec 31 13:42:37.217: INFO: Pod "downwardapi-volume-46d31ffe-62d3-4bed-9369-48b7cddb3154": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018487531s
Dec 31 13:42:39.224: INFO: Pod "downwardapi-volume-46d31ffe-62d3-4bed-9369-48b7cddb3154": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025726363s
Dec 31 13:42:41.235: INFO: Pod "downwardapi-volume-46d31ffe-62d3-4bed-9369-48b7cddb3154": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036216545s
Dec 31 13:42:43.242: INFO: Pod "downwardapi-volume-46d31ffe-62d3-4bed-9369-48b7cddb3154": Phase="Pending", Reason="", readiness=false. Elapsed: 8.043412903s
Dec 31 13:42:45.249: INFO: Pod "downwardapi-volume-46d31ffe-62d3-4bed-9369-48b7cddb3154": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.050584604s
STEP: Saw pod success
Dec 31 13:42:45.249: INFO: Pod "downwardapi-volume-46d31ffe-62d3-4bed-9369-48b7cddb3154" satisfied condition "success or failure"
Dec 31 13:42:45.254: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-46d31ffe-62d3-4bed-9369-48b7cddb3154 container client-container: 
STEP: delete the pod
Dec 31 13:42:45.356: INFO: Waiting for pod downwardapi-volume-46d31ffe-62d3-4bed-9369-48b7cddb3154 to disappear
Dec 31 13:42:45.377: INFO: Pod downwardapi-volume-46d31ffe-62d3-4bed-9369-48b7cddb3154 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 13:42:45.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4027" for this suite.
Dec 31 13:42:51.459: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:42:51.652: INFO: namespace projected-4027 deletion completed in 6.26852048s

• [SLOW TEST:16.608 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
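The projected downward API test above mounts a `projected` volume that exposes the pod's own name as a file, then checks the container can read it back. A sketch of such a pod — names and image are illustrative; the container name `client-container` matches the one in the log:

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-test
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    # print the file populated from metadata.name
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name     # "podname only", per the test title
EOF
```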
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 13:42:51.653: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 31 13:42:51.725: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7604'
Dec 31 13:42:52.087: INFO: stderr: ""
Dec 31 13:42:52.087: INFO: stdout: "replicationcontroller/redis-master created\n"
Dec 31 13:42:52.087: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7604'
Dec 31 13:42:52.819: INFO: stderr: ""
Dec 31 13:42:52.819: INFO: stdout: "service/redis-master created\n"
STEP: Waiting for Redis master to start.
Dec 31 13:42:53.832: INFO: Selector matched 1 pods for map[app:redis]
Dec 31 13:42:53.832: INFO: Found 0 / 1
Dec 31 13:42:54.841: INFO: Selector matched 1 pods for map[app:redis]
Dec 31 13:42:54.842: INFO: Found 0 / 1
Dec 31 13:42:55.836: INFO: Selector matched 1 pods for map[app:redis]
Dec 31 13:42:55.836: INFO: Found 0 / 1
Dec 31 13:42:56.833: INFO: Selector matched 1 pods for map[app:redis]
Dec 31 13:42:56.834: INFO: Found 0 / 1
Dec 31 13:42:57.836: INFO: Selector matched 1 pods for map[app:redis]
Dec 31 13:42:57.836: INFO: Found 0 / 1
Dec 31 13:42:58.827: INFO: Selector matched 1 pods for map[app:redis]
Dec 31 13:42:58.827: INFO: Found 0 / 1
Dec 31 13:42:59.835: INFO: Selector matched 1 pods for map[app:redis]
Dec 31 13:42:59.835: INFO: Found 1 / 1
Dec 31 13:42:59.835: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Dec 31 13:42:59.842: INFO: Selector matched 1 pods for map[app:redis]
Dec 31 13:42:59.842: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Dec 31 13:42:59.842: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-sj9pj --namespace=kubectl-7604'
Dec 31 13:43:00.045: INFO: stderr: ""
Dec 31 13:43:00.045: INFO: stdout: "Name:           redis-master-sj9pj\nNamespace:      kubectl-7604\nPriority:       0\nNode:           iruya-node/10.96.3.65\nStart Time:     Tue, 31 Dec 2019 13:42:52 +0000\nLabels:         app=redis\n                role=master\nAnnotations:    \nStatus:         Running\nIP:             10.44.0.1\nControlled By:  ReplicationController/redis-master\nContainers:\n  redis-master:\n    Container ID:   docker://439f2a54bddca8fe35fb693b669978c14d2e9949333e919ccdbb8bcb9cd8c827\n    Image:          gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Image ID:       docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Tue, 31 Dec 2019 13:42:58 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-q4whn (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-q4whn:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-q4whn\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From                 Message\n  ----    ------     ----  ----                 -------\n  Normal  Scheduled  8s    default-scheduler    Successfully assigned kubectl-7604/redis-master-sj9pj to iruya-node\n  Normal  Pulled     4s    kubelet, iruya-node  Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n  Normal  Created    2s    kubelet, iruya-node  Created container redis-master\n  Normal  Started    2s    kubelet, iruya-node  Started container redis-master\n"
Dec 31 13:43:00.046: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-7604'
Dec 31 13:43:00.225: INFO: stderr: ""
Dec 31 13:43:00.225: INFO: stdout: "Name:         redis-master\nNamespace:    kubectl-7604\nSelector:     app=redis,role=master\nLabels:       app=redis\n              role=master\nAnnotations:  \nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=redis\n           role=master\n  Containers:\n   redis-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  \n    Mounts:       \n  Volumes:        \nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  8s    replication-controller  Created pod: redis-master-sj9pj\n"
Dec 31 13:43:00.225: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-7604'
Dec 31 13:43:00.343: INFO: stderr: ""
Dec 31 13:43:00.343: INFO: stdout: "Name:              redis-master\nNamespace:         kubectl-7604\nLabels:            app=redis\n                   role=master\nAnnotations:       \nSelector:          app=redis,role=master\nType:              ClusterIP\nIP:                10.104.104.218\nPort:                6379/TCP\nTargetPort:        redis-server/TCP\nEndpoints:         10.44.0.1:6379\nSession Affinity:  None\nEvents:            \n"
Dec 31 13:43:00.348: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-node'
Dec 31 13:43:00.565: INFO: stderr: ""
Dec 31 13:43:00.565: INFO: stdout: "Name:               iruya-node\nRoles:              \nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=iruya-node\n                    kubernetes.io/os=linux\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sun, 04 Aug 2019 09:01:39 +0000\nTaints:             \nUnschedulable:      false\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Sat, 12 Oct 2019 11:56:49 +0000   Sat, 12 Oct 2019 11:56:49 +0000   WeaveIsUp                    Weave pod has set this\n  MemoryPressure       False   Tue, 31 Dec 2019 13:42:02 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Tue, 31 Dec 2019 13:42:02 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Tue, 31 Dec 2019 13:42:02 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Tue, 31 Dec 2019 13:42:02 +0000   Sun, 04 Aug 2019 09:02:19 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled\nAddresses:\n  InternalIP:  10.96.3.65\n  Hostname:    iruya-node\nCapacity:\n cpu:                4\n ephemeral-storage:  20145724Ki\n hugepages-2Mi:      0\n memory:             4039076Ki\n pods:               110\nAllocatable:\n cpu:                4\n ephemeral-storage:  18566299208\n hugepages-2Mi:      0\n memory:             3936676Ki\n pods:               110\nSystem Info:\n Machine ID:                 f573dcf04d6f4a87856a35d266a2fa7a\n System UUID:                F573DCF0-4D6F-4A87-856A-35D266A2FA7A\n Boot ID:                    8baf4beb-8391-43e6-b17b-b1e184b5370a\n Kernel Version:             4.15.0-52-generic\n OS Image:                   Ubuntu 18.04.2 LTS\n Operating System:           linux\n Architecture:               amd64\n Container Runtime Version:  docker://18.9.7\n Kubelet Version:            v1.15.1\n Kube-Proxy Version:         v1.15.1\nPodCIDR:                     10.96.1.0/24\nNon-terminated Pods:         (3 in total)\n  Namespace                  Name                  CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                  ----                  ------------  ----------  ---------------  -------------  ---\n  kube-system                kube-proxy-976zl      0 (0%)        0 (0%)      0 (0%)           0 (0%)         149d\n  kube-system                weave-net-rlp57       20m (0%)      0 (0%)      0 (0%)           0 (0%)         80d\n  kubectl-7604               redis-master-sj9pj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests  Limits\n  --------           --------  ------\n  cpu                20m (0%)  0 (0%)\n  memory             0 (0%)    0 (0%)\n  ephemeral-storage  0 (0%)    0 (0%)\nEvents:              \n"
Dec 31 13:43:00.567: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-7604'
Dec 31 13:43:00.675: INFO: stderr: ""
Dec 31 13:43:00.675: INFO: stdout: "Name:         kubectl-7604\nLabels:       e2e-framework=kubectl\n              e2e-run=1896ca61-1857-4545-b9a3-6049b0001f72\nAnnotations:  \nStatus:       Active\n\nNo resource quota.\n\nNo resource limits.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 13:43:00.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7604" for this suite.
Dec 31 13:43:22.713: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:43:22.878: INFO: namespace kubectl-7604 deletion completed in 22.195891611s

• [SLOW TEST:31.225 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
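The manifests the test pipes into `kubectl create -f -` above are not shown, but the `describe` output lets them be reconstructed fairly closely: the image, labels, selector, and named port are taken straight from the log, while the overall shape is an assumption based on the standard guestbook example:

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
spec:
  replicas: 1
  selector:
    app: redis
    role: master
  template:
    metadata:
      labels:
        app: redis
        role: master
    spec:
      containers:
      - name: redis-master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        ports:
        - name: redis-server        # matches "TargetPort: redis-server/TCP" above
          containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
spec:
  selector:
    app: redis
    role: master
  ports:
  - port: 6379
    targetPort: redis-server
EOF
```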
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 13:43:22.878: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Dec 31 13:43:23.013: INFO: Waiting up to 5m0s for pod "pod-6afac9ae-fe3e-4986-8e89-c3b2914b776e" in namespace "emptydir-5339" to be "success or failure"
Dec 31 13:43:23.033: INFO: Pod "pod-6afac9ae-fe3e-4986-8e89-c3b2914b776e": Phase="Pending", Reason="", readiness=false. Elapsed: 20.378661ms
Dec 31 13:43:25.052: INFO: Pod "pod-6afac9ae-fe3e-4986-8e89-c3b2914b776e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038663391s
Dec 31 13:43:27.106: INFO: Pod "pod-6afac9ae-fe3e-4986-8e89-c3b2914b776e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.093323554s
Dec 31 13:43:29.118: INFO: Pod "pod-6afac9ae-fe3e-4986-8e89-c3b2914b776e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.104803377s
Dec 31 13:43:31.128: INFO: Pod "pod-6afac9ae-fe3e-4986-8e89-c3b2914b776e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.115433833s
STEP: Saw pod success
Dec 31 13:43:31.129: INFO: Pod "pod-6afac9ae-fe3e-4986-8e89-c3b2914b776e" satisfied condition "success or failure"
Dec 31 13:43:31.132: INFO: Trying to get logs from node iruya-node pod pod-6afac9ae-fe3e-4986-8e89-c3b2914b776e container test-container: 
STEP: delete the pod
Dec 31 13:43:31.227: INFO: Waiting for pod pod-6afac9ae-fe3e-4986-8e89-c3b2914b776e to disappear
Dec 31 13:43:31.339: INFO: Pod pod-6afac9ae-fe3e-4986-8e89-c3b2914b776e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 13:43:31.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5339" for this suite.
Dec 31 13:43:37.375: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:43:37.556: INFO: namespace emptydir-5339 deletion completed in 6.208512168s

• [SLOW TEST:14.678 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
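The `(root,0644,tmpfs)` triple in the test name above encodes: run as root, expect file mode 0644, back the `emptyDir` with tmpfs. A sketch of such a pod — names and image are illustrative, the container name `test-container` matches the log:

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-test
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # write a file as root with mode 0644, then print the mode back
    command: ["sh", "-c", "echo hi > /test-volume/f && chmod 0644 /test-volume/f && stat -c '%a' /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory     # tmpfs-backed emptyDir, the "(tmpfs)" in the test name
EOF
```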
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 13:43:37.557: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Dec 31 13:43:37.624: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Registering the sample API server.
Dec 31 13:43:38.816: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Dec 31 13:43:41.097: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713396618, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713396618, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713396619, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713396618, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 31 13:43:43.113: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713396618, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713396618, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713396619, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713396618, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 31 13:43:45.107: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713396618, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713396618, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713396619, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713396618, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 31 13:43:47.111: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713396618, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713396618, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713396619, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713396618, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 31 13:43:49.104: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713396618, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713396618, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713396619, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713396618, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 31 13:43:55.391: INFO: Waited 4.266635901s for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 13:43:55.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-92" for this suite.
Dec 31 13:44:01.915: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:44:02.043: INFO: namespace aggregator-92 deletion completed in 6.232617403s

• [SLOW TEST:24.486 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
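"Registering the sample API server" above means deploying it behind a Service and then creating an `APIService` object that tells the kube-aggregator to route one group/version to that Service; the deployment-status dumps are the framework waiting for that backend to become available. A sketch of the registration object — the `wardle.example.com` group and the Service name are illustrative:

```shell
kubectl apply -f - <<'EOF'
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha1.wardle.example.com
spec:
  group: wardle.example.com
  version: v1alpha1
  groupPriorityMinimum: 2000
  versionPriority: 200
  service:                      # the Service fronting sample-apiserver-deployment
    name: sample-api
    namespace: default
  insecureSkipTLSVerify: true   # a real test would supply caBundle instead
EOF
```

Once the aggregator marks the APIService `Available`, requests to `/apis/wardle.example.com/v1alpha1/...` on the main apiserver are proxied to the sample server — which is what "ready to handle requests" in the log confirms.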
SSSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 13:44:02.043: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating replication controller svc-latency-rc in namespace svc-latency-8833
I1231 13:44:02.105077       8 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-8833, replica count: 1
I1231 13:44:03.155988       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1231 13:44:04.156295       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1231 13:44:05.156664       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1231 13:44:06.157199       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1231 13:44:07.157574       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1231 13:44:08.157928       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1231 13:44:09.158442       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1231 13:44:10.158834       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Dec 31 13:44:10.321: INFO: Created: latency-svc-p7bvz
Dec 31 13:44:10.336: INFO: Got endpoints: latency-svc-p7bvz [77.329641ms]
Dec 31 13:44:10.462: INFO: Created: latency-svc-25cql
Dec 31 13:44:10.462: INFO: Got endpoints: latency-svc-25cql [125.016505ms]
Dec 31 13:44:10.572: INFO: Created: latency-svc-8ccmn
Dec 31 13:44:10.604: INFO: Got endpoints: latency-svc-8ccmn [265.52041ms]
Dec 31 13:44:10.776: INFO: Created: latency-svc-vgj4q
Dec 31 13:44:10.783: INFO: Got endpoints: latency-svc-vgj4q [444.826281ms]
Dec 31 13:44:10.825: INFO: Created: latency-svc-pvzm5
Dec 31 13:44:10.834: INFO: Got endpoints: latency-svc-pvzm5 [496.049997ms]
Dec 31 13:44:10.927: INFO: Created: latency-svc-kntgj
Dec 31 13:44:10.933: INFO: Got endpoints: latency-svc-kntgj [595.71259ms]
Dec 31 13:44:11.092: INFO: Created: latency-svc-4j2sb
Dec 31 13:44:11.102: INFO: Got endpoints: latency-svc-4j2sb [764.320545ms]
Dec 31 13:44:11.152: INFO: Created: latency-svc-czq27
Dec 31 13:44:11.154: INFO: Got endpoints: latency-svc-czq27 [815.908404ms]
Dec 31 13:44:11.305: INFO: Created: latency-svc-rbbw8
Dec 31 13:44:11.309: INFO: Got endpoints: latency-svc-rbbw8 [970.591552ms]
Dec 31 13:44:11.372: INFO: Created: latency-svc-nn6ss
Dec 31 13:44:11.473: INFO: Got endpoints: latency-svc-nn6ss [1.134227981s]
Dec 31 13:44:11.494: INFO: Created: latency-svc-m2bzf
Dec 31 13:44:11.537: INFO: Got endpoints: latency-svc-m2bzf [1.198057946s]
Dec 31 13:44:11.542: INFO: Created: latency-svc-k4v27
Dec 31 13:44:11.548: INFO: Got endpoints: latency-svc-k4v27 [1.209788655s]
Dec 31 13:44:11.648: INFO: Created: latency-svc-khnn9
Dec 31 13:44:11.663: INFO: Got endpoints: latency-svc-khnn9 [1.324380754s]
Dec 31 13:44:11.710: INFO: Created: latency-svc-4sghn
Dec 31 13:44:11.713: INFO: Got endpoints: latency-svc-4sghn [1.375920265s]
Dec 31 13:44:11.938: INFO: Created: latency-svc-xxjjn
Dec 31 13:44:11.938: INFO: Got endpoints: latency-svc-xxjjn [1.600784571s]
Dec 31 13:44:12.049: INFO: Created: latency-svc-pd4f5
Dec 31 13:44:12.097: INFO: Got endpoints: latency-svc-pd4f5 [1.757958598s]
Dec 31 13:44:12.101: INFO: Created: latency-svc-khmbr
Dec 31 13:44:12.126: INFO: Got endpoints: latency-svc-khmbr [1.663233613s]
Dec 31 13:44:12.219: INFO: Created: latency-svc-hzm8k
Dec 31 13:44:12.223: INFO: Got endpoints: latency-svc-hzm8k [1.618036654s]
Dec 31 13:44:12.304: INFO: Created: latency-svc-xtrcf
Dec 31 13:44:12.308: INFO: Got endpoints: latency-svc-xtrcf [1.524010014s]
Dec 31 13:44:12.391: INFO: Created: latency-svc-vp4r7
Dec 31 13:44:12.396: INFO: Got endpoints: latency-svc-vp4r7 [1.56163335s]
Dec 31 13:44:12.468: INFO: Created: latency-svc-4xw5l
Dec 31 13:44:12.574: INFO: Got endpoints: latency-svc-4xw5l [1.640755085s]
Dec 31 13:44:12.575: INFO: Created: latency-svc-6kxbw
Dec 31 13:44:12.585: INFO: Got endpoints: latency-svc-6kxbw [1.483355218s]
Dec 31 13:44:12.776: INFO: Created: latency-svc-78v7j
Dec 31 13:44:12.778: INFO: Got endpoints: latency-svc-78v7j [1.623897782s]
Dec 31 13:44:12.929: INFO: Created: latency-svc-r4fqz
Dec 31 13:44:12.933: INFO: Got endpoints: latency-svc-r4fqz [1.623437778s]
Dec 31 13:44:13.078: INFO: Created: latency-svc-4sd8z
Dec 31 13:44:13.087: INFO: Got endpoints: latency-svc-4sd8z [1.613849647s]
Dec 31 13:44:13.152: INFO: Created: latency-svc-bbjhx
Dec 31 13:44:13.275: INFO: Got endpoints: latency-svc-bbjhx [1.738143542s]
Dec 31 13:44:13.315: INFO: Created: latency-svc-gx77k
Dec 31 13:44:13.329: INFO: Got endpoints: latency-svc-gx77k [1.780259128s]
Dec 31 13:44:13.441: INFO: Created: latency-svc-kbglp
Dec 31 13:44:13.470: INFO: Got endpoints: latency-svc-kbglp [1.807026002s]
Dec 31 13:44:13.475: INFO: Created: latency-svc-j8vlr
Dec 31 13:44:13.495: INFO: Got endpoints: latency-svc-j8vlr [1.781550396s]
Dec 31 13:44:13.636: INFO: Created: latency-svc-pzcd2
Dec 31 13:44:13.650: INFO: Got endpoints: latency-svc-pzcd2 [1.711456114s]
Dec 31 13:44:13.795: INFO: Created: latency-svc-9xdtm
Dec 31 13:44:13.824: INFO: Got endpoints: latency-svc-9xdtm [1.727007299s]
Dec 31 13:44:14.058: INFO: Created: latency-svc-k4tm8
Dec 31 13:44:14.067: INFO: Got endpoints: latency-svc-k4tm8 [1.94064678s]
Dec 31 13:44:14.150: INFO: Created: latency-svc-ksmmj
Dec 31 13:44:14.150: INFO: Got endpoints: latency-svc-ksmmj [1.927625481s]
Dec 31 13:44:14.258: INFO: Created: latency-svc-nnbnp
Dec 31 13:44:14.267: INFO: Got endpoints: latency-svc-nnbnp [1.959659599s]
Dec 31 13:44:14.301: INFO: Created: latency-svc-4z7m8
Dec 31 13:44:14.313: INFO: Got endpoints: latency-svc-4z7m8 [1.91647768s]
Dec 31 13:44:14.345: INFO: Created: latency-svc-4q2fh
Dec 31 13:44:14.356: INFO: Got endpoints: latency-svc-4q2fh [1.78067907s]
Dec 31 13:44:14.521: INFO: Created: latency-svc-ntkv5
Dec 31 13:44:14.536: INFO: Got endpoints: latency-svc-ntkv5 [1.95031338s]
Dec 31 13:44:14.595: INFO: Created: latency-svc-vddc7
Dec 31 13:44:14.656: INFO: Got endpoints: latency-svc-vddc7 [1.877584205s]
Dec 31 13:44:14.706: INFO: Created: latency-svc-c584b
Dec 31 13:44:14.738: INFO: Got endpoints: latency-svc-c584b [1.805262912s]
Dec 31 13:44:14.903: INFO: Created: latency-svc-td9zx
Dec 31 13:44:14.903: INFO: Got endpoints: latency-svc-td9zx [1.816010684s]
Dec 31 13:44:15.067: INFO: Created: latency-svc-52zs4
Dec 31 13:44:15.082: INFO: Got endpoints: latency-svc-52zs4 [343.871287ms]
Dec 31 13:44:15.137: INFO: Created: latency-svc-2w8xq
Dec 31 13:44:15.146: INFO: Got endpoints: latency-svc-2w8xq [1.870953523s]
Dec 31 13:44:15.264: INFO: Created: latency-svc-tztcv
Dec 31 13:44:15.264: INFO: Got endpoints: latency-svc-tztcv [1.935402339s]
Dec 31 13:44:15.391: INFO: Created: latency-svc-f6ldq
Dec 31 13:44:15.397: INFO: Got endpoints: latency-svc-f6ldq [1.926804729s]
Dec 31 13:44:15.470: INFO: Created: latency-svc-t2n96
Dec 31 13:44:15.519: INFO: Got endpoints: latency-svc-t2n96 [2.023965295s]
Dec 31 13:44:15.585: INFO: Created: latency-svc-2fcpn
Dec 31 13:44:15.587: INFO: Got endpoints: latency-svc-2fcpn [1.936709074s]
Dec 31 13:44:15.716: INFO: Created: latency-svc-ndkdn
Dec 31 13:44:15.726: INFO: Got endpoints: latency-svc-ndkdn [1.902195625s]
Dec 31 13:44:15.878: INFO: Created: latency-svc-mtfcz
Dec 31 13:44:15.888: INFO: Got endpoints: latency-svc-mtfcz [1.821402427s]
Dec 31 13:44:16.063: INFO: Created: latency-svc-km2cq
Dec 31 13:44:16.066: INFO: Got endpoints: latency-svc-km2cq [1.915332382s]
Dec 31 13:44:16.243: INFO: Created: latency-svc-89dbq
Dec 31 13:44:16.265: INFO: Got endpoints: latency-svc-89dbq [1.997505713s]
Dec 31 13:44:16.333: INFO: Created: latency-svc-56tpm
Dec 31 13:44:16.412: INFO: Got endpoints: latency-svc-56tpm [2.098932638s]
Dec 31 13:44:16.483: INFO: Created: latency-svc-pwn65
Dec 31 13:44:16.488: INFO: Got endpoints: latency-svc-pwn65 [2.131983627s]
Dec 31 13:44:16.667: INFO: Created: latency-svc-mmcb4
Dec 31 13:44:16.667: INFO: Got endpoints: latency-svc-mmcb4 [2.130491708s]
Dec 31 13:44:16.845: INFO: Created: latency-svc-hxjhr
Dec 31 13:44:16.856: INFO: Got endpoints: latency-svc-hxjhr [2.199752381s]
Dec 31 13:44:16.999: INFO: Created: latency-svc-8t9pm
Dec 31 13:44:17.012: INFO: Got endpoints: latency-svc-8t9pm [2.108832562s]
Dec 31 13:44:17.089: INFO: Created: latency-svc-mcnzr
Dec 31 13:44:17.135: INFO: Got endpoints: latency-svc-mcnzr [2.052655136s]
Dec 31 13:44:17.164: INFO: Created: latency-svc-knkqm
Dec 31 13:44:17.171: INFO: Got endpoints: latency-svc-knkqm [2.024523804s]
Dec 31 13:44:17.225: INFO: Created: latency-svc-5ck5j
Dec 31 13:44:17.227: INFO: Got endpoints: latency-svc-5ck5j [1.962831693s]
Dec 31 13:44:17.344: INFO: Created: latency-svc-8kkzv
Dec 31 13:44:17.375: INFO: Got endpoints: latency-svc-8kkzv [1.977519313s]
Dec 31 13:44:17.412: INFO: Created: latency-svc-tlt8t
Dec 31 13:44:17.453: INFO: Got endpoints: latency-svc-tlt8t [1.933442665s]
Dec 31 13:44:17.532: INFO: Created: latency-svc-5sh9m
Dec 31 13:44:17.657: INFO: Got endpoints: latency-svc-5sh9m [2.070048474s]
Dec 31 13:44:17.670: INFO: Created: latency-svc-7df9n
Dec 31 13:44:17.697: INFO: Got endpoints: latency-svc-7df9n [1.970574731s]
Dec 31 13:44:17.717: INFO: Created: latency-svc-d6ccb
Dec 31 13:44:17.721: INFO: Got endpoints: latency-svc-d6ccb [1.832173447s]
Dec 31 13:44:17.825: INFO: Created: latency-svc-kncp4
Dec 31 13:44:17.849: INFO: Got endpoints: latency-svc-kncp4 [1.78292205s]
Dec 31 13:44:18.016: INFO: Created: latency-svc-grkcg
Dec 31 13:44:18.035: INFO: Got endpoints: latency-svc-grkcg [1.769533854s]
Dec 31 13:44:18.167: INFO: Created: latency-svc-wz47c
Dec 31 13:44:18.173: INFO: Got endpoints: latency-svc-wz47c [1.761046393s]
Dec 31 13:44:18.317: INFO: Created: latency-svc-j75s8
Dec 31 13:44:18.319: INFO: Got endpoints: latency-svc-j75s8 [1.831023216s]
Dec 31 13:44:18.373: INFO: Created: latency-svc-h8krz
Dec 31 13:44:18.384: INFO: Got endpoints: latency-svc-h8krz [1.716729185s]
Dec 31 13:44:18.567: INFO: Created: latency-svc-rxp9l
Dec 31 13:44:18.616: INFO: Got endpoints: latency-svc-rxp9l [1.759398106s]
Dec 31 13:44:18.624: INFO: Created: latency-svc-wnjm8
Dec 31 13:44:18.631: INFO: Got endpoints: latency-svc-wnjm8 [1.618716849s]
Dec 31 13:44:18.719: INFO: Created: latency-svc-4gwlg
Dec 31 13:44:18.768: INFO: Got endpoints: latency-svc-4gwlg [1.632761633s]
Dec 31 13:44:18.884: INFO: Created: latency-svc-5tcrp
Dec 31 13:44:18.922: INFO: Got endpoints: latency-svc-5tcrp [1.751250809s]
Dec 31 13:44:19.058: INFO: Created: latency-svc-xzqjq
Dec 31 13:44:19.065: INFO: Got endpoints: latency-svc-xzqjq [1.837434057s]
Dec 31 13:44:19.205: INFO: Created: latency-svc-ssgvn
Dec 31 13:44:19.263: INFO: Got endpoints: latency-svc-ssgvn [1.888046714s]
Dec 31 13:44:19.282: INFO: Created: latency-svc-qw2gm
Dec 31 13:44:19.293: INFO: Got endpoints: latency-svc-qw2gm [1.839758227s]
Dec 31 13:44:19.393: INFO: Created: latency-svc-5pv62
Dec 31 13:44:19.398: INFO: Got endpoints: latency-svc-5pv62 [1.741487331s]
Dec 31 13:44:19.449: INFO: Created: latency-svc-r4gdg
Dec 31 13:44:19.504: INFO: Got endpoints: latency-svc-r4gdg [1.806979755s]
Dec 31 13:44:19.548: INFO: Created: latency-svc-bkptq
Dec 31 13:44:19.554: INFO: Got endpoints: latency-svc-bkptq [1.833320391s]
Dec 31 13:44:19.670: INFO: Created: latency-svc-2dh9n
Dec 31 13:44:19.674: INFO: Got endpoints: latency-svc-2dh9n [1.824708909s]
Dec 31 13:44:19.751: INFO: Created: latency-svc-znkmc
Dec 31 13:44:19.823: INFO: Created: latency-svc-5jp4x
Dec 31 13:44:19.839: INFO: Got endpoints: latency-svc-znkmc [1.803991925s]
Dec 31 13:44:19.871: INFO: Got endpoints: latency-svc-5jp4x [1.697635284s]
Dec 31 13:44:19.999: INFO: Created: latency-svc-h55t4
Dec 31 13:44:20.015: INFO: Got endpoints: latency-svc-h55t4 [1.695417411s]
Dec 31 13:44:20.184: INFO: Created: latency-svc-xvvtw
Dec 31 13:44:20.195: INFO: Got endpoints: latency-svc-xvvtw [1.811397344s]
Dec 31 13:44:20.241: INFO: Created: latency-svc-h26cw
Dec 31 13:44:20.363: INFO: Got endpoints: latency-svc-h26cw [1.74697536s]
Dec 31 13:44:20.365: INFO: Created: latency-svc-vnpd5
Dec 31 13:44:20.383: INFO: Got endpoints: latency-svc-vnpd5 [1.751693077s]
Dec 31 13:44:20.435: INFO: Created: latency-svc-8pxfq
Dec 31 13:44:20.457: INFO: Got endpoints: latency-svc-8pxfq [1.688825008s]
Dec 31 13:44:20.615: INFO: Created: latency-svc-gx9xt
Dec 31 13:44:20.666: INFO: Got endpoints: latency-svc-gx9xt [1.743474318s]
Dec 31 13:44:20.674: INFO: Created: latency-svc-8l869
Dec 31 13:44:20.799: INFO: Got endpoints: latency-svc-8l869 [1.734201385s]
Dec 31 13:44:20.815: INFO: Created: latency-svc-6mxkp
Dec 31 13:44:20.825: INFO: Got endpoints: latency-svc-6mxkp [1.56149285s]
Dec 31 13:44:20.908: INFO: Created: latency-svc-jqkdd
Dec 31 13:44:21.026: INFO: Got endpoints: latency-svc-jqkdd [1.733308858s]
Dec 31 13:44:21.045: INFO: Created: latency-svc-8p6lt
Dec 31 13:44:21.066: INFO: Got endpoints: latency-svc-8p6lt [1.66777607s]
Dec 31 13:44:21.085: INFO: Created: latency-svc-n7jgf
Dec 31 13:44:21.255: INFO: Created: latency-svc-gx8m9
Dec 31 13:44:21.262: INFO: Got endpoints: latency-svc-n7jgf [1.757296948s]
Dec 31 13:44:21.335: INFO: Got endpoints: latency-svc-gx8m9 [1.780564523s]
Dec 31 13:44:21.339: INFO: Created: latency-svc-2jvf8
Dec 31 13:44:21.506: INFO: Got endpoints: latency-svc-2jvf8 [1.831522332s]
Dec 31 13:44:21.517: INFO: Created: latency-svc-g5mh5
Dec 31 13:44:21.532: INFO: Got endpoints: latency-svc-g5mh5 [1.692159483s]
Dec 31 13:44:21.578: INFO: Created: latency-svc-drlkn
Dec 31 13:44:21.585: INFO: Got endpoints: latency-svc-drlkn [1.713683602s]
Dec 31 13:44:21.800: INFO: Created: latency-svc-wpks4
Dec 31 13:44:21.842: INFO: Got endpoints: latency-svc-wpks4 [1.827741734s]
Dec 31 13:44:21.848: INFO: Created: latency-svc-hxmqb
Dec 31 13:44:21.875: INFO: Got endpoints: latency-svc-hxmqb [1.679868074s]
Dec 31 13:44:22.092: INFO: Created: latency-svc-lhnkd
Dec 31 13:44:22.104: INFO: Got endpoints: latency-svc-lhnkd [1.74010189s]
Dec 31 13:44:22.288: INFO: Created: latency-svc-p68f9
Dec 31 13:44:22.323: INFO: Got endpoints: latency-svc-p68f9 [1.939945799s]
Dec 31 13:44:22.331: INFO: Created: latency-svc-69r5p
Dec 31 13:44:22.335: INFO: Got endpoints: latency-svc-69r5p [1.877188344s]
Dec 31 13:44:22.429: INFO: Created: latency-svc-dd42f
Dec 31 13:44:22.455: INFO: Got endpoints: latency-svc-dd42f [1.787831898s]
Dec 31 13:44:22.486: INFO: Created: latency-svc-r2fj7
Dec 31 13:44:22.492: INFO: Got endpoints: latency-svc-r2fj7 [1.692086247s]
Dec 31 13:44:22.525: INFO: Created: latency-svc-lzxlq
Dec 31 13:44:22.646: INFO: Got endpoints: latency-svc-lzxlq [1.820702445s]
Dec 31 13:44:22.660: INFO: Created: latency-svc-vdghm
Dec 31 13:44:22.687: INFO: Got endpoints: latency-svc-vdghm [1.660772192s]
Dec 31 13:44:22.717: INFO: Created: latency-svc-4jslx
Dec 31 13:44:22.727: INFO: Got endpoints: latency-svc-4jslx [1.660796847s]
Dec 31 13:44:22.852: INFO: Created: latency-svc-dpzj6
Dec 31 13:44:22.853: INFO: Got endpoints: latency-svc-dpzj6 [1.59136447s]
Dec 31 13:44:22.913: INFO: Created: latency-svc-hd5kg
Dec 31 13:44:22.921: INFO: Got endpoints: latency-svc-hd5kg [1.586031807s]
Dec 31 13:44:23.086: INFO: Created: latency-svc-hsvfr
Dec 31 13:44:23.101: INFO: Got endpoints: latency-svc-hsvfr [1.594955326s]
Dec 31 13:44:23.183: INFO: Created: latency-svc-f4mhr
Dec 31 13:44:23.389: INFO: Got endpoints: latency-svc-f4mhr [1.856904753s]
Dec 31 13:44:23.406: INFO: Created: latency-svc-rs5vh
Dec 31 13:44:23.455: INFO: Got endpoints: latency-svc-rs5vh [1.870050419s]
Dec 31 13:44:23.458: INFO: Created: latency-svc-k5b2d
Dec 31 13:44:23.685: INFO: Got endpoints: latency-svc-k5b2d [1.842232635s]
Dec 31 13:44:23.741: INFO: Created: latency-svc-mnll9
Dec 31 13:44:23.741: INFO: Got endpoints: latency-svc-mnll9 [1.865437908s]
Dec 31 13:44:24.016: INFO: Created: latency-svc-j95rh
Dec 31 13:44:24.229: INFO: Got endpoints: latency-svc-j95rh [2.125009213s]
Dec 31 13:44:24.256: INFO: Created: latency-svc-dpq2g
Dec 31 13:44:24.273: INFO: Got endpoints: latency-svc-dpq2g [1.94996932s]
Dec 31 13:44:24.398: INFO: Created: latency-svc-7mxww
Dec 31 13:44:24.425: INFO: Created: latency-svc-ndmck
Dec 31 13:44:24.426: INFO: Got endpoints: latency-svc-7mxww [2.090766472s]
Dec 31 13:44:24.440: INFO: Got endpoints: latency-svc-ndmck [1.984774051s]
Dec 31 13:44:24.618: INFO: Created: latency-svc-s2fh2
Dec 31 13:44:24.660: INFO: Got endpoints: latency-svc-s2fh2 [2.167657026s]
Dec 31 13:44:24.672: INFO: Created: latency-svc-5v2hm
Dec 31 13:44:24.691: INFO: Got endpoints: latency-svc-5v2hm [2.045485228s]
Dec 31 13:44:24.855: INFO: Created: latency-svc-bjdn8
Dec 31 13:44:24.919: INFO: Got endpoints: latency-svc-bjdn8 [2.231712854s]
Dec 31 13:44:24.931: INFO: Created: latency-svc-f9vgh
Dec 31 13:44:24.931: INFO: Got endpoints: latency-svc-f9vgh [2.204022933s]
Dec 31 13:44:25.138: INFO: Created: latency-svc-bxlcl
Dec 31 13:44:25.151: INFO: Got endpoints: latency-svc-bxlcl [2.297514428s]
Dec 31 13:44:25.201: INFO: Created: latency-svc-bvxwd
Dec 31 13:44:25.341: INFO: Got endpoints: latency-svc-bvxwd [2.419551356s]
Dec 31 13:44:25.353: INFO: Created: latency-svc-4kxrv
Dec 31 13:44:25.372: INFO: Got endpoints: latency-svc-4kxrv [2.270592122s]
Dec 31 13:44:25.394: INFO: Created: latency-svc-ch6c5
Dec 31 13:44:25.405: INFO: Got endpoints: latency-svc-ch6c5 [2.015767383s]
Dec 31 13:44:25.439: INFO: Created: latency-svc-mfscd
Dec 31 13:44:25.569: INFO: Got endpoints: latency-svc-mfscd [2.113136101s]
Dec 31 13:44:25.580: INFO: Created: latency-svc-xd9vf
Dec 31 13:44:25.619: INFO: Got endpoints: latency-svc-xd9vf [1.933901266s]
Dec 31 13:44:25.656: INFO: Created: latency-svc-w6vc8
Dec 31 13:44:25.658: INFO: Got endpoints: latency-svc-w6vc8 [1.916623638s]
Dec 31 13:44:25.778: INFO: Created: latency-svc-rrkm9
Dec 31 13:44:25.785: INFO: Got endpoints: latency-svc-rrkm9 [1.556429918s]
Dec 31 13:44:25.837: INFO: Created: latency-svc-p9tns
Dec 31 13:44:25.855: INFO: Got endpoints: latency-svc-p9tns [1.581895651s]
Dec 31 13:44:26.005: INFO: Created: latency-svc-6g6lk
Dec 31 13:44:26.011: INFO: Got endpoints: latency-svc-6g6lk [1.584597159s]
Dec 31 13:44:26.068: INFO: Created: latency-svc-zstgm
Dec 31 13:44:26.070: INFO: Got endpoints: latency-svc-zstgm [1.630005525s]
Dec 31 13:44:26.180: INFO: Created: latency-svc-4s9sd
Dec 31 13:44:26.195: INFO: Got endpoints: latency-svc-4s9sd [1.534424216s]
Dec 31 13:44:26.268: INFO: Created: latency-svc-ddqv7
Dec 31 13:44:26.340: INFO: Got endpoints: latency-svc-ddqv7 [1.648476903s]
Dec 31 13:44:26.369: INFO: Created: latency-svc-m2bdk
Dec 31 13:44:26.389: INFO: Got endpoints: latency-svc-m2bdk [1.469445418s]
Dec 31 13:44:26.532: INFO: Created: latency-svc-m6tdn
Dec 31 13:44:26.569: INFO: Got endpoints: latency-svc-m6tdn [1.637709885s]
Dec 31 13:44:26.742: INFO: Created: latency-svc-h2prz
Dec 31 13:44:26.799: INFO: Got endpoints: latency-svc-h2prz [1.647588225s]
Dec 31 13:44:26.815: INFO: Created: latency-svc-hs9sk
Dec 31 13:44:26.815: INFO: Got endpoints: latency-svc-hs9sk [1.473602122s]
Dec 31 13:44:27.039: INFO: Created: latency-svc-6jcv8
Dec 31 13:44:27.058: INFO: Got endpoints: latency-svc-6jcv8 [1.685854776s]
Dec 31 13:44:27.148: INFO: Created: latency-svc-mxt2z
Dec 31 13:44:27.167: INFO: Got endpoints: latency-svc-mxt2z [1.76234551s]
Dec 31 13:44:27.228: INFO: Created: latency-svc-j5dsf
Dec 31 13:44:27.245: INFO: Got endpoints: latency-svc-j5dsf [1.676487838s]
Dec 31 13:44:27.375: INFO: Created: latency-svc-s7h2p
Dec 31 13:44:27.417: INFO: Got endpoints: latency-svc-s7h2p [1.797460003s]
Dec 31 13:44:27.586: INFO: Created: latency-svc-rjzg9
Dec 31 13:44:27.636: INFO: Got endpoints: latency-svc-rjzg9 [1.9776913s]
Dec 31 13:44:27.677: INFO: Created: latency-svc-9hdq8
Dec 31 13:44:27.736: INFO: Got endpoints: latency-svc-9hdq8 [1.950135041s]
Dec 31 13:44:27.774: INFO: Created: latency-svc-6c7d2
Dec 31 13:44:27.806: INFO: Created: latency-svc-xn2xv
Dec 31 13:44:27.806: INFO: Got endpoints: latency-svc-6c7d2 [1.950117653s]
Dec 31 13:44:27.810: INFO: Got endpoints: latency-svc-xn2xv [1.79918995s]
Dec 31 13:44:27.921: INFO: Created: latency-svc-hdskx
Dec 31 13:44:27.944: INFO: Got endpoints: latency-svc-hdskx [1.874465274s]
Dec 31 13:44:27.980: INFO: Created: latency-svc-rcxl8
Dec 31 13:44:28.136: INFO: Got endpoints: latency-svc-rcxl8 [1.941241202s]
Dec 31 13:44:28.140: INFO: Created: latency-svc-77fk6
Dec 31 13:44:28.147: INFO: Got endpoints: latency-svc-77fk6 [1.807154226s]
Dec 31 13:44:28.217: INFO: Created: latency-svc-t2jgd
Dec 31 13:44:28.232: INFO: Got endpoints: latency-svc-t2jgd [1.842580592s]
Dec 31 13:44:28.376: INFO: Created: latency-svc-h7r98
Dec 31 13:44:28.383: INFO: Got endpoints: latency-svc-h7r98 [1.81334973s]
Dec 31 13:44:28.463: INFO: Created: latency-svc-bgdvg
Dec 31 13:44:28.533: INFO: Got endpoints: latency-svc-bgdvg [1.733803707s]
Dec 31 13:44:28.572: INFO: Created: latency-svc-hn5jr
Dec 31 13:44:28.579: INFO: Got endpoints: latency-svc-hn5jr [1.764525171s]
Dec 31 13:44:28.713: INFO: Created: latency-svc-644sz
Dec 31 13:44:28.718: INFO: Got endpoints: latency-svc-644sz [1.659784353s]
Dec 31 13:44:28.777: INFO: Created: latency-svc-xc5hc
Dec 31 13:44:28.789: INFO: Got endpoints: latency-svc-xc5hc [1.621570831s]
Dec 31 13:44:29.023: INFO: Created: latency-svc-dt8wl
Dec 31 13:44:29.110: INFO: Created: latency-svc-q964j
Dec 31 13:44:29.111: INFO: Got endpoints: latency-svc-dt8wl [1.865152963s]
Dec 31 13:44:29.120: INFO: Got endpoints: latency-svc-q964j [1.703584365s]
Dec 31 13:44:29.251: INFO: Created: latency-svc-xnbqw
Dec 31 13:44:29.262: INFO: Got endpoints: latency-svc-xnbqw [1.625889499s]
Dec 31 13:44:29.324: INFO: Created: latency-svc-s2jlm
Dec 31 13:44:29.331: INFO: Got endpoints: latency-svc-s2jlm [1.595106333s]
Dec 31 13:44:29.407: INFO: Created: latency-svc-2zgbp
Dec 31 13:44:29.464: INFO: Got endpoints: latency-svc-2zgbp [1.658315784s]
Dec 31 13:44:29.473: INFO: Created: latency-svc-b64fw
Dec 31 13:44:29.481: INFO: Got endpoints: latency-svc-b64fw [1.67090503s]
Dec 31 13:44:29.590: INFO: Created: latency-svc-xg7qq
Dec 31 13:44:29.598: INFO: Got endpoints: latency-svc-xg7qq [1.653392429s]
Dec 31 13:44:29.731: INFO: Created: latency-svc-cj6hx
Dec 31 13:44:29.773: INFO: Got endpoints: latency-svc-cj6hx [1.636303552s]
Dec 31 13:44:29.797: INFO: Created: latency-svc-z5jwn
Dec 31 13:44:29.871: INFO: Got endpoints: latency-svc-z5jwn [1.723397967s]
Dec 31 13:44:29.888: INFO: Created: latency-svc-t4nr2
Dec 31 13:44:29.902: INFO: Got endpoints: latency-svc-t4nr2 [1.67044748s]
Dec 31 13:44:29.950: INFO: Created: latency-svc-h8h2k
Dec 31 13:44:30.083: INFO: Got endpoints: latency-svc-h8h2k [1.699372292s]
Dec 31 13:44:30.119: INFO: Created: latency-svc-fpnlb
Dec 31 13:44:30.127: INFO: Got endpoints: latency-svc-fpnlb [1.594151652s]
Dec 31 13:44:30.173: INFO: Created: latency-svc-56n7v
Dec 31 13:44:30.251: INFO: Got endpoints: latency-svc-56n7v [1.671762878s]
Dec 31 13:44:30.287: INFO: Created: latency-svc-t46tf
Dec 31 13:44:30.290: INFO: Got endpoints: latency-svc-t46tf [1.570713102s]
Dec 31 13:44:30.324: INFO: Created: latency-svc-48j97
Dec 31 13:44:30.342: INFO: Got endpoints: latency-svc-48j97 [1.552295327s]
Dec 31 13:44:30.427: INFO: Created: latency-svc-mfgcm
Dec 31 13:44:30.431: INFO: Got endpoints: latency-svc-mfgcm [1.319982459s]
Dec 31 13:44:30.488: INFO: Created: latency-svc-kvmz7
Dec 31 13:44:30.597: INFO: Got endpoints: latency-svc-kvmz7 [1.477092882s]
Dec 31 13:44:30.648: INFO: Created: latency-svc-g8wfr
Dec 31 13:44:30.663: INFO: Got endpoints: latency-svc-g8wfr [1.400759783s]
Dec 31 13:44:30.794: INFO: Created: latency-svc-q2khm
Dec 31 13:44:30.802: INFO: Got endpoints: latency-svc-q2khm [1.470368447s]
Dec 31 13:44:30.844: INFO: Created: latency-svc-pcpdl
Dec 31 13:44:30.858: INFO: Got endpoints: latency-svc-pcpdl [1.393685026s]
Dec 31 13:44:31.097: INFO: Created: latency-svc-whq62
Dec 31 13:44:31.098: INFO: Got endpoints: latency-svc-whq62 [1.617317218s]
Dec 31 13:44:31.275: INFO: Created: latency-svc-fctz2
Dec 31 13:44:31.280: INFO: Got endpoints: latency-svc-fctz2 [1.681514518s]
Dec 31 13:44:31.442: INFO: Created: latency-svc-vn7w7
Dec 31 13:44:31.455: INFO: Got endpoints: latency-svc-vn7w7 [1.682398908s]
Dec 31 13:44:31.592: INFO: Created: latency-svc-bswh7
Dec 31 13:44:31.601: INFO: Got endpoints: latency-svc-bswh7 [1.729563792s]
Dec 31 13:44:31.656: INFO: Created: latency-svc-twr9d
Dec 31 13:44:31.666: INFO: Got endpoints: latency-svc-twr9d [1.763302575s]
Dec 31 13:44:31.815: INFO: Created: latency-svc-lb7c2
Dec 31 13:44:31.824: INFO: Got endpoints: latency-svc-lb7c2 [1.740500065s]
Dec 31 13:44:31.927: INFO: Created: latency-svc-7wdsq
Dec 31 13:44:31.929: INFO: Got endpoints: latency-svc-7wdsq [1.801516856s]
Dec 31 13:44:32.014: INFO: Created: latency-svc-xskxl
Dec 31 13:44:32.123: INFO: Got endpoints: latency-svc-xskxl [1.871409917s]
Dec 31 13:44:32.141: INFO: Created: latency-svc-d2975
Dec 31 13:44:32.172: INFO: Got endpoints: latency-svc-d2975 [1.882619656s]
Dec 31 13:44:32.177: INFO: Created: latency-svc-9qjgr
Dec 31 13:44:32.262: INFO: Created: latency-svc-bzw8b
Dec 31 13:44:32.271: INFO: Got endpoints: latency-svc-bzw8b [1.840148036s]
Dec 31 13:44:32.272: INFO: Got endpoints: latency-svc-9qjgr [1.929820351s]
Dec 31 13:44:32.309: INFO: Created: latency-svc-n5zzn
Dec 31 13:44:32.324: INFO: Got endpoints: latency-svc-n5zzn [1.726206286s]
Dec 31 13:44:32.419: INFO: Created: latency-svc-h5bdc
Dec 31 13:44:32.435: INFO: Got endpoints: latency-svc-h5bdc [1.772094247s]
Dec 31 13:44:32.482: INFO: Created: latency-svc-nrsfh
Dec 31 13:44:32.485: INFO: Got endpoints: latency-svc-nrsfh [1.682786492s]
Dec 31 13:44:32.690: INFO: Created: latency-svc-gcf7k
Dec 31 13:44:32.719: INFO: Got endpoints: latency-svc-gcf7k [1.861064023s]
Dec 31 13:44:32.940: INFO: Created: latency-svc-89cqd
Dec 31 13:44:32.967: INFO: Got endpoints: latency-svc-89cqd [1.868203455s]
Dec 31 13:44:33.113: INFO: Created: latency-svc-vpx4p
Dec 31 13:44:33.125: INFO: Got endpoints: latency-svc-vpx4p [1.844181216s]
Dec 31 13:44:33.191: INFO: Created: latency-svc-hrjnm
Dec 31 13:44:33.242: INFO: Got endpoints: latency-svc-hrjnm [1.786878099s]
Dec 31 13:44:33.281: INFO: Created: latency-svc-t4gmc
Dec 31 13:44:33.289: INFO: Got endpoints: latency-svc-t4gmc [1.687689838s]
Dec 31 13:44:33.344: INFO: Created: latency-svc-vcg62
Dec 31 13:44:33.381: INFO: Got endpoints: latency-svc-vcg62 [1.714816927s]
Dec 31 13:44:33.419: INFO: Created: latency-svc-thrjz
Dec 31 13:44:33.423: INFO: Got endpoints: latency-svc-thrjz [1.599064887s]
Dec 31 13:44:33.550: INFO: Created: latency-svc-6bcts
Dec 31 13:44:33.566: INFO: Got endpoints: latency-svc-6bcts [1.636982248s]
Dec 31 13:44:33.599: INFO: Created: latency-svc-c5qlt
Dec 31 13:44:33.613: INFO: Got endpoints: latency-svc-c5qlt [1.489896284s]
Dec 31 13:44:33.659: INFO: Created: latency-svc-b8dwn
Dec 31 13:44:33.710: INFO: Got endpoints: latency-svc-b8dwn [1.537513369s]
Dec 31 13:44:33.742: INFO: Created: latency-svc-fnshn
Dec 31 13:44:33.745: INFO: Got endpoints: latency-svc-fnshn [1.473426499s]
Dec 31 13:44:33.816: INFO: Created: latency-svc-4kz4z
Dec 31 13:44:33.876: INFO: Got endpoints: latency-svc-4kz4z [1.603864228s]
Dec 31 13:44:33.876: INFO: Latencies: [125.016505ms 265.52041ms 343.871287ms 444.826281ms 496.049997ms 595.71259ms 764.320545ms 815.908404ms 970.591552ms 1.134227981s 1.198057946s 1.209788655s 1.319982459s 1.324380754s 1.375920265s 1.393685026s 1.400759783s 1.469445418s 1.470368447s 1.473426499s 1.473602122s 1.477092882s 1.483355218s 1.489896284s 1.524010014s 1.534424216s 1.537513369s 1.552295327s 1.556429918s 1.56149285s 1.56163335s 1.570713102s 1.581895651s 1.584597159s 1.586031807s 1.59136447s 1.594151652s 1.594955326s 1.595106333s 1.599064887s 1.600784571s 1.603864228s 1.613849647s 1.617317218s 1.618036654s 1.618716849s 1.621570831s 1.623437778s 1.623897782s 1.625889499s 1.630005525s 1.632761633s 1.636303552s 1.636982248s 1.637709885s 1.640755085s 1.647588225s 1.648476903s 1.653392429s 1.658315784s 1.659784353s 1.660772192s 1.660796847s 1.663233613s 1.66777607s 1.67044748s 1.67090503s 1.671762878s 1.676487838s 1.679868074s 1.681514518s 1.682398908s 1.682786492s 1.685854776s 1.687689838s 1.688825008s 1.692086247s 1.692159483s 1.695417411s 1.697635284s 1.699372292s 1.703584365s 1.711456114s 1.713683602s 1.714816927s 1.716729185s 1.723397967s 1.726206286s 1.727007299s 1.729563792s 1.733308858s 1.733803707s 1.734201385s 1.738143542s 1.74010189s 1.740500065s 1.741487331s 1.743474318s 1.74697536s 1.751250809s 1.751693077s 1.757296948s 1.757958598s 1.759398106s 1.761046393s 1.76234551s 1.763302575s 1.764525171s 1.769533854s 1.772094247s 1.780259128s 1.780564523s 1.78067907s 1.781550396s 1.78292205s 1.786878099s 1.787831898s 1.797460003s 1.79918995s 1.801516856s 1.803991925s 1.805262912s 1.806979755s 1.807026002s 1.807154226s 1.811397344s 1.81334973s 1.816010684s 1.820702445s 1.821402427s 1.824708909s 1.827741734s 1.831023216s 1.831522332s 1.832173447s 1.833320391s 1.837434057s 1.839758227s 1.840148036s 1.842232635s 1.842580592s 1.844181216s 1.856904753s 1.861064023s 1.865152963s 1.865437908s 1.868203455s 1.870050419s 1.870953523s 1.871409917s 1.874465274s 1.877188344s 
1.877584205s 1.882619656s 1.888046714s 1.902195625s 1.915332382s 1.91647768s 1.916623638s 1.926804729s 1.927625481s 1.929820351s 1.933442665s 1.933901266s 1.935402339s 1.936709074s 1.939945799s 1.94064678s 1.941241202s 1.94996932s 1.950117653s 1.950135041s 1.95031338s 1.959659599s 1.962831693s 1.970574731s 1.977519313s 1.9776913s 1.984774051s 1.997505713s 2.015767383s 2.023965295s 2.024523804s 2.045485228s 2.052655136s 2.070048474s 2.090766472s 2.098932638s 2.108832562s 2.113136101s 2.125009213s 2.130491708s 2.131983627s 2.167657026s 2.199752381s 2.204022933s 2.231712854s 2.270592122s 2.297514428s 2.419551356s]
Dec 31 13:44:33.876: INFO: 50 %ile: 1.751693077s
Dec 31 13:44:33.876: INFO: 90 %ile: 2.015767383s
Dec 31 13:44:33.876: INFO: 99 %ile: 2.297514428s
Dec 31 13:44:33.876: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 13:44:33.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-8833" for this suite.
Dec 31 13:45:11.924: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:45:12.050: INFO: namespace svc-latency-8833 deletion completed in 38.154639178s

• [SLOW TEST:70.007 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 13:45:12.050: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Dec 31 13:45:21.182: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 13:45:21.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6891" for this suite.
Dec 31 13:45:27.252: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:45:27.403: INFO: namespace container-runtime-6891 deletion completed in 6.191755604s

• [SLOW TEST:15.353 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
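
The test above creates a container that fails without writing its termination-log file, then checks that the kubelet falls back to the log output ("DONE"). A minimal sketch of such a pod spec, in dict form — the pod name, image, and command are illustrative assumptions, not taken from the test binary:

```python
# Sketch of a pod exercising TerminationMessagePolicy=FallbackToLogsOnError.
# Names, image, and command are illustrative, not from the test run.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "termination-message-demo"},
    "spec": {
        "restartPolicy": "Never",
        "containers": [{
            "name": "main",
            "image": "busybox",
            # Fail without writing the termination-log file; with the
            # FallbackToLogsOnError policy the kubelet then uses the tail
            # of the container log ("DONE") as the termination message.
            "command": ["sh", "-c", "echo DONE; exit 1"],
            "terminationMessagePath": "/dev/termination-log",
            "terminationMessagePolicy": "FallbackToLogsOnError",
        }],
    },
}
```

With the default policy `File`, the termination message here would be empty, since the container never writes to `/dev/termination-log`.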
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 13:45:27.404: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Dec 31 13:45:47.724: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4033 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 31 13:45:47.724: INFO: >>> kubeConfig: /root/.kube/config
Dec 31 13:45:48.101: INFO: Exec stderr: ""
Dec 31 13:45:48.101: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4033 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 31 13:45:48.101: INFO: >>> kubeConfig: /root/.kube/config
Dec 31 13:45:48.491: INFO: Exec stderr: ""
Dec 31 13:45:48.491: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4033 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 31 13:45:48.491: INFO: >>> kubeConfig: /root/.kube/config
Dec 31 13:45:49.193: INFO: Exec stderr: ""
Dec 31 13:45:49.193: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4033 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 31 13:45:49.193: INFO: >>> kubeConfig: /root/.kube/config
Dec 31 13:45:49.475: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Dec 31 13:45:49.475: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4033 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 31 13:45:49.475: INFO: >>> kubeConfig: /root/.kube/config
Dec 31 13:45:49.777: INFO: Exec stderr: ""
Dec 31 13:45:49.777: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4033 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 31 13:45:49.777: INFO: >>> kubeConfig: /root/.kube/config
Dec 31 13:45:50.160: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Dec 31 13:45:50.160: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4033 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 31 13:45:50.160: INFO: >>> kubeConfig: /root/.kube/config
Dec 31 13:45:50.513: INFO: Exec stderr: ""
Dec 31 13:45:50.513: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4033 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 31 13:45:50.513: INFO: >>> kubeConfig: /root/.kube/config
Dec 31 13:45:50.857: INFO: Exec stderr: ""
Dec 31 13:45:50.857: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4033 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 31 13:45:50.857: INFO: >>> kubeConfig: /root/.kube/config
Dec 31 13:45:51.151: INFO: Exec stderr: ""
Dec 31 13:45:51.151: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4033 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 31 13:45:51.151: INFO: >>> kubeConfig: /root/.kube/config
Dec 31 13:45:51.403: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 13:45:51.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-4033" for this suite.
Dec 31 13:46:37.432: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:46:37.574: INFO: namespace e2e-kubelet-etc-hosts-4033 deletion completed in 46.158183549s

• [SLOW TEST:70.170 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
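
The three verification steps above (hostNetwork=false pod, container with its own /etc/hosts mount, hostNetwork=true pod) boil down to one rule. A minimal sketch of that rule as a predicate — an illustrative reconstruction of what the test checks, not the kubelet's actual code, and the container specs are hypothetical:

```python
def kubelet_manages_etc_hosts(pod_spec, container):
    """Illustrative sketch of the rule this test verifies: the kubelet
    injects its managed /etc/hosts only when the pod does not use the
    host network and the container does not mount /etc/hosts itself."""
    if pod_spec.get("hostNetwork", False):
        return False
    for mount in container.get("volumeMounts", []):
        if mount.get("mountPath") == "/etc/hosts":
            return False
    return True

# The three cases exercised above, as hypothetical specs:
plain_pod = {"hostNetwork": False}       # like test-pod
host_net_pod = {"hostNetwork": True}     # like test-host-network-pod
busybox_1 = {"name": "busybox-1"}        # no /etc/hosts mount
busybox_3 = {                            # mounts its own /etc/hosts
    "name": "busybox-3",
    "volumeMounts": [{"name": "host-etc-hosts", "mountPath": "/etc/hosts"}],
}
```

This is why the test also cats `/etc/hosts-original`: comparing the two files distinguishes the kubelet-managed content from the node's own file.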
SSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 13:46:37.574: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override arguments
Dec 31 13:46:37.660: INFO: Waiting up to 5m0s for pod "client-containers-ae67c4fb-d67c-4fce-8254-96dc7cae70ab" in namespace "containers-292" to be "success or failure"
Dec 31 13:46:37.727: INFO: Pod "client-containers-ae67c4fb-d67c-4fce-8254-96dc7cae70ab": Phase="Pending", Reason="", readiness=false. Elapsed: 67.738265ms
Dec 31 13:46:39.737: INFO: Pod "client-containers-ae67c4fb-d67c-4fce-8254-96dc7cae70ab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077471428s
Dec 31 13:46:41.758: INFO: Pod "client-containers-ae67c4fb-d67c-4fce-8254-96dc7cae70ab": Phase="Pending", Reason="", readiness=false. Elapsed: 4.098546763s
Dec 31 13:46:43.768: INFO: Pod "client-containers-ae67c4fb-d67c-4fce-8254-96dc7cae70ab": Phase="Pending", Reason="", readiness=false. Elapsed: 6.10834849s
Dec 31 13:46:45.780: INFO: Pod "client-containers-ae67c4fb-d67c-4fce-8254-96dc7cae70ab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.119758967s
STEP: Saw pod success
Dec 31 13:46:45.780: INFO: Pod "client-containers-ae67c4fb-d67c-4fce-8254-96dc7cae70ab" satisfied condition "success or failure"
Dec 31 13:46:45.784: INFO: Trying to get logs from node iruya-node pod client-containers-ae67c4fb-d67c-4fce-8254-96dc7cae70ab container test-container: 
STEP: delete the pod
Dec 31 13:46:45.965: INFO: Waiting for pod client-containers-ae67c4fb-d67c-4fce-8254-96dc7cae70ab to disappear
Dec 31 13:46:45.984: INFO: Pod client-containers-ae67c4fb-d67c-4fce-8254-96dc7cae70ab no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 13:46:45.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-292" for this suite.
Dec 31 13:46:52.024: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:46:52.200: INFO: namespace containers-292 deletion completed in 6.20801535s

• [SLOW TEST:14.626 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
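
The pod this test creates overrides the image's default arguments (Docker `CMD`) via the container's `args` field. A sketch of that shape — pod name, image, and the echoed text are illustrative assumptions:

```python
# Sketch of an override-arguments pod (names and text are illustrative).
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "client-containers-demo"},
    "spec": {
        "restartPolicy": "Never",
        "containers": [{
            "name": "test-container",
            "image": "busybox",
            # `args` replaces the image's default CMD; because `command`
            # is left unset, the image's ENTRYPOINT still runs.
            "args": ["echo", "override", "arguments"],
        }],
    },
}
```

Setting `command` as well would additionally override the image's `ENTRYPOINT`, which is a separate conformance case.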
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 13:46:52.202: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating replication controller my-hostname-basic-aa881b4e-6d42-4774-b4e4-9d60c171a479
Dec 31 13:46:52.353: INFO: Pod name my-hostname-basic-aa881b4e-6d42-4774-b4e4-9d60c171a479: Found 0 pods out of 1
Dec 31 13:46:57.364: INFO: Pod name my-hostname-basic-aa881b4e-6d42-4774-b4e4-9d60c171a479: Found 1 pods out of 1
Dec 31 13:46:57.364: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-aa881b4e-6d42-4774-b4e4-9d60c171a479" are running
Dec 31 13:47:01.382: INFO: Pod "my-hostname-basic-aa881b4e-6d42-4774-b4e4-9d60c171a479-k6dtn" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-31 13:46:52 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-31 13:46:52 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-aa881b4e-6d42-4774-b4e4-9d60c171a479]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-31 13:46:52 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-aa881b4e-6d42-4774-b4e4-9d60c171a479]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-31 13:46:52 +0000 UTC Reason: Message:}])
Dec 31 13:47:01.382: INFO: Trying to dial the pod
Dec 31 13:47:06.428: INFO: Controller my-hostname-basic-aa881b4e-6d42-4774-b4e4-9d60c171a479: Got expected result from replica 1 [my-hostname-basic-aa881b4e-6d42-4774-b4e4-9d60c171a479-k6dtn]: "my-hostname-basic-aa881b4e-6d42-4774-b4e4-9d60c171a479-k6dtn", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 13:47:06.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-6078" for this suite.
Dec 31 13:47:12.492: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:47:12.647: INFO: namespace replication-controller-6078 deletion completed in 6.210722101s

• [SLOW TEST:20.446 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
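
The ReplicationController created above looks roughly like this. The name is shortened (the run uses a UUID-suffixed one), and the image and port are assumptions; what the log does confirm is that each replica serves its own pod name back to the dialer:

```python
# Rough shape of the ReplicationController (name shortened; image/port
# are assumptions, not taken from the log).
rc = {
    "apiVersion": "v1",
    "kind": "ReplicationController",
    "metadata": {"name": "my-hostname-basic"},
    "spec": {
        "replicas": 1,
        # The selector must match the template labels, or the RC would
        # never count its own pods toward the replica total.
        "selector": {"name": "my-hostname-basic"},
        "template": {
            "metadata": {"labels": {"name": "my-hostname-basic"}},
            "spec": {
                "containers": [{
                    "name": "serve-hostname",
                    "image": "gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1",
                    "ports": [{"containerPort": 9376}],
                }],
            },
        },
    },
}
```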
SSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 13:47:12.648: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 31 13:47:12.741: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Dec 31 13:47:12.814: INFO: Pod name sample-pod: Found 0 pods out of 1
Dec 31 13:47:17.826: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Dec 31 13:47:21.842: INFO: Creating deployment "test-rolling-update-deployment"
Dec 31 13:47:21.848: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Dec 31 13:47:21.876: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Dec 31 13:47:23.905: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Dec 31 13:47:23.912: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713396842, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713396842, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713396842, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713396841, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 31 13:47:25.918: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713396842, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713396842, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713396842, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713396841, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 31 13:47:27.919: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713396842, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713396842, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713396842, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713396841, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 31 13:47:29.919: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Dec 31 13:47:29.930: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-4239,SelfLink:/apis/apps/v1/namespaces/deployment-4239/deployments/test-rolling-update-deployment,UID:6f6713c9-3f98-4802-8358-cc88f2a8d809,ResourceVersion:18775067,Generation:1,CreationTimestamp:2019-12-31 13:47:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2019-12-31 13:47:22 +0000 UTC 2019-12-31 13:47:22 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2019-12-31 13:47:29 +0000 UTC 2019-12-31 13:47:21 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Dec 31 13:47:29.933: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-4239,SelfLink:/apis/apps/v1/namespaces/deployment-4239/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:19a17ec8-059d-4026-aa79-ada71c2ba8a1,ResourceVersion:18775057,Generation:1,CreationTimestamp:2019-12-31 13:47:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 6f6713c9-3f98-4802-8358-cc88f2a8d809 0xc000d9e067 0xc000d9e068}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Dec 31 13:47:29.933: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Dec 31 13:47:29.933: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-4239,SelfLink:/apis/apps/v1/namespaces/deployment-4239/replicasets/test-rolling-update-controller,UID:e96f9475-de2c-40bf-ab0b-f39a87255125,ResourceVersion:18775066,Generation:2,CreationTimestamp:2019-12-31 13:47:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 6f6713c9-3f98-4802-8358-cc88f2a8d809 0xc0017d7f97 0xc0017d7f98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 31 13:47:29.936: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-r969j" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-r969j,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-4239,SelfLink:/api/v1/namespaces/deployment-4239/pods/test-rolling-update-deployment-79f6b9d75c-r969j,UID:2c49dfa7-1783-489d-93a3-10dd8b562993,ResourceVersion:18775056,Generation:0,CreationTimestamp:2019-12-31 13:47:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c 19a17ec8-059d-4026-aa79-ada71c2ba8a1 0xc000d9e9e7 0xc000d9e9e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-69424 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-69424,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-69424 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000d9ea60} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000d9ea80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:47:21 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:47:29 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:47:29 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 13:47:21 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2019-12-31 13:47:21 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2019-12-31 13:47:28 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://6e70426daa47bf5c5095ab415f95456dfe114500e477072e55da22aea439f602}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 13:47:29.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-4239" for this suite.
Dec 31 13:47:36.027: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:47:36.152: INFO: namespace deployment-4239 deletion completed in 6.211941779s

• [SLOW TEST:23.504 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
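
A sketch of "test-rolling-update-deployment" as created above. The labels, image, and the 25% surge/unavailability values come from the object dump in the log; the manifest shape itself is an illustrative reconstruction:

```python
# Illustrative reconstruction of the rolling-update Deployment.
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "test-rolling-update-deployment"},
    "spec": {
        "replicas": 1,
        "selector": {"matchLabels": {"name": "sample-pod"}},
        "strategy": {
            "type": "RollingUpdate",
            # With one replica, 25% rounds to at most 1 surge pod
            # (rounded up) and at most 0 unavailable pods (rounded down),
            # so the old pod is kept until the new one is Ready.
            "rollingUpdate": {"maxSurge": "25%", "maxUnavailable": "25%"},
        },
        "template": {
            "metadata": {"labels": {"name": "sample-pod"}},
            "spec": {
                "containers": [{
                    "name": "redis",
                    "image": "gcr.io/kubernetes-e2e-test-images/redis:1.0",
                }],
            },
        },
    },
}
```

Because the pre-existing replica set's pods match this selector, the Deployment adopts it as an old ReplicaSet and scales it to 0 during the rollout, which is exactly what the log verifies.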
SSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 13:47:36.152: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service endpoint-test2 in namespace services-2157
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2157 to expose endpoints map[]
Dec 31 13:47:36.320: INFO: Get endpoints failed (18.129836ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Dec 31 13:47:37.329: INFO: successfully validated that service endpoint-test2 in namespace services-2157 exposes endpoints map[] (1.026575097s elapsed)
STEP: Creating pod pod1 in namespace services-2157
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2157 to expose endpoints map[pod1:[80]]
Dec 31 13:47:41.624: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.261271175s elapsed, will retry)
Dec 31 13:47:45.698: INFO: successfully validated that service endpoint-test2 in namespace services-2157 exposes endpoints map[pod1:[80]] (8.335312105s elapsed)
STEP: Creating pod pod2 in namespace services-2157
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2157 to expose endpoints map[pod1:[80] pod2:[80]]
Dec 31 13:47:51.177: INFO: Unexpected endpoints: found map[16166183-88d1-417a-85e3-343677c34dd4:[80]], expected map[pod1:[80] pod2:[80]] (5.464743279s elapsed, will retry)
Dec 31 13:47:53.221: INFO: successfully validated that service endpoint-test2 in namespace services-2157 exposes endpoints map[pod1:[80] pod2:[80]] (7.508126529s elapsed)
STEP: Deleting pod pod1 in namespace services-2157
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2157 to expose endpoints map[pod2:[80]]
Dec 31 13:47:54.274: INFO: successfully validated that service endpoint-test2 in namespace services-2157 exposes endpoints map[pod2:[80]] (1.042697313s elapsed)
STEP: Deleting pod pod2 in namespace services-2157
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2157 to expose endpoints map[]
Dec 31 13:47:55.410: INFO: successfully validated that service endpoint-test2 in namespace services-2157 exposes endpoints map[] (1.130169175s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 13:47:55.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2157" for this suite.
Dec 31 13:48:17.981: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:48:18.122: INFO: namespace services-2157 deletion completed in 22.174976235s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:41.970 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 13:48:18.122: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Dec 31 13:48:26.324: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 13:48:26.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-3770" for this suite.
Dec 31 13:48:32.426: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:48:32.560: INFO: namespace container-runtime-3770 deletion completed in 6.162040187s

• [SLOW TEST:14.438 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 13:48:32.561: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-129.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-129.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-129.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-129.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 31 13:48:44.778: INFO: File wheezy_udp@dns-test-service-3.dns-129.svc.cluster.local from pod  dns-129/dns-test-bab6318b-ead8-47df-80f8-a2cdd0397132 contains '' instead of 'foo.example.com.'
Dec 31 13:48:44.785: INFO: File jessie_udp@dns-test-service-3.dns-129.svc.cluster.local from pod  dns-129/dns-test-bab6318b-ead8-47df-80f8-a2cdd0397132 contains '' instead of 'foo.example.com.'
Dec 31 13:48:44.785: INFO: Lookups using dns-129/dns-test-bab6318b-ead8-47df-80f8-a2cdd0397132 failed for: [wheezy_udp@dns-test-service-3.dns-129.svc.cluster.local jessie_udp@dns-test-service-3.dns-129.svc.cluster.local]

Dec 31 13:48:49.810: INFO: DNS probes using dns-test-bab6318b-ead8-47df-80f8-a2cdd0397132 succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-129.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-129.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-129.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-129.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 31 13:49:06.041: INFO: File wheezy_udp@dns-test-service-3.dns-129.svc.cluster.local from pod  dns-129/dns-test-6a3c31d5-0e7b-4338-b5c6-434563349f64 contains '' instead of 'bar.example.com.'
Dec 31 13:49:06.054: INFO: File jessie_udp@dns-test-service-3.dns-129.svc.cluster.local from pod  dns-129/dns-test-6a3c31d5-0e7b-4338-b5c6-434563349f64 contains '' instead of 'bar.example.com.'
Dec 31 13:49:06.054: INFO: Lookups using dns-129/dns-test-6a3c31d5-0e7b-4338-b5c6-434563349f64 failed for: [wheezy_udp@dns-test-service-3.dns-129.svc.cluster.local jessie_udp@dns-test-service-3.dns-129.svc.cluster.local]

Dec 31 13:49:11.065: INFO: File wheezy_udp@dns-test-service-3.dns-129.svc.cluster.local from pod  dns-129/dns-test-6a3c31d5-0e7b-4338-b5c6-434563349f64 contains 'foo.example.com.
' instead of 'bar.example.com.'
Dec 31 13:49:11.070: INFO: File jessie_udp@dns-test-service-3.dns-129.svc.cluster.local from pod  dns-129/dns-test-6a3c31d5-0e7b-4338-b5c6-434563349f64 contains 'foo.example.com.
' instead of 'bar.example.com.'
Dec 31 13:49:11.070: INFO: Lookups using dns-129/dns-test-6a3c31d5-0e7b-4338-b5c6-434563349f64 failed for: [wheezy_udp@dns-test-service-3.dns-129.svc.cluster.local jessie_udp@dns-test-service-3.dns-129.svc.cluster.local]

Dec 31 13:49:16.076: INFO: DNS probes using dns-test-6a3c31d5-0e7b-4338-b5c6-434563349f64 succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-129.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-129.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-129.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-129.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 31 13:49:30.522: INFO: File wheezy_udp@dns-test-service-3.dns-129.svc.cluster.local from pod  dns-129/dns-test-0b04e12e-03aa-4778-b376-6ec098de96b8 contains '' instead of '10.108.26.60'
Dec 31 13:49:30.535: INFO: File jessie_udp@dns-test-service-3.dns-129.svc.cluster.local from pod  dns-129/dns-test-0b04e12e-03aa-4778-b376-6ec098de96b8 contains '' instead of '10.108.26.60'
Dec 31 13:49:30.535: INFO: Lookups using dns-129/dns-test-0b04e12e-03aa-4778-b376-6ec098de96b8 failed for: [wheezy_udp@dns-test-service-3.dns-129.svc.cluster.local jessie_udp@dns-test-service-3.dns-129.svc.cluster.local]

Dec 31 13:49:35.555: INFO: DNS probes using dns-test-0b04e12e-03aa-4778-b376-6ec098de96b8 succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 13:49:35.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-129" for this suite.
Dec 31 13:49:43.846: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:49:43.982: INFO: namespace dns-129 deletion completed in 8.265761668s

• [SLOW TEST:71.422 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 13:49:43.983: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-b489ae77-5ba7-485d-a3ed-633272739591
STEP: Creating configMap with name cm-test-opt-upd-747d9890-c08c-46d8-af39-fc84376ecdd9
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-b489ae77-5ba7-485d-a3ed-633272739591
STEP: Updating configmap cm-test-opt-upd-747d9890-c08c-46d8-af39-fc84376ecdd9
STEP: Creating configMap with name cm-test-opt-create-3413d718-053e-454b-ac7a-1115ba245f79
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 13:50:00.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8139" for this suite.
Dec 31 13:50:24.620: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:50:24.766: INFO: namespace configmap-8139 deletion completed in 24.204360098s

• [SLOW TEST:40.783 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 13:50:24.767: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Dec 31 13:50:25.025: INFO: Waiting up to 5m0s for pod "downward-api-541a4d00-aa84-4270-b93b-6a823c3600fa" in namespace "downward-api-8484" to be "success or failure"
Dec 31 13:50:25.200: INFO: Pod "downward-api-541a4d00-aa84-4270-b93b-6a823c3600fa": Phase="Pending", Reason="", readiness=false. Elapsed: 175.282982ms
Dec 31 13:50:27.208: INFO: Pod "downward-api-541a4d00-aa84-4270-b93b-6a823c3600fa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.183359189s
Dec 31 13:50:29.218: INFO: Pod "downward-api-541a4d00-aa84-4270-b93b-6a823c3600fa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.19287569s
Dec 31 13:50:31.226: INFO: Pod "downward-api-541a4d00-aa84-4270-b93b-6a823c3600fa": Phase="Pending", Reason="", readiness=false. Elapsed: 6.20160525s
Dec 31 13:50:33.234: INFO: Pod "downward-api-541a4d00-aa84-4270-b93b-6a823c3600fa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.2095659s
STEP: Saw pod success
Dec 31 13:50:33.235: INFO: Pod "downward-api-541a4d00-aa84-4270-b93b-6a823c3600fa" satisfied condition "success or failure"
Dec 31 13:50:33.239: INFO: Trying to get logs from node iruya-node pod downward-api-541a4d00-aa84-4270-b93b-6a823c3600fa container dapi-container: 
STEP: delete the pod
Dec 31 13:50:33.313: INFO: Waiting for pod downward-api-541a4d00-aa84-4270-b93b-6a823c3600fa to disappear
Dec 31 13:50:33.321: INFO: Pod downward-api-541a4d00-aa84-4270-b93b-6a823c3600fa no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 13:50:33.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8484" for this suite.
Dec 31 13:50:39.385: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:50:39.587: INFO: namespace downward-api-8484 deletion completed in 6.258987269s

• [SLOW TEST:14.821 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 13:50:39.588: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 31 13:50:39.739: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0630d055-4941-46e3-916a-641e71856187" in namespace "projected-411" to be "success or failure"
Dec 31 13:50:39.762: INFO: Pod "downwardapi-volume-0630d055-4941-46e3-916a-641e71856187": Phase="Pending", Reason="", readiness=false. Elapsed: 23.543477ms
Dec 31 13:50:41.771: INFO: Pod "downwardapi-volume-0630d055-4941-46e3-916a-641e71856187": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032646571s
Dec 31 13:50:43.784: INFO: Pod "downwardapi-volume-0630d055-4941-46e3-916a-641e71856187": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045461873s
Dec 31 13:50:45.815: INFO: Pod "downwardapi-volume-0630d055-4941-46e3-916a-641e71856187": Phase="Pending", Reason="", readiness=false. Elapsed: 6.076002349s
Dec 31 13:50:47.829: INFO: Pod "downwardapi-volume-0630d055-4941-46e3-916a-641e71856187": Phase="Pending", Reason="", readiness=false. Elapsed: 8.08996766s
Dec 31 13:50:49.840: INFO: Pod "downwardapi-volume-0630d055-4941-46e3-916a-641e71856187": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.101152931s
STEP: Saw pod success
Dec 31 13:50:49.840: INFO: Pod "downwardapi-volume-0630d055-4941-46e3-916a-641e71856187" satisfied condition "success or failure"
Dec 31 13:50:49.844: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-0630d055-4941-46e3-916a-641e71856187 container client-container: 
STEP: delete the pod
Dec 31 13:50:50.006: INFO: Waiting for pod downwardapi-volume-0630d055-4941-46e3-916a-641e71856187 to disappear
Dec 31 13:50:50.018: INFO: Pod downwardapi-volume-0630d055-4941-46e3-916a-641e71856187 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 13:50:50.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-411" for this suite.
Dec 31 13:50:56.096: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:50:56.229: INFO: namespace projected-411 deletion completed in 6.204583412s

• [SLOW TEST:16.641 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 13:50:56.229: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 31 13:50:56.415: INFO: Waiting up to 5m0s for pod "downwardapi-volume-67c1f85a-88ef-4397-ad0c-e902c835bc0a" in namespace "downward-api-4692" to be "success or failure"
Dec 31 13:50:56.428: INFO: Pod "downwardapi-volume-67c1f85a-88ef-4397-ad0c-e902c835bc0a": Phase="Pending", Reason="", readiness=false. Elapsed: 12.414033ms
Dec 31 13:50:58.452: INFO: Pod "downwardapi-volume-67c1f85a-88ef-4397-ad0c-e902c835bc0a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036395294s
Dec 31 13:51:00.462: INFO: Pod "downwardapi-volume-67c1f85a-88ef-4397-ad0c-e902c835bc0a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046680445s
Dec 31 13:51:02.479: INFO: Pod "downwardapi-volume-67c1f85a-88ef-4397-ad0c-e902c835bc0a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063601868s
Dec 31 13:51:04.492: INFO: Pod "downwardapi-volume-67c1f85a-88ef-4397-ad0c-e902c835bc0a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.076222823s
STEP: Saw pod success
Dec 31 13:51:04.492: INFO: Pod "downwardapi-volume-67c1f85a-88ef-4397-ad0c-e902c835bc0a" satisfied condition "success or failure"
Dec 31 13:51:04.498: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-67c1f85a-88ef-4397-ad0c-e902c835bc0a container client-container: 
STEP: delete the pod
Dec 31 13:51:04.736: INFO: Waiting for pod downwardapi-volume-67c1f85a-88ef-4397-ad0c-e902c835bc0a to disappear
Dec 31 13:51:04.784: INFO: Pod downwardapi-volume-67c1f85a-88ef-4397-ad0c-e902c835bc0a no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 13:51:04.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4692" for this suite.
Dec 31 13:51:10.946: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:51:11.047: INFO: namespace downward-api-4692 deletion completed in 6.248092783s

• [SLOW TEST:14.817 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 13:51:11.047: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-767/configmap-test-f625e866-ae6d-470a-ab40-1e06388cac55
STEP: Creating a pod to test consume configMaps
Dec 31 13:51:11.161: INFO: Waiting up to 5m0s for pod "pod-configmaps-2e74713d-8b35-48c8-bf7b-ec5097350727" in namespace "configmap-767" to be "success or failure"
Dec 31 13:51:11.167: INFO: Pod "pod-configmaps-2e74713d-8b35-48c8-bf7b-ec5097350727": Phase="Pending", Reason="", readiness=false. Elapsed: 6.891984ms
Dec 31 13:51:13.180: INFO: Pod "pod-configmaps-2e74713d-8b35-48c8-bf7b-ec5097350727": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01991693s
Dec 31 13:51:15.232: INFO: Pod "pod-configmaps-2e74713d-8b35-48c8-bf7b-ec5097350727": Phase="Pending", Reason="", readiness=false. Elapsed: 4.071519034s
Dec 31 13:51:17.239: INFO: Pod "pod-configmaps-2e74713d-8b35-48c8-bf7b-ec5097350727": Phase="Pending", Reason="", readiness=false. Elapsed: 6.078586377s
Dec 31 13:51:19.248: INFO: Pod "pod-configmaps-2e74713d-8b35-48c8-bf7b-ec5097350727": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.08791s
STEP: Saw pod success
Dec 31 13:51:19.249: INFO: Pod "pod-configmaps-2e74713d-8b35-48c8-bf7b-ec5097350727" satisfied condition "success or failure"
Dec 31 13:51:19.252: INFO: Trying to get logs from node iruya-node pod pod-configmaps-2e74713d-8b35-48c8-bf7b-ec5097350727 container env-test: 
STEP: delete the pod
Dec 31 13:51:19.359: INFO: Waiting for pod pod-configmaps-2e74713d-8b35-48c8-bf7b-ec5097350727 to disappear
Dec 31 13:51:19.365: INFO: Pod pod-configmaps-2e74713d-8b35-48c8-bf7b-ec5097350727 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 13:51:19.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-767" for this suite.
Dec 31 13:51:25.395: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:51:25.528: INFO: namespace configmap-767 deletion completed in 6.156192731s

• [SLOW TEST:14.481 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 13:51:25.529: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 31 13:51:25.656: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1ed4917e-8cdf-4d28-8dee-f46cf079ff1a" in namespace "projected-5774" to be "success or failure"
Dec 31 13:51:25.679: INFO: Pod "downwardapi-volume-1ed4917e-8cdf-4d28-8dee-f46cf079ff1a": Phase="Pending", Reason="", readiness=false. Elapsed: 22.614236ms
Dec 31 13:51:27.689: INFO: Pod "downwardapi-volume-1ed4917e-8cdf-4d28-8dee-f46cf079ff1a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033160716s
Dec 31 13:51:29.696: INFO: Pod "downwardapi-volume-1ed4917e-8cdf-4d28-8dee-f46cf079ff1a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040014582s
Dec 31 13:51:31.706: INFO: Pod "downwardapi-volume-1ed4917e-8cdf-4d28-8dee-f46cf079ff1a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049464497s
Dec 31 13:51:33.750: INFO: Pod "downwardapi-volume-1ed4917e-8cdf-4d28-8dee-f46cf079ff1a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.094021977s
Dec 31 13:51:35.763: INFO: Pod "downwardapi-volume-1ed4917e-8cdf-4d28-8dee-f46cf079ff1a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.106670355s
STEP: Saw pod success
Dec 31 13:51:35.763: INFO: Pod "downwardapi-volume-1ed4917e-8cdf-4d28-8dee-f46cf079ff1a" satisfied condition "success or failure"
Dec 31 13:51:35.768: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-1ed4917e-8cdf-4d28-8dee-f46cf079ff1a container client-container: 
STEP: delete the pod
Dec 31 13:51:35.833: INFO: Waiting for pod downwardapi-volume-1ed4917e-8cdf-4d28-8dee-f46cf079ff1a to disappear
Dec 31 13:51:35.840: INFO: Pod downwardapi-volume-1ed4917e-8cdf-4d28-8dee-f46cf079ff1a no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 13:51:35.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5774" for this suite.
Dec 31 13:51:41.908: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:51:42.059: INFO: namespace projected-5774 deletion completed in 6.207706859s

• [SLOW TEST:16.530 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 13:51:42.059: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 13:51:48.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-3431" for this suite.
Dec 31 13:51:54.475: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:51:54.632: INFO: namespace namespaces-3431 deletion completed in 6.226861815s
STEP: Destroying namespace "nsdeletetest-2286" for this suite.
Dec 31 13:51:54.635: INFO: Namespace nsdeletetest-2286 was already deleted
STEP: Destroying namespace "nsdeletetest-3950" for this suite.
Dec 31 13:52:00.694: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:52:00.844: INFO: namespace nsdeletetest-3950 deletion completed in 6.208908272s

• [SLOW TEST:18.785 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
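For reference, this spec relies on namespace deletion garbage-collecting everything inside the namespace. A minimal sketch of the kind of Service it creates in a throwaway namespace before deleting it (the namespace name appears in the log above; the Service name, selector, and ports are illustrative assumptions, not the test's actual manifest):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: test-service            # illustrative name
  namespace: nsdeletetest-2286  # throwaway namespace from the log above
spec:
  selector:
    app: test                   # illustrative selector
  ports:
    - port: 80
      targetPort: 8080
```

Once the namespace deletion completes, recreating a namespace with the same name should show no Services in it, which is what the "Verifying there is no service in the namespace" step checks.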
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 13:52:00.844: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-1737674f-1427-4575-9ef4-9f9f62882bd9
STEP: Creating a pod to test consume configMaps
Dec 31 13:52:01.638: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-32b8665c-d0a0-4676-9518-1cd046f16d64" in namespace "projected-3034" to be "success or failure"
Dec 31 13:52:01.643: INFO: Pod "pod-projected-configmaps-32b8665c-d0a0-4676-9518-1cd046f16d64": Phase="Pending", Reason="", readiness=false. Elapsed: 5.096825ms
Dec 31 13:52:03.675: INFO: Pod "pod-projected-configmaps-32b8665c-d0a0-4676-9518-1cd046f16d64": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036848576s
Dec 31 13:52:05.688: INFO: Pod "pod-projected-configmaps-32b8665c-d0a0-4676-9518-1cd046f16d64": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049298875s
Dec 31 13:52:07.699: INFO: Pod "pod-projected-configmaps-32b8665c-d0a0-4676-9518-1cd046f16d64": Phase="Pending", Reason="", readiness=false. Elapsed: 6.061135536s
Dec 31 13:52:09.713: INFO: Pod "pod-projected-configmaps-32b8665c-d0a0-4676-9518-1cd046f16d64": Phase="Pending", Reason="", readiness=false. Elapsed: 8.074356514s
Dec 31 13:52:11.723: INFO: Pod "pod-projected-configmaps-32b8665c-d0a0-4676-9518-1cd046f16d64": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.084783977s
STEP: Saw pod success
Dec 31 13:52:11.723: INFO: Pod "pod-projected-configmaps-32b8665c-d0a0-4676-9518-1cd046f16d64" satisfied condition "success or failure"
Dec 31 13:52:11.729: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-32b8665c-d0a0-4676-9518-1cd046f16d64 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 31 13:52:11.841: INFO: Waiting for pod pod-projected-configmaps-32b8665c-d0a0-4676-9518-1cd046f16d64 to disappear
Dec 31 13:52:11.854: INFO: Pod pod-projected-configmaps-32b8665c-d0a0-4676-9518-1cd046f16d64 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 13:52:11.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3034" for this suite.
Dec 31 13:52:17.928: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:52:18.131: INFO: namespace projected-3034 deletion completed in 6.239339873s

• [SLOW TEST:17.286 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
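A hedged sketch of the kind of pod this spec creates: a non-root container consuming a ConfigMap through a projected volume. The ConfigMap and container names come from the log; the image, command, mount path, and UID are illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example   # the real pod name is generated
spec:
  securityContext:
    runAsUser: 1000                        # non-root, per the [LinuxOnly] non-root variant
  restartPolicy: Never
  containers:
    - name: projected-configmap-volume-test
      image: busybox
      command: ["cat", "/etc/projected-configmap-volume/data-1"]
      volumeMounts:
        - name: projected-configmap-volume
          mountPath: /etc/projected-configmap-volume
  volumes:
    - name: projected-configmap-volume
      projected:
        sources:
          - configMap:
              name: projected-configmap-test-volume-1737674f-1427-4575-9ef4-9f9f62882bd9
```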
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 13:52:18.132: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Dec 31 13:52:18.205: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Dec 31 13:52:18.215: INFO: Waiting for terminating namespaces to be deleted...
Dec 31 13:52:18.220: INFO: 
Logging pods the kubelet thinks are on node iruya-node before test
Dec 31 13:52:18.232: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container status recorded)
Dec 31 13:52:18.232: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 31 13:52:18.232: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Dec 31 13:52:18.232: INFO: 	Container weave ready: true, restart count 0
Dec 31 13:52:18.232: INFO: 	Container weave-npc ready: true, restart count 0
Dec 31 13:52:18.232: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Dec 31 13:52:18.244: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Dec 31 13:52:18.245: INFO: 	Container coredns ready: true, restart count 0
Dec 31 13:52:18.245: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container status recorded)
Dec 31 13:52:18.245: INFO: 	Container kube-scheduler ready: true, restart count 10
Dec 31 13:52:18.245: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Dec 31 13:52:18.245: INFO: 	Container weave ready: true, restart count 0
Dec 31 13:52:18.245: INFO: 	Container weave-npc ready: true, restart count 0
Dec 31 13:52:18.245: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Dec 31 13:52:18.245: INFO: 	Container coredns ready: true, restart count 0
Dec 31 13:52:18.245: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container status recorded)
Dec 31 13:52:18.245: INFO: 	Container etcd ready: true, restart count 0
Dec 31 13:52:18.245: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container status recorded)
Dec 31 13:52:18.245: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 31 13:52:18.245: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container status recorded)
Dec 31 13:52:18.245: INFO: 	Container kube-controller-manager ready: true, restart count 14
Dec 31 13:52:18.245: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container status recorded)
Dec 31 13:52:18.245: INFO: 	Container kube-apiserver ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15e5790c5106dd97], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 13:52:19.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-3312" for this suite.
Dec 31 13:52:25.321: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:52:25.453: INFO: namespace sched-pred-3312 deletion completed in 6.162770571s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:7.322 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
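The FailedScheduling event above comes from a pod whose nodeSelector matches no node. A minimal sketch of such a pod (the pod name matches the event in the log; the image and the selector key/value are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod        # matches the FailedScheduling event above
spec:
  containers:
    - name: pause
      image: k8s.gcr.io/pause:3.1
  nodeSelector:
    env: nonexistent          # no node carries this label, so scheduling fails
```

With two schedulable nodes and no matching label, the scheduler emits exactly the event seen above: "0/2 nodes are available: 2 node(s) didn't match node selector."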
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 13:52:25.454: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Dec 31 13:52:25.609: INFO: Waiting up to 5m0s for pod "downward-api-921fddb1-30ac-42fd-9b0c-746432e5ad3d" in namespace "downward-api-5875" to be "success or failure"
Dec 31 13:52:25.629: INFO: Pod "downward-api-921fddb1-30ac-42fd-9b0c-746432e5ad3d": Phase="Pending", Reason="", readiness=false. Elapsed: 20.191387ms
Dec 31 13:52:27.641: INFO: Pod "downward-api-921fddb1-30ac-42fd-9b0c-746432e5ad3d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031991734s
Dec 31 13:52:29.657: INFO: Pod "downward-api-921fddb1-30ac-42fd-9b0c-746432e5ad3d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047841053s
Dec 31 13:52:31.668: INFO: Pod "downward-api-921fddb1-30ac-42fd-9b0c-746432e5ad3d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059187594s
Dec 31 13:52:33.680: INFO: Pod "downward-api-921fddb1-30ac-42fd-9b0c-746432e5ad3d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.070653148s
STEP: Saw pod success
Dec 31 13:52:33.680: INFO: Pod "downward-api-921fddb1-30ac-42fd-9b0c-746432e5ad3d" satisfied condition "success or failure"
Dec 31 13:52:33.685: INFO: Trying to get logs from node iruya-node pod downward-api-921fddb1-30ac-42fd-9b0c-746432e5ad3d container dapi-container: 
STEP: delete the pod
Dec 31 13:52:33.797: INFO: Waiting for pod downward-api-921fddb1-30ac-42fd-9b0c-746432e5ad3d to disappear
Dec 31 13:52:33.830: INFO: Pod downward-api-921fddb1-30ac-42fd-9b0c-746432e5ad3d no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 13:52:33.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5875" for this suite.
Dec 31 13:52:39.880: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:52:40.013: INFO: namespace downward-api-5875 deletion completed in 6.17225976s

• [SLOW TEST:14.559 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
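A sketch of the downward-API wiring this spec exercises: pod name, namespace, and IP exposed as environment variables via fieldRef. The container name comes from the log; the image, command, and variable names are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-example   # the real pod name is generated
spec:
  restartPolicy: Never
  containers:
    - name: dapi-container
      image: busybox
      command: ["sh", "-c", "env"]
      env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
```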
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 13:52:40.014: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Dec 31 13:52:40.197: INFO: Waiting up to 5m0s for pod "pod-96be9177-3c7f-4ab0-9a3a-ec78be081700" in namespace "emptydir-4352" to be "success or failure"
Dec 31 13:52:40.209: INFO: Pod "pod-96be9177-3c7f-4ab0-9a3a-ec78be081700": Phase="Pending", Reason="", readiness=false. Elapsed: 11.92121ms
Dec 31 13:52:42.222: INFO: Pod "pod-96be9177-3c7f-4ab0-9a3a-ec78be081700": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024378287s
Dec 31 13:52:44.229: INFO: Pod "pod-96be9177-3c7f-4ab0-9a3a-ec78be081700": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031789747s
Dec 31 13:52:46.240: INFO: Pod "pod-96be9177-3c7f-4ab0-9a3a-ec78be081700": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04316072s
Dec 31 13:52:48.250: INFO: Pod "pod-96be9177-3c7f-4ab0-9a3a-ec78be081700": Phase="Pending", Reason="", readiness=false. Elapsed: 8.052998154s
Dec 31 13:52:50.261: INFO: Pod "pod-96be9177-3c7f-4ab0-9a3a-ec78be081700": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.06357416s
STEP: Saw pod success
Dec 31 13:52:50.261: INFO: Pod "pod-96be9177-3c7f-4ab0-9a3a-ec78be081700" satisfied condition "success or failure"
Dec 31 13:52:50.268: INFO: Trying to get logs from node iruya-node pod pod-96be9177-3c7f-4ab0-9a3a-ec78be081700 container test-container: 
STEP: delete the pod
Dec 31 13:52:50.405: INFO: Waiting for pod pod-96be9177-3c7f-4ab0-9a3a-ec78be081700 to disappear
Dec 31 13:52:50.415: INFO: Pod pod-96be9177-3c7f-4ab0-9a3a-ec78be081700 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 13:52:50.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4352" for this suite.
Dec 31 13:52:56.476: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:52:56.606: INFO: namespace emptydir-4352 deletion completed in 6.161363245s

• [SLOW TEST:16.593 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
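A sketch of the (non-root, 0777, default-medium) emptyDir case: a non-root container writing into a node-disk-backed emptyDir and inspecting its mode. The container name comes from the log; the image, command, mount path, and UID are illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-example   # the real pod name is generated
spec:
  securityContext:
    runAsUser: 1000             # non-root variant
  restartPolicy: Never
  containers:
    - name: test-container
      image: busybox
      command: ["sh", "-c", "ls -ld /test-volume && touch /test-volume/f"]
      volumeMounts:
        - name: test-volume
          mountPath: /test-volume
  volumes:
    - name: test-volume
      emptyDir: {}              # default medium: backed by node storage
```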
------------------------------
SSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 13:52:56.607: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-d8bd8a5b-163b-4da8-9e63-3d68530d389b
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 13:53:06.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8582" for this suite.
Dec 31 13:53:28.809: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:53:28.960: INFO: namespace configmap-8582 deletion completed in 22.167074261s

• [SLOW TEST:32.354 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
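The "binary data" case rests on the ConfigMap binaryData field, which carries base64-encoded bytes alongside plain-text data. A minimal sketch (names and values are illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-binary-example
data:
  text-data: "hello"          # plain UTF-8 value
binaryData:
  binary-file: //4AAQ==       # arbitrary bytes, base64-encoded
```

Both keys surface as files when the ConfigMap is mounted as a volume, which is what the two "Waiting for pod with ... data" steps above verify.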
------------------------------
S
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 13:53:28.960: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name projected-secret-test-02fae388-62d1-41ab-b968-c34adc9c0798
STEP: Creating a pod to test consume secrets
Dec 31 13:53:29.124: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9d93c0f5-6260-4fc1-be00-4f5231a2de5d" in namespace "projected-9672" to be "success or failure"
Dec 31 13:53:29.138: INFO: Pod "pod-projected-secrets-9d93c0f5-6260-4fc1-be00-4f5231a2de5d": Phase="Pending", Reason="", readiness=false. Elapsed: 13.797466ms
Dec 31 13:53:31.150: INFO: Pod "pod-projected-secrets-9d93c0f5-6260-4fc1-be00-4f5231a2de5d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025563278s
Dec 31 13:53:33.157: INFO: Pod "pod-projected-secrets-9d93c0f5-6260-4fc1-be00-4f5231a2de5d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033301788s
Dec 31 13:53:35.166: INFO: Pod "pod-projected-secrets-9d93c0f5-6260-4fc1-be00-4f5231a2de5d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042117977s
Dec 31 13:53:37.173: INFO: Pod "pod-projected-secrets-9d93c0f5-6260-4fc1-be00-4f5231a2de5d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.049464676s
Dec 31 13:53:39.182: INFO: Pod "pod-projected-secrets-9d93c0f5-6260-4fc1-be00-4f5231a2de5d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.058004833s
STEP: Saw pod success
Dec 31 13:53:39.182: INFO: Pod "pod-projected-secrets-9d93c0f5-6260-4fc1-be00-4f5231a2de5d" satisfied condition "success or failure"
Dec 31 13:53:39.186: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-9d93c0f5-6260-4fc1-be00-4f5231a2de5d container secret-volume-test: 
STEP: delete the pod
Dec 31 13:53:39.334: INFO: Waiting for pod pod-projected-secrets-9d93c0f5-6260-4fc1-be00-4f5231a2de5d to disappear
Dec 31 13:53:39.347: INFO: Pod pod-projected-secrets-9d93c0f5-6260-4fc1-be00-4f5231a2de5d no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 13:53:39.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9672" for this suite.
Dec 31 13:53:45.381: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:53:45.562: INFO: namespace projected-9672 deletion completed in 6.206735207s

• [SLOW TEST:16.601 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
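A sketch of one Secret projected into two separate volumes of the same pod, which is the "multiple volumes" shape this spec checks. The Secret and container names come from the log; the image, command, and mount paths are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example   # the real pod name is generated
spec:
  restartPolicy: Never
  containers:
    - name: secret-volume-test
      image: busybox
      command: ["sh", "-c", "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"]
      volumeMounts:
        - name: secret-volume-1
          mountPath: /etc/secret-volume-1
        - name: secret-volume-2
          mountPath: /etc/secret-volume-2
  volumes:
    - name: secret-volume-1
      projected:
        sources:
          - secret:
              name: projected-secret-test-02fae388-62d1-41ab-b968-c34adc9c0798
    - name: secret-volume-2
      projected:
        sources:
          - secret:
              name: projected-secret-test-02fae388-62d1-41ab-b968-c34adc9c0798
```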
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 13:53:45.563: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Dec 31 13:53:54.425: INFO: Successfully updated pod "labelsupdate310f7858-c3f3-4aa6-a1f7-1f07a89712db"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 13:53:56.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9883" for this suite.
Dec 31 13:54:18.725: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:54:18.834: INFO: namespace downward-api-9883 deletion completed in 22.184496449s

• [SLOW TEST:33.272 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
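The label-update case relies on a downwardAPI volume, whose files the kubelet refreshes when pod metadata changes. A sketch of the wiring (pod name, label, image, and paths are illustrative assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate-example   # the real pod name is generated
  labels:
    key1: value1               # later changed, e.g. via `kubectl label --overwrite`
spec:
  containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
      volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
  volumes:
    - name: podinfo
      downwardAPI:
        items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels
```

After the labels are updated, the mounted labels file is eventually rewritten, which is what the "Successfully updated pod" line above reflects.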
------------------------------
SSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 13:54:18.835: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 13:54:19.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1188" for this suite.
Dec 31 13:54:25.367: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:54:25.516: INFO: namespace kubelet-test-1188 deletion completed in 6.256646562s

• [SLOW TEST:6.682 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 13:54:25.517: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-8ec0dcca-883d-4a72-a24d-8d1e7e8a3910
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-8ec0dcca-883d-4a72-a24d-8d1e7e8a3910
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 13:55:45.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8749" for this suite.
Dec 31 13:56:07.543: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:56:07.681: INFO: namespace configmap-8749 deletion completed in 22.270281655s

• [SLOW TEST:102.165 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
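The update case mounts a ConfigMap as a plain volume; after the object is updated, the kubelet eventually rewrites the mounted files, which explains the long "waiting to observe update in volume" step. A sketch of the mount (the ConfigMap name comes from the log; the rest is illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: configmap-volume-example   # the real pod name is generated
spec:
  containers:
    - name: configmap-volume-test
      image: busybox
      command: ["sh", "-c", "while true; do cat /etc/configmap-volume/data-1; sleep 5; done"]
      volumeMounts:
        - name: configmap-volume
          mountPath: /etc/configmap-volume
  volumes:
    - name: configmap-volume
      configMap:
        name: configmap-test-upd-8ec0dcca-883d-4a72-a24d-8d1e7e8a3910
```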
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 13:56:07.682: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-5199934f-292f-46c2-90de-7a8a946374d0
STEP: Creating secret with name s-test-opt-upd-85d3b7ef-026e-4167-9d9a-1e1685adbabc
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-5199934f-292f-46c2-90de-7a8a946374d0
STEP: Updating secret s-test-opt-upd-85d3b7ef-026e-4167-9d9a-1e1685adbabc
STEP: Creating secret with name s-test-opt-create-a13e68ae-9894-441d-a66e-12e1e347e756
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 13:57:33.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7980" for this suite.
Dec 31 13:57:55.619: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:57:55.688: INFO: namespace secrets-7980 deletion completed in 22.097224439s

• [SLOW TEST:108.006 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
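The "optional updates" case mounts Secrets with optional: true, so the pod tolerates a referenced Secret being deleted (or created) after startup, matching the delete/update/create steps above. A sketch of one such volume (the Secret name comes from the log; the pod, image, command, and paths are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: optional-secrets-example   # the real pod name is generated
spec:
  containers:
    - name: dels-volume-test
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: del-secret
          mountPath: /etc/secret-volumes/del
  volumes:
    - name: del-secret
      secret:
        secretName: s-test-opt-del-5199934f-292f-46c2-90de-7a8a946374d0
        optional: true             # deleting the Secret must not break the pod
```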
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 13:57:55.688: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Dec 31 13:57:55.865: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3971,SelfLink:/api/v1/namespaces/watch-3971/configmaps/e2e-watch-test-configmap-a,UID:7573cd05-629d-4945-b057-fbf0e03f5798,ResourceVersion:18776570,Generation:0,CreationTimestamp:2019-12-31 13:57:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 31 13:57:55.866: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3971,SelfLink:/api/v1/namespaces/watch-3971/configmaps/e2e-watch-test-configmap-a,UID:7573cd05-629d-4945-b057-fbf0e03f5798,ResourceVersion:18776570,Generation:0,CreationTimestamp:2019-12-31 13:57:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Dec 31 13:58:05.888: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3971,SelfLink:/api/v1/namespaces/watch-3971/configmaps/e2e-watch-test-configmap-a,UID:7573cd05-629d-4945-b057-fbf0e03f5798,ResourceVersion:18776584,Generation:0,CreationTimestamp:2019-12-31 13:57:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Dec 31 13:58:05.889: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3971,SelfLink:/api/v1/namespaces/watch-3971/configmaps/e2e-watch-test-configmap-a,UID:7573cd05-629d-4945-b057-fbf0e03f5798,ResourceVersion:18776584,Generation:0,CreationTimestamp:2019-12-31 13:57:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Dec 31 13:58:15.905: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3971,SelfLink:/api/v1/namespaces/watch-3971/configmaps/e2e-watch-test-configmap-a,UID:7573cd05-629d-4945-b057-fbf0e03f5798,ResourceVersion:18776599,Generation:0,CreationTimestamp:2019-12-31 13:57:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 31 13:58:15.905: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3971,SelfLink:/api/v1/namespaces/watch-3971/configmaps/e2e-watch-test-configmap-a,UID:7573cd05-629d-4945-b057-fbf0e03f5798,ResourceVersion:18776599,Generation:0,CreationTimestamp:2019-12-31 13:57:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Dec 31 13:58:25.922: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3971,SelfLink:/api/v1/namespaces/watch-3971/configmaps/e2e-watch-test-configmap-a,UID:7573cd05-629d-4945-b057-fbf0e03f5798,ResourceVersion:18776613,Generation:0,CreationTimestamp:2019-12-31 13:57:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 31 13:58:25.923: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-3971,SelfLink:/api/v1/namespaces/watch-3971/configmaps/e2e-watch-test-configmap-a,UID:7573cd05-629d-4945-b057-fbf0e03f5798,ResourceVersion:18776613,Generation:0,CreationTimestamp:2019-12-31 13:57:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Dec 31 13:58:35.938: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-3971,SelfLink:/api/v1/namespaces/watch-3971/configmaps/e2e-watch-test-configmap-b,UID:725d5d6a-5069-49a9-bcc4-2df0cf687fc8,ResourceVersion:18776627,Generation:0,CreationTimestamp:2019-12-31 13:58:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 31 13:58:35.938: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-3971,SelfLink:/api/v1/namespaces/watch-3971/configmaps/e2e-watch-test-configmap-b,UID:725d5d6a-5069-49a9-bcc4-2df0cf687fc8,ResourceVersion:18776627,Generation:0,CreationTimestamp:2019-12-31 13:58:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Dec 31 13:58:45.961: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-3971,SelfLink:/api/v1/namespaces/watch-3971/configmaps/e2e-watch-test-configmap-b,UID:725d5d6a-5069-49a9-bcc4-2df0cf687fc8,ResourceVersion:18776642,Generation:0,CreationTimestamp:2019-12-31 13:58:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 31 13:58:45.961: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-3971,SelfLink:/api/v1/namespaces/watch-3971/configmaps/e2e-watch-test-configmap-b,UID:725d5d6a-5069-49a9-bcc4-2df0cf687fc8,ResourceVersion:18776642,Generation:0,CreationTimestamp:2019-12-31 13:58:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 13:58:55.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3971" for this suite.
Dec 31 13:59:02.003: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:59:02.223: INFO: namespace watch-3971 deletion completed in 6.251208948s

• [SLOW TEST:66.535 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 13:59:02.224: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-vbgv
STEP: Creating a pod to test atomic-volume-subpath
Dec 31 13:59:02.463: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-vbgv" in namespace "subpath-7297" to be "success or failure"
Dec 31 13:59:02.468: INFO: Pod "pod-subpath-test-configmap-vbgv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.416587ms
Dec 31 13:59:04.479: INFO: Pod "pod-subpath-test-configmap-vbgv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01531336s
Dec 31 13:59:06.708: INFO: Pod "pod-subpath-test-configmap-vbgv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.244826035s
Dec 31 13:59:08.726: INFO: Pod "pod-subpath-test-configmap-vbgv": Phase="Pending", Reason="", readiness=false. Elapsed: 6.262806848s
Dec 31 13:59:10.738: INFO: Pod "pod-subpath-test-configmap-vbgv": Phase="Pending", Reason="", readiness=false. Elapsed: 8.274079239s
Dec 31 13:59:12.746: INFO: Pod "pod-subpath-test-configmap-vbgv": Phase="Running", Reason="", readiness=true. Elapsed: 10.282388154s
Dec 31 13:59:14.753: INFO: Pod "pod-subpath-test-configmap-vbgv": Phase="Running", Reason="", readiness=true. Elapsed: 12.28922216s
Dec 31 13:59:16.762: INFO: Pod "pod-subpath-test-configmap-vbgv": Phase="Running", Reason="", readiness=true. Elapsed: 14.298083107s
Dec 31 13:59:18.775: INFO: Pod "pod-subpath-test-configmap-vbgv": Phase="Running", Reason="", readiness=true. Elapsed: 16.311184693s
Dec 31 13:59:20.785: INFO: Pod "pod-subpath-test-configmap-vbgv": Phase="Running", Reason="", readiness=true. Elapsed: 18.32114034s
Dec 31 13:59:22.793: INFO: Pod "pod-subpath-test-configmap-vbgv": Phase="Running", Reason="", readiness=true. Elapsed: 20.329220774s
Dec 31 13:59:24.801: INFO: Pod "pod-subpath-test-configmap-vbgv": Phase="Running", Reason="", readiness=true. Elapsed: 22.337123969s
Dec 31 13:59:26.812: INFO: Pod "pod-subpath-test-configmap-vbgv": Phase="Running", Reason="", readiness=true. Elapsed: 24.348787463s
Dec 31 13:59:28.829: INFO: Pod "pod-subpath-test-configmap-vbgv": Phase="Running", Reason="", readiness=true. Elapsed: 26.36543786s
Dec 31 13:59:30.836: INFO: Pod "pod-subpath-test-configmap-vbgv": Phase="Running", Reason="", readiness=true. Elapsed: 28.372656192s
Dec 31 13:59:32.846: INFO: Pod "pod-subpath-test-configmap-vbgv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.382374596s
STEP: Saw pod success
Dec 31 13:59:32.846: INFO: Pod "pod-subpath-test-configmap-vbgv" satisfied condition "success or failure"
Dec 31 13:59:32.851: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-vbgv container test-container-subpath-configmap-vbgv: 
STEP: delete the pod
Dec 31 13:59:32.960: INFO: Waiting for pod pod-subpath-test-configmap-vbgv to disappear
Dec 31 13:59:32.968: INFO: Pod pod-subpath-test-configmap-vbgv no longer exists
STEP: Deleting pod pod-subpath-test-configmap-vbgv
Dec 31 13:59:32.968: INFO: Deleting pod "pod-subpath-test-configmap-vbgv" in namespace "subpath-7297"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 13:59:32.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-7297" for this suite.
Dec 31 13:59:38.996: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:59:39.201: INFO: namespace subpath-7297 deletion completed in 6.225348133s

• [SLOW TEST:36.977 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 13:59:39.201: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 13:59:44.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-8668" for this suite.
Dec 31 13:59:50.939: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 13:59:51.042: INFO: namespace watch-8668 deletion completed in 6.179159936s

• [SLOW TEST:11.841 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
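The property this test checks — every watch, regardless of the resourceVersion it starts from, observes the remaining events in the same order — can be sketched over an in-memory event log. This is a hypothetical illustration of the guarantee, not the apiserver's watch-cache implementation.

```go
package main

import "fmt"

type event struct {
	rv   int
	verb string
}

// watchFrom replays every event whose resourceVersion is greater than rv,
// in log order -- the ordering guarantee the test exercises by starting
// concurrent watches from each resource version of the produced events.
func watchFrom(log []event, rv int) []event {
	var out []event
	for _, e := range log {
		if e.rv > rv {
			out = append(out, e)
		}
	}
	return out
}

func main() {
	log := []event{{1, "ADDED"}, {2, "MODIFIED"}, {3, "MODIFIED"}, {4, "DELETED"}}
	// A watch started from each resource version must see the remaining
	// events in exactly the same relative order as every other watch.
	for _, start := range []int{0, 1, 2, 3} {
		fmt.Printf("watch from rv=%d: %v\n", start, watchFrom(log, start))
	}
}
```

Each later starting point yields a strict suffix of the earlier ones, which is what "receive events ... in same order" means here.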
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 13:59:51.042: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 31 13:59:51.187: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b25a9801-32b7-4758-9f00-92bb7662911a" in namespace "downward-api-1933" to be "success or failure"
Dec 31 13:59:51.205: INFO: Pod "downwardapi-volume-b25a9801-32b7-4758-9f00-92bb7662911a": Phase="Pending", Reason="", readiness=false. Elapsed: 17.136501ms
Dec 31 13:59:53.215: INFO: Pod "downwardapi-volume-b25a9801-32b7-4758-9f00-92bb7662911a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027887015s
Dec 31 13:59:55.224: INFO: Pod "downwardapi-volume-b25a9801-32b7-4758-9f00-92bb7662911a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036667157s
Dec 31 13:59:57.236: INFO: Pod "downwardapi-volume-b25a9801-32b7-4758-9f00-92bb7662911a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048794194s
Dec 31 13:59:59.251: INFO: Pod "downwardapi-volume-b25a9801-32b7-4758-9f00-92bb7662911a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.063137206s
Dec 31 14:00:01.262: INFO: Pod "downwardapi-volume-b25a9801-32b7-4758-9f00-92bb7662911a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.074726449s
STEP: Saw pod success
Dec 31 14:00:01.262: INFO: Pod "downwardapi-volume-b25a9801-32b7-4758-9f00-92bb7662911a" satisfied condition "success or failure"
Dec 31 14:00:01.266: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-b25a9801-32b7-4758-9f00-92bb7662911a container client-container: 
STEP: delete the pod
Dec 31 14:00:01.460: INFO: Waiting for pod downwardapi-volume-b25a9801-32b7-4758-9f00-92bb7662911a to disappear
Dec 31 14:00:01.505: INFO: Pod downwardapi-volume-b25a9801-32b7-4758-9f00-92bb7662911a no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 14:00:01.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1933" for this suite.
Dec 31 14:00:07.648: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 14:00:07.910: INFO: namespace downward-api-1933 deletion completed in 6.393432438s

• [SLOW TEST:16.868 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 14:00:07.911: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 14:00:16.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-4674" for this suite.
Dec 31 14:01:08.178: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 14:01:08.309: INFO: namespace kubelet-test-4674 deletion completed in 52.154629158s

• [SLOW TEST:60.398 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 14:01:08.310: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6908.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-6908.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6908.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-6908.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6908.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-6908.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6908.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-6908.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6908.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-6908.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6908.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-6908.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6908.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 149.216.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.216.149_udp@PTR;check="$$(dig +tcp +noall +answer +search 149.216.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.216.149_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6908.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-6908.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6908.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-6908.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6908.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-6908.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6908.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-6908.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6908.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-6908.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6908.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-6908.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6908.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 149.216.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.216.149_udp@PTR;check="$$(dig +tcp +noall +answer +search 149.216.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.216.149_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 31 14:01:20.686: INFO: Unable to read wheezy_udp@dns-test-service.dns-6908.svc.cluster.local from pod dns-6908/dns-test-cff88364-8c38-4f6f-889e-d098af134936: the server could not find the requested resource (get pods dns-test-cff88364-8c38-4f6f-889e-d098af134936)
Dec 31 14:01:20.692: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6908.svc.cluster.local from pod dns-6908/dns-test-cff88364-8c38-4f6f-889e-d098af134936: the server could not find the requested resource (get pods dns-test-cff88364-8c38-4f6f-889e-d098af134936)
Dec 31 14:01:20.698: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6908.svc.cluster.local from pod dns-6908/dns-test-cff88364-8c38-4f6f-889e-d098af134936: the server could not find the requested resource (get pods dns-test-cff88364-8c38-4f6f-889e-d098af134936)
Dec 31 14:01:20.703: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6908.svc.cluster.local from pod dns-6908/dns-test-cff88364-8c38-4f6f-889e-d098af134936: the server could not find the requested resource (get pods dns-test-cff88364-8c38-4f6f-889e-d098af134936)
Dec 31 14:01:20.708: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.dns-6908.svc.cluster.local from pod dns-6908/dns-test-cff88364-8c38-4f6f-889e-d098af134936: the server could not find the requested resource (get pods dns-test-cff88364-8c38-4f6f-889e-d098af134936)
Dec 31 14:01:20.712: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.dns-6908.svc.cluster.local from pod dns-6908/dns-test-cff88364-8c38-4f6f-889e-d098af134936: the server could not find the requested resource (get pods dns-test-cff88364-8c38-4f6f-889e-d098af134936)
Dec 31 14:01:20.717: INFO: Unable to read wheezy_udp@PodARecord from pod dns-6908/dns-test-cff88364-8c38-4f6f-889e-d098af134936: the server could not find the requested resource (get pods dns-test-cff88364-8c38-4f6f-889e-d098af134936)
Dec 31 14:01:20.725: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-6908/dns-test-cff88364-8c38-4f6f-889e-d098af134936: the server could not find the requested resource (get pods dns-test-cff88364-8c38-4f6f-889e-d098af134936)
Dec 31 14:01:20.731: INFO: Unable to read 10.102.216.149_udp@PTR from pod dns-6908/dns-test-cff88364-8c38-4f6f-889e-d098af134936: the server could not find the requested resource (get pods dns-test-cff88364-8c38-4f6f-889e-d098af134936)
Dec 31 14:01:20.744: INFO: Unable to read 10.102.216.149_tcp@PTR from pod dns-6908/dns-test-cff88364-8c38-4f6f-889e-d098af134936: the server could not find the requested resource (get pods dns-test-cff88364-8c38-4f6f-889e-d098af134936)
Dec 31 14:01:20.752: INFO: Unable to read jessie_udp@dns-test-service.dns-6908.svc.cluster.local from pod dns-6908/dns-test-cff88364-8c38-4f6f-889e-d098af134936: the server could not find the requested resource (get pods dns-test-cff88364-8c38-4f6f-889e-d098af134936)
Dec 31 14:01:20.776: INFO: Unable to read jessie_tcp@dns-test-service.dns-6908.svc.cluster.local from pod dns-6908/dns-test-cff88364-8c38-4f6f-889e-d098af134936: the server could not find the requested resource (get pods dns-test-cff88364-8c38-4f6f-889e-d098af134936)
Dec 31 14:01:20.791: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6908.svc.cluster.local from pod dns-6908/dns-test-cff88364-8c38-4f6f-889e-d098af134936: the server could not find the requested resource (get pods dns-test-cff88364-8c38-4f6f-889e-d098af134936)
Dec 31 14:01:20.797: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6908.svc.cluster.local from pod dns-6908/dns-test-cff88364-8c38-4f6f-889e-d098af134936: the server could not find the requested resource (get pods dns-test-cff88364-8c38-4f6f-889e-d098af134936)
Dec 31 14:01:20.802: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.dns-6908.svc.cluster.local from pod dns-6908/dns-test-cff88364-8c38-4f6f-889e-d098af134936: the server could not find the requested resource (get pods dns-test-cff88364-8c38-4f6f-889e-d098af134936)
Dec 31 14:01:20.813: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.dns-6908.svc.cluster.local from pod dns-6908/dns-test-cff88364-8c38-4f6f-889e-d098af134936: the server could not find the requested resource (get pods dns-test-cff88364-8c38-4f6f-889e-d098af134936)
Dec 31 14:01:20.821: INFO: Unable to read jessie_udp@PodARecord from pod dns-6908/dns-test-cff88364-8c38-4f6f-889e-d098af134936: the server could not find the requested resource (get pods dns-test-cff88364-8c38-4f6f-889e-d098af134936)
Dec 31 14:01:20.825: INFO: Unable to read jessie_tcp@PodARecord from pod dns-6908/dns-test-cff88364-8c38-4f6f-889e-d098af134936: the server could not find the requested resource (get pods dns-test-cff88364-8c38-4f6f-889e-d098af134936)
Dec 31 14:01:20.831: INFO: Unable to read 10.102.216.149_udp@PTR from pod dns-6908/dns-test-cff88364-8c38-4f6f-889e-d098af134936: the server could not find the requested resource (get pods dns-test-cff88364-8c38-4f6f-889e-d098af134936)
Dec 31 14:01:20.835: INFO: Unable to read 10.102.216.149_tcp@PTR from pod dns-6908/dns-test-cff88364-8c38-4f6f-889e-d098af134936: the server could not find the requested resource (get pods dns-test-cff88364-8c38-4f6f-889e-d098af134936)
Dec 31 14:01:20.835: INFO: Lookups using dns-6908/dns-test-cff88364-8c38-4f6f-889e-d098af134936 failed for: [wheezy_udp@dns-test-service.dns-6908.svc.cluster.local wheezy_tcp@dns-test-service.dns-6908.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6908.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6908.svc.cluster.local wheezy_udp@_http._tcp.test-service-2.dns-6908.svc.cluster.local wheezy_tcp@_http._tcp.test-service-2.dns-6908.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.102.216.149_udp@PTR 10.102.216.149_tcp@PTR jessie_udp@dns-test-service.dns-6908.svc.cluster.local jessie_tcp@dns-test-service.dns-6908.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6908.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6908.svc.cluster.local jessie_udp@_http._tcp.test-service-2.dns-6908.svc.cluster.local jessie_tcp@_http._tcp.test-service-2.dns-6908.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord 10.102.216.149_udp@PTR 10.102.216.149_tcp@PTR]

Dec 31 14:01:25.943: INFO: DNS probes using dns-6908/dns-test-cff88364-8c38-4f6f-889e-d098af134936 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 14:01:26.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6908" for this suite.
Dec 31 14:01:32.360: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 14:01:32.498: INFO: namespace dns-6908 deletion completed in 6.195256252s

• [SLOW TEST:24.188 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 14:01:32.498: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name secret-emptykey-test-0aca4cd3-cd6a-4c7d-94f8-5b0f20f92cbe
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 14:01:33.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5745" for this suite.
Dec 31 14:01:39.068: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 14:01:39.235: INFO: namespace secrets-5745 deletion completed in 6.20031484s

• [SLOW TEST:6.737 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 14:01:39.238: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Dec 31 14:01:39.388: INFO: Waiting up to 5m0s for pod "pod-db7f3517-ff4d-4782-ab1d-729d39b30c36" in namespace "emptydir-9229" to be "success or failure"
Dec 31 14:01:39.415: INFO: Pod "pod-db7f3517-ff4d-4782-ab1d-729d39b30c36": Phase="Pending", Reason="", readiness=false. Elapsed: 27.155108ms
Dec 31 14:01:41.453: INFO: Pod "pod-db7f3517-ff4d-4782-ab1d-729d39b30c36": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065181431s
Dec 31 14:01:43.462: INFO: Pod "pod-db7f3517-ff4d-4782-ab1d-729d39b30c36": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074807938s
Dec 31 14:01:45.472: INFO: Pod "pod-db7f3517-ff4d-4782-ab1d-729d39b30c36": Phase="Pending", Reason="", readiness=false. Elapsed: 6.084118856s
Dec 31 14:01:47.496: INFO: Pod "pod-db7f3517-ff4d-4782-ab1d-729d39b30c36": Phase="Pending", Reason="", readiness=false. Elapsed: 8.108137159s
Dec 31 14:01:49.504: INFO: Pod "pod-db7f3517-ff4d-4782-ab1d-729d39b30c36": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.116177349s
STEP: Saw pod success
Dec 31 14:01:49.504: INFO: Pod "pod-db7f3517-ff4d-4782-ab1d-729d39b30c36" satisfied condition "success or failure"
Dec 31 14:01:49.509: INFO: Trying to get logs from node iruya-node pod pod-db7f3517-ff4d-4782-ab1d-729d39b30c36 container test-container: 
STEP: delete the pod
Dec 31 14:01:49.661: INFO: Waiting for pod pod-db7f3517-ff4d-4782-ab1d-729d39b30c36 to disappear
Dec 31 14:01:49.671: INFO: Pod pod-db7f3517-ff4d-4782-ab1d-729d39b30c36 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 14:01:49.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9229" for this suite.
Dec 31 14:01:55.751: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 14:01:55.900: INFO: namespace emptydir-9229 deletion completed in 6.198956151s

• [SLOW TEST:16.662 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 14:01:55.900: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating server pod server in namespace prestop-957
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-957
STEP: Deleting pre-stop pod
Dec 31 14:02:17.088: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 14:02:17.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-957" for this suite.
Dec 31 14:02:55.252: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 14:02:55.385: INFO: namespace prestop-957 deletion completed in 38.2734505s

• [SLOW TEST:59.486 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 14:02:55.386: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-8bb835f3-d5db-41a4-a8fd-033e1dc82c8d
STEP: Creating a pod to test consume secrets
Dec 31 14:02:55.463: INFO: Waiting up to 5m0s for pod "pod-secrets-d51c75c3-47bc-4392-826d-a9564caf3371" in namespace "secrets-5520" to be "success or failure"
Dec 31 14:02:55.545: INFO: Pod "pod-secrets-d51c75c3-47bc-4392-826d-a9564caf3371": Phase="Pending", Reason="", readiness=false. Elapsed: 82.342114ms
Dec 31 14:02:57.554: INFO: Pod "pod-secrets-d51c75c3-47bc-4392-826d-a9564caf3371": Phase="Pending", Reason="", readiness=false. Elapsed: 2.090906528s
Dec 31 14:02:59.574: INFO: Pod "pod-secrets-d51c75c3-47bc-4392-826d-a9564caf3371": Phase="Pending", Reason="", readiness=false. Elapsed: 4.111577383s
Dec 31 14:03:01.583: INFO: Pod "pod-secrets-d51c75c3-47bc-4392-826d-a9564caf3371": Phase="Pending", Reason="", readiness=false. Elapsed: 6.120655718s
Dec 31 14:03:03.594: INFO: Pod "pod-secrets-d51c75c3-47bc-4392-826d-a9564caf3371": Phase="Pending", Reason="", readiness=false. Elapsed: 8.130977544s
Dec 31 14:03:05.603: INFO: Pod "pod-secrets-d51c75c3-47bc-4392-826d-a9564caf3371": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.140354238s
STEP: Saw pod success
Dec 31 14:03:05.603: INFO: Pod "pod-secrets-d51c75c3-47bc-4392-826d-a9564caf3371" satisfied condition "success or failure"
Dec 31 14:03:05.606: INFO: Trying to get logs from node iruya-node pod pod-secrets-d51c75c3-47bc-4392-826d-a9564caf3371 container secret-volume-test: 
STEP: delete the pod
Dec 31 14:03:05.774: INFO: Waiting for pod pod-secrets-d51c75c3-47bc-4392-826d-a9564caf3371 to disappear
Dec 31 14:03:05.788: INFO: Pod pod-secrets-d51c75c3-47bc-4392-826d-a9564caf3371 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 14:03:05.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5520" for this suite.
Dec 31 14:03:11.832: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 14:03:11.982: INFO: namespace secrets-5520 deletion completed in 6.18670735s

• [SLOW TEST:16.597 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 14:03:11.984: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service multi-endpoint-test in namespace services-1003
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1003 to expose endpoints map[]
Dec 31 14:03:12.294: INFO: successfully validated that service multi-endpoint-test in namespace services-1003 exposes endpoints map[] (116.265522ms elapsed)
STEP: Creating pod pod1 in namespace services-1003
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1003 to expose endpoints map[pod1:[100]]
Dec 31 14:03:16.452: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.1067586s elapsed, will retry)
Dec 31 14:03:19.492: INFO: successfully validated that service multi-endpoint-test in namespace services-1003 exposes endpoints map[pod1:[100]] (7.146310395s elapsed)
STEP: Creating pod pod2 in namespace services-1003
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1003 to expose endpoints map[pod1:[100] pod2:[101]]
Dec 31 14:03:23.926: INFO: Unexpected endpoints: found map[630481d6-13e9-478d-9385-e8215f107615:[100]], expected map[pod1:[100] pod2:[101]] (4.419534297s elapsed, will retry)
Dec 31 14:03:27.607: INFO: successfully validated that service multi-endpoint-test in namespace services-1003 exposes endpoints map[pod1:[100] pod2:[101]] (8.100606363s elapsed)
STEP: Deleting pod pod1 in namespace services-1003
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1003 to expose endpoints map[pod2:[101]]
Dec 31 14:03:28.691: INFO: successfully validated that service multi-endpoint-test in namespace services-1003 exposes endpoints map[pod2:[101]] (1.063691386s elapsed)
STEP: Deleting pod pod2 in namespace services-1003
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1003 to expose endpoints map[]
Dec 31 14:03:29.758: INFO: successfully validated that service multi-endpoint-test in namespace services-1003 exposes endpoints map[] (1.054817401s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 14:03:30.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1003" for this suite.
Dec 31 14:03:53.069: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 14:03:53.189: INFO: namespace services-1003 deletion completed in 22.28789236s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:41.205 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 14:03:53.190: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179
[It] should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 14:03:53.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3831" for this suite.
Dec 31 14:04:15.453: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 14:04:15.542: INFO: namespace pods-3831 deletion completed in 22.123921799s

• [SLOW TEST:22.352 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 14:04:15.544: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Dec 31 14:04:25.803: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-d52d3c63-a71c-4272-9191-401bffbcbf99,GenerateName:,Namespace:events-2369,SelfLink:/api/v1/namespaces/events-2369/pods/send-events-d52d3c63-a71c-4272-9191-401bffbcbf99,UID:5cf30094-9721-46f4-a9ba-d4608e5a8611,ResourceVersion:18777526,Generation:0,CreationTimestamp:2019-12-31 14:04:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 688423572,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4rpnn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4rpnn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-4rpnn true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001f0ebc0} {node.kubernetes.io/unreachable Exists  NoExecute 
0xc001f0ec00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 14:04:15 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 14:04:24 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 14:04:24 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 14:04:15 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2019-12-31 14:04:15 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2019-12-31 14:04:22 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://be21b04e1a27f77e9e91d93c7ffc1dd139bc55901bbe6bfe3b999b13cef41f52}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Dec 31 14:04:27.812: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Dec 31 14:04:29.829: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 14:04:29.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-2369" for this suite.
Dec 31 14:05:07.989: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 14:05:08.064: INFO: namespace events-2369 deletion completed in 38.15650826s

• [SLOW TEST:52.521 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 14:05:08.064: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 31 14:05:08.140: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6c957260-375b-436d-a06b-180986dcff9f" in namespace "projected-6805" to be "success or failure"
Dec 31 14:05:08.181: INFO: Pod "downwardapi-volume-6c957260-375b-436d-a06b-180986dcff9f": Phase="Pending", Reason="", readiness=false. Elapsed: 40.748087ms
Dec 31 14:05:10.190: INFO: Pod "downwardapi-volume-6c957260-375b-436d-a06b-180986dcff9f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049764107s
Dec 31 14:05:12.206: INFO: Pod "downwardapi-volume-6c957260-375b-436d-a06b-180986dcff9f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.066251084s
Dec 31 14:05:14.225: INFO: Pod "downwardapi-volume-6c957260-375b-436d-a06b-180986dcff9f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.084792376s
Dec 31 14:05:16.247: INFO: Pod "downwardapi-volume-6c957260-375b-436d-a06b-180986dcff9f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.107291683s
STEP: Saw pod success
Dec 31 14:05:16.248: INFO: Pod "downwardapi-volume-6c957260-375b-436d-a06b-180986dcff9f" satisfied condition "success or failure"
Dec 31 14:05:16.259: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-6c957260-375b-436d-a06b-180986dcff9f container client-container: 
STEP: delete the pod
Dec 31 14:05:16.400: INFO: Waiting for pod downwardapi-volume-6c957260-375b-436d-a06b-180986dcff9f to disappear
Dec 31 14:05:16.412: INFO: Pod downwardapi-volume-6c957260-375b-436d-a06b-180986dcff9f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 14:05:16.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6805" for this suite.
Dec 31 14:05:22.519: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 14:05:22.664: INFO: namespace projected-6805 deletion completed in 6.243715353s

• [SLOW TEST:14.599 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 14:05:22.664: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Dec 31 14:05:22.807: INFO: Number of nodes with available pods: 0
Dec 31 14:05:22.807: INFO: Node iruya-node is running more than one daemon pod
Dec 31 14:05:23.838: INFO: Number of nodes with available pods: 0
Dec 31 14:05:23.838: INFO: Node iruya-node is running more than one daemon pod
Dec 31 14:05:25.055: INFO: Number of nodes with available pods: 0
Dec 31 14:05:25.055: INFO: Node iruya-node is running more than one daemon pod
Dec 31 14:05:25.835: INFO: Number of nodes with available pods: 0
Dec 31 14:05:25.835: INFO: Node iruya-node is running more than one daemon pod
Dec 31 14:05:26.854: INFO: Number of nodes with available pods: 0
Dec 31 14:05:26.855: INFO: Node iruya-node is running more than one daemon pod
Dec 31 14:05:29.380: INFO: Number of nodes with available pods: 0
Dec 31 14:05:29.380: INFO: Node iruya-node is running more than one daemon pod
Dec 31 14:05:29.929: INFO: Number of nodes with available pods: 0
Dec 31 14:05:29.929: INFO: Node iruya-node is running more than one daemon pod
Dec 31 14:05:30.824: INFO: Number of nodes with available pods: 0
Dec 31 14:05:30.824: INFO: Node iruya-node is running more than one daemon pod
Dec 31 14:05:31.828: INFO: Number of nodes with available pods: 0
Dec 31 14:05:31.828: INFO: Node iruya-node is running more than one daemon pod
Dec 31 14:05:32.829: INFO: Number of nodes with available pods: 0
Dec 31 14:05:32.829: INFO: Node iruya-node is running more than one daemon pod
Dec 31 14:05:33.832: INFO: Number of nodes with available pods: 2
Dec 31 14:05:33.832: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Dec 31 14:05:33.885: INFO: Number of nodes with available pods: 2
Dec 31 14:05:33.885: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5038, will wait for the garbage collector to delete the pods
Dec 31 14:05:35.203: INFO: Deleting DaemonSet.extensions daemon-set took: 18.335295ms
Dec 31 14:05:35.703: INFO: Terminating DaemonSet.extensions daemon-set pods took: 500.449441ms
Dec 31 14:05:44.142: INFO: Number of nodes with available pods: 0
Dec 31 14:05:44.142: INFO: Number of running nodes: 0, number of available pods: 0
Dec 31 14:05:44.147: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5038/daemonsets","resourceVersion":"18777722"},"items":null}

Dec 31 14:05:44.149: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5038/pods","resourceVersion":"18777722"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 14:05:44.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5038" for this suite.
Dec 31 14:05:50.210: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 14:05:50.343: INFO: namespace daemonsets-5038 deletion completed in 6.181097326s

• [SLOW TEST:27.679 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 14:05:50.344: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Dec 31 14:05:50.527: INFO: Number of nodes with available pods: 0
Dec 31 14:05:50.527: INFO: Node iruya-node is running more than one daemon pod
Dec 31 14:05:52.450: INFO: Number of nodes with available pods: 0
Dec 31 14:05:52.450: INFO: Node iruya-node is running more than one daemon pod
Dec 31 14:05:52.546: INFO: Number of nodes with available pods: 0
Dec 31 14:05:52.546: INFO: Node iruya-node is running more than one daemon pod
Dec 31 14:05:53.569: INFO: Number of nodes with available pods: 0
Dec 31 14:05:53.569: INFO: Node iruya-node is running more than one daemon pod
Dec 31 14:05:54.546: INFO: Number of nodes with available pods: 0
Dec 31 14:05:54.546: INFO: Node iruya-node is running more than one daemon pod
Dec 31 14:05:56.825: INFO: Number of nodes with available pods: 0
Dec 31 14:05:56.825: INFO: Node iruya-node is running more than one daemon pod
Dec 31 14:05:58.313: INFO: Number of nodes with available pods: 0
Dec 31 14:05:58.313: INFO: Node iruya-node is running more than one daemon pod
Dec 31 14:05:58.561: INFO: Number of nodes with available pods: 0
Dec 31 14:05:58.561: INFO: Node iruya-node is running more than one daemon pod
Dec 31 14:05:59.547: INFO: Number of nodes with available pods: 0
Dec 31 14:05:59.547: INFO: Node iruya-node is running more than one daemon pod
Dec 31 14:06:00.548: INFO: Number of nodes with available pods: 2
Dec 31 14:06:00.548: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Dec 31 14:06:00.607: INFO: Number of nodes with available pods: 1
Dec 31 14:06:00.607: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 31 14:06:01.624: INFO: Number of nodes with available pods: 1
Dec 31 14:06:01.624: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 31 14:06:02.623: INFO: Number of nodes with available pods: 1
Dec 31 14:06:02.624: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 31 14:06:03.638: INFO: Number of nodes with available pods: 1
Dec 31 14:06:03.638: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 31 14:06:04.688: INFO: Number of nodes with available pods: 1
Dec 31 14:06:04.688: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 31 14:06:05.627: INFO: Number of nodes with available pods: 1
Dec 31 14:06:05.627: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 31 14:06:06.644: INFO: Number of nodes with available pods: 1
Dec 31 14:06:06.644: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 31 14:06:07.647: INFO: Number of nodes with available pods: 1
Dec 31 14:06:07.647: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 31 14:06:08.653: INFO: Number of nodes with available pods: 1
Dec 31 14:06:08.653: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 31 14:06:09.677: INFO: Number of nodes with available pods: 1
Dec 31 14:06:09.677: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 31 14:06:10.645: INFO: Number of nodes with available pods: 1
Dec 31 14:06:10.645: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 31 14:06:11.629: INFO: Number of nodes with available pods: 1
Dec 31 14:06:11.629: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 31 14:06:12.622: INFO: Number of nodes with available pods: 1
Dec 31 14:06:12.622: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 31 14:06:13.629: INFO: Number of nodes with available pods: 1
Dec 31 14:06:13.629: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 31 14:06:14.649: INFO: Number of nodes with available pods: 1
Dec 31 14:06:14.649: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 31 14:06:15.626: INFO: Number of nodes with available pods: 1
Dec 31 14:06:15.626: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 31 14:06:16.662: INFO: Number of nodes with available pods: 1
Dec 31 14:06:16.662: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 31 14:06:17.626: INFO: Number of nodes with available pods: 1
Dec 31 14:06:17.626: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 31 14:06:18.629: INFO: Number of nodes with available pods: 1
Dec 31 14:06:18.629: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 31 14:06:19.952: INFO: Number of nodes with available pods: 1
Dec 31 14:06:19.952: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 31 14:06:20.635: INFO: Number of nodes with available pods: 1
Dec 31 14:06:20.635: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 31 14:06:21.626: INFO: Number of nodes with available pods: 1
Dec 31 14:06:21.626: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 31 14:06:22.631: INFO: Number of nodes with available pods: 1
Dec 31 14:06:22.631: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 31 14:06:23.653: INFO: Number of nodes with available pods: 1
Dec 31 14:06:23.654: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Dec 31 14:06:24.650: INFO: Number of nodes with available pods: 2
Dec 31 14:06:24.650: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7600, will wait for the garbage collector to delete the pods
Dec 31 14:06:24.718: INFO: Deleting DaemonSet.extensions daemon-set took: 9.955081ms
Dec 31 14:06:25.018: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.328186ms
Dec 31 14:06:31.925: INFO: Number of nodes with available pods: 0
Dec 31 14:06:31.925: INFO: Number of running nodes: 0, number of available pods: 0
Dec 31 14:06:31.928: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7600/daemonsets","resourceVersion":"18777862"},"items":null}

Dec 31 14:06:31.931: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7600/pods","resourceVersion":"18777862"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 14:06:31.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7600" for this suite.
Dec 31 14:06:37.966: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 14:06:38.072: INFO: namespace daemonsets-7600 deletion completed in 6.127083153s

• [SLOW TEST:47.728 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
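The spec above polls until every schedulable node runs exactly one available daemon pod (the log's "Number of running nodes: 2, number of available pods: 2" success condition). A stdlib-only sketch of that readiness predicate — the function name and data shape are hypothetical, not the e2e framework's:

```python
def daemonset_ready(pods_by_node):
    """Return True when every node runs exactly one available daemon pod.

    pods_by_node maps a node name to the list of availability flags of the
    daemon pods scheduled there (toy shape, for illustration only).
    """
    return all(
        len(pods) == 1 and pods[0]  # exactly one daemon pod, and it is available
        for pods in pods_by_node.values()
    )

# Mirrors the log: two nodes, each with one available pod -> ready.
ready = daemonset_ready({"iruya-node": [True], "iruya-server-sfge57q7djm7": [True]})
# While a deleted pod is being revived, one node has no available pod yet.
not_ready = daemonset_ready({"iruya-node": [], "iruya-server-sfge57q7djm7": [True]})
```

The repeated "Number of nodes with available pods: 1" lines in the log are this predicate failing on successive polls until the controller revives the deleted pod.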
SSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 14:06:38.073: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Dec 31 14:06:38.298: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-2021,SelfLink:/api/v1/namespaces/watch-2021/configmaps/e2e-watch-test-resource-version,UID:0968c4db-9673-4663-aaea-c53735e3323d,ResourceVersion:18777899,Generation:0,CreationTimestamp:2019-12-31 14:06:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 31 14:06:38.299: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-2021,SelfLink:/api/v1/namespaces/watch-2021/configmaps/e2e-watch-test-resource-version,UID:0968c4db-9673-4663-aaea-c53735e3323d,ResourceVersion:18777900,Generation:0,CreationTimestamp:2019-12-31 14:06:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 14:06:38.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-2021" for this suite.
Dec 31 14:06:44.345: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 14:06:44.548: INFO: namespace watch-2021 deletion completed in 6.233672484s

• [SLOW TEST:6.476 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
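The Watchers spec relies on the API server's contract that a watch started at resourceVersion N replays only events whose resourceVersion is strictly greater than N — which is why the log shows exactly the MODIFIED (rv 18777899) and DELETED (rv 18777900) events after watching from the first update's version. A toy simulation of that contract (not client-go):

```python
def watch_from(events, resource_version):
    """Yield events strictly newer than the given resourceVersion.

    events: list of (type, resourceVersion) tuples, oldest first.
    A toy model of the API server's watch-from-RV semantics.
    """
    return [(etype, rv) for etype, rv in events if rv > resource_version]

# Hypothetical event history matching the configmap's lifecycle in the log.
history = [
    ("ADDED", 18777897),
    ("MODIFIED", 18777898),  # first update: the RV the test watches from
    ("MODIFIED", 18777899),
    ("DELETED", 18777900),
]
# Only the changes *after* the first update are replayed.
replayed = watch_from(history, 18777898)
```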
SS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 14:06:44.549: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating secret secrets-7385/secret-test-22b90c3f-9b4e-4590-8f0f-b6eb63388b43
STEP: Creating a pod to test consume secrets
Dec 31 14:06:44.818: INFO: Waiting up to 5m0s for pod "pod-configmaps-37f50542-6be5-4d10-93b2-b01d9680f182" in namespace "secrets-7385" to be "success or failure"
Dec 31 14:06:44.910: INFO: Pod "pod-configmaps-37f50542-6be5-4d10-93b2-b01d9680f182": Phase="Pending", Reason="", readiness=false. Elapsed: 91.726867ms
Dec 31 14:06:46.921: INFO: Pod "pod-configmaps-37f50542-6be5-4d10-93b2-b01d9680f182": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102539223s
Dec 31 14:06:48.929: INFO: Pod "pod-configmaps-37f50542-6be5-4d10-93b2-b01d9680f182": Phase="Pending", Reason="", readiness=false. Elapsed: 4.110676488s
Dec 31 14:06:50.938: INFO: Pod "pod-configmaps-37f50542-6be5-4d10-93b2-b01d9680f182": Phase="Pending", Reason="", readiness=false. Elapsed: 6.120003981s
Dec 31 14:06:52.945: INFO: Pod "pod-configmaps-37f50542-6be5-4d10-93b2-b01d9680f182": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.126965885s
STEP: Saw pod success
Dec 31 14:06:52.945: INFO: Pod "pod-configmaps-37f50542-6be5-4d10-93b2-b01d9680f182" satisfied condition "success or failure"
Dec 31 14:06:52.950: INFO: Trying to get logs from node iruya-node pod pod-configmaps-37f50542-6be5-4d10-93b2-b01d9680f182 container env-test: 
STEP: delete the pod
Dec 31 14:06:53.023: INFO: Waiting for pod pod-configmaps-37f50542-6be5-4d10-93b2-b01d9680f182 to disappear
Dec 31 14:06:53.028: INFO: Pod pod-configmaps-37f50542-6be5-4d10-93b2-b01d9680f182 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 14:06:53.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7385" for this suite.
Dec 31 14:06:59.045: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 14:06:59.203: INFO: namespace secrets-7385 deletion completed in 6.170248211s

• [SLOW TEST:14.654 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
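The Secrets spec consumes a secret through the container environment. Secret objects carry base64-encoded values in their `data` map, and the kubelet decodes them before exposing them to the container; a simplified sketch of that decode step (helper name is hypothetical):

```python
import base64

def secret_env(secret_data):
    """Decode a Secret's base64 'data' map into plain env-var values,
    the way they appear inside the container (simplified sketch)."""
    return {key: base64.b64decode(val).decode() for key, val in secret_data.items()}

# Hypothetical payload; a real Secret's .data field holds base64 strings.
env = secret_env({"SECRET_DATA": base64.b64encode(b"value-1").decode()})
```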
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 14:06:59.203: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 31 14:06:59.350: INFO: Waiting up to 5m0s for pod "downwardapi-volume-21924871-661b-444b-85c5-eaf3d7845718" in namespace "downward-api-2199" to be "success or failure"
Dec 31 14:06:59.380: INFO: Pod "downwardapi-volume-21924871-661b-444b-85c5-eaf3d7845718": Phase="Pending", Reason="", readiness=false. Elapsed: 30.062172ms
Dec 31 14:07:01.392: INFO: Pod "downwardapi-volume-21924871-661b-444b-85c5-eaf3d7845718": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041984033s
Dec 31 14:07:03.406: INFO: Pod "downwardapi-volume-21924871-661b-444b-85c5-eaf3d7845718": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055185415s
Dec 31 14:07:05.655: INFO: Pod "downwardapi-volume-21924871-661b-444b-85c5-eaf3d7845718": Phase="Pending", Reason="", readiness=false. Elapsed: 6.304549673s
Dec 31 14:07:07.663: INFO: Pod "downwardapi-volume-21924871-661b-444b-85c5-eaf3d7845718": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.312729663s
STEP: Saw pod success
Dec 31 14:07:07.663: INFO: Pod "downwardapi-volume-21924871-661b-444b-85c5-eaf3d7845718" satisfied condition "success or failure"
Dec 31 14:07:07.668: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-21924871-661b-444b-85c5-eaf3d7845718 container client-container: 
STEP: delete the pod
Dec 31 14:07:07.788: INFO: Waiting for pod downwardapi-volume-21924871-661b-444b-85c5-eaf3d7845718 to disappear
Dec 31 14:07:07.795: INFO: Pod downwardapi-volume-21924871-661b-444b-85c5-eaf3d7845718 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 14:07:07.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2199" for this suite.
Dec 31 14:07:13.889: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 14:07:14.014: INFO: namespace downward-api-2199 deletion completed in 6.21276513s

• [SLOW TEST:14.811 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
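The Downward API spec above verifies that when a container declares no CPU limit, `limits.cpu` exposed through the downward API resolves to the node's allocatable CPU. A toy resolution function capturing that fallback (values and names are illustrative, in millicores):

```python
def effective_cpu_limit(container_limit_millicpu, node_allocatable_millicpu):
    """Downward-API default: fall back to node allocatable CPU when the
    container declares no CPU limit (toy model, values in millicores)."""
    if container_limit_millicpu is not None:
        return container_limit_millicpu
    return node_allocatable_millicpu

# No limit set -> the node's allocatable CPU (a hypothetical 4000m node).
resolved = effective_cpu_limit(None, 4000)
```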
SSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 14:07:14.015: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-3d7e96a8-4a1f-45f4-be68-063e712ccb49
STEP: Creating a pod to test consume configMaps
Dec 31 14:07:14.230: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-da466170-9144-482f-8325-2cf81d544bf1" in namespace "projected-7312" to be "success or failure"
Dec 31 14:07:14.244: INFO: Pod "pod-projected-configmaps-da466170-9144-482f-8325-2cf81d544bf1": Phase="Pending", Reason="", readiness=false. Elapsed: 13.833784ms
Dec 31 14:07:16.263: INFO: Pod "pod-projected-configmaps-da466170-9144-482f-8325-2cf81d544bf1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032934289s
Dec 31 14:07:18.270: INFO: Pod "pod-projected-configmaps-da466170-9144-482f-8325-2cf81d544bf1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04042487s
Dec 31 14:07:20.283: INFO: Pod "pod-projected-configmaps-da466170-9144-482f-8325-2cf81d544bf1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053301081s
Dec 31 14:07:22.292: INFO: Pod "pod-projected-configmaps-da466170-9144-482f-8325-2cf81d544bf1": Phase="Running", Reason="", readiness=true. Elapsed: 8.061619958s
Dec 31 14:07:24.301: INFO: Pod "pod-projected-configmaps-da466170-9144-482f-8325-2cf81d544bf1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.070761824s
STEP: Saw pod success
Dec 31 14:07:24.301: INFO: Pod "pod-projected-configmaps-da466170-9144-482f-8325-2cf81d544bf1" satisfied condition "success or failure"
Dec 31 14:07:24.309: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-da466170-9144-482f-8325-2cf81d544bf1 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 31 14:07:24.433: INFO: Waiting for pod pod-projected-configmaps-da466170-9144-482f-8325-2cf81d544bf1 to disappear
Dec 31 14:07:24.486: INFO: Pod pod-projected-configmaps-da466170-9144-482f-8325-2cf81d544bf1 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 14:07:24.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7312" for this suite.
Dec 31 14:07:30.578: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 14:07:30.826: INFO: namespace projected-7312 deletion completed in 6.333627491s

• [SLOW TEST:16.812 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
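The projected-configMap spec maps keys to paths and sets an item file mode (the `[LinuxOnly]` tag exists because file modes are POSIX semantics). A stdlib sketch of that projection — the function and the `(key, path, mode)` item shape are simplifications of the volume's `items`/`mode` fields, not the kubelet's code:

```python
import os
import stat
import tempfile

def project_configmap(data, items, target_dir):
    """Write selected ConfigMap keys to files, applying each item's mode.

    items: list of (key, path, mode) tuples -- a simplified take on the
    projected volume's items and per-item mode.
    """
    for key, path, mode in items:
        full = os.path.join(target_dir, path)
        with open(full, "w") as f:
            f.write(data[key])
        os.chmod(full, mode)  # per-item mode, e.g. read-only for the owner

with tempfile.TemporaryDirectory() as d:
    project_configmap({"data-2": "value-2"}, [("data-2", "data-2", 0o400)], d)
    projected = os.path.join(d, "data-2")
    mode = stat.S_IMODE(os.stat(projected).st_mode)
    with open(projected) as f:
        content = f.read()
```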
SS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 14:07:30.827: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
STEP: reading a file in the container
Dec 31 14:07:39.789: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7800 pod-service-account-11322c0c-31ce-4b9c-bd55-f18af4d92095 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Dec 31 14:07:42.730: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7800 pod-service-account-11322c0c-31ce-4b9c-bd55-f18af4d92095 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Dec 31 14:07:43.320: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7800 pod-service-account-11322c0c-31ce-4b9c-bd55-f18af4d92095 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 14:07:44.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-7800" for this suite.
Dec 31 14:07:50.155: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 14:07:50.377: INFO: namespace svcaccounts-7800 deletion completed in 6.242134728s

• [SLOW TEST:19.551 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
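The ServiceAccounts spec reads the three files that the auto-mounted credential volume places in every pod — the same paths the `kubectl exec ... cat` commands in the log target. A small sketch enumerating those well-known in-container paths (helper name is hypothetical; the paths themselves are the standard mount):

```python
# The standard in-container mount point for service-account credentials.
SA_DIR = "/var/run/secrets/kubernetes.io/serviceaccount"
SA_FILES = ["token", "ca.crt", "namespace"]

def sa_paths(base=SA_DIR):
    """Well-known paths of the mounted service-account credential files."""
    return [f"{base}/{name}" for name in SA_FILES]

paths = sa_paths()
```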
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 14:07:50.379: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-2c6aa447-f94a-446d-8f3d-05a4dc776ab1
STEP: Creating a pod to test consume configMaps
Dec 31 14:07:50.557: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a7de6ea6-4042-492b-9127-9c752d68b515" in namespace "projected-7593" to be "success or failure"
Dec 31 14:07:50.651: INFO: Pod "pod-projected-configmaps-a7de6ea6-4042-492b-9127-9c752d68b515": Phase="Pending", Reason="", readiness=false. Elapsed: 92.99444ms
Dec 31 14:07:52.658: INFO: Pod "pod-projected-configmaps-a7de6ea6-4042-492b-9127-9c752d68b515": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100574866s
Dec 31 14:07:54.665: INFO: Pod "pod-projected-configmaps-a7de6ea6-4042-492b-9127-9c752d68b515": Phase="Pending", Reason="", readiness=false. Elapsed: 4.107901089s
Dec 31 14:07:56.674: INFO: Pod "pod-projected-configmaps-a7de6ea6-4042-492b-9127-9c752d68b515": Phase="Pending", Reason="", readiness=false. Elapsed: 6.116260074s
Dec 31 14:07:58.679: INFO: Pod "pod-projected-configmaps-a7de6ea6-4042-492b-9127-9c752d68b515": Phase="Pending", Reason="", readiness=false. Elapsed: 8.121397614s
Dec 31 14:08:00.691: INFO: Pod "pod-projected-configmaps-a7de6ea6-4042-492b-9127-9c752d68b515": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.133022872s
STEP: Saw pod success
Dec 31 14:08:00.691: INFO: Pod "pod-projected-configmaps-a7de6ea6-4042-492b-9127-9c752d68b515" satisfied condition "success or failure"
Dec 31 14:08:00.696: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-a7de6ea6-4042-492b-9127-9c752d68b515 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 31 14:08:00.873: INFO: Waiting for pod pod-projected-configmaps-a7de6ea6-4042-492b-9127-9c752d68b515 to disappear
Dec 31 14:08:00.883: INFO: Pod pod-projected-configmaps-a7de6ea6-4042-492b-9127-9c752d68b515 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 14:08:00.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7593" for this suite.
Dec 31 14:08:06.910: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 14:08:07.045: INFO: namespace projected-7593 deletion completed in 6.153887282s

• [SLOW TEST:16.667 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
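Several specs in this run share the same shape: "Waiting up to 5m0s for pod ... to be 'success or failure'", polling the pod phase every couple of seconds until it is terminal. A toy model of that wait over a sequence of observed phases (not the e2e framework's implementation, which polls a live API server on a real clock):

```python
def wait_for_phase(phases, wanted=("Succeeded", "Failed")):
    """Return the index of the first poll at which the pod reached a terminal
    phase, or -1 if it never did within the observations given."""
    for i, phase in enumerate(phases):
        if phase in wanted:
            return i
    return -1

# Mirrors the log above: four Pending polls, then Succeeded on the fifth.
idx = wait_for_phase(["Pending", "Pending", "Pending", "Pending", "Succeeded"])
```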
SSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 14:08:07.046: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with configMap that has name projected-configmap-test-upd-a7ca2641-2828-433a-91d5-9464bc03a045
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-a7ca2641-2828-433a-91d5-9464bc03a045
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 14:09:33.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2196" for this suite.
Dec 31 14:09:56.018: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 14:09:56.140: INFO: namespace projected-2196 deletion completed in 22.149300737s

• [SLOW TEST:109.094 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 14:09:56.141: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-bfc04585-ce87-49ad-a88a-fcfc1693b1e5
STEP: Creating secret with name s-test-opt-upd-285b70dd-8f8d-4b43-919e-794b3853efe5
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-bfc04585-ce87-49ad-a88a-fcfc1693b1e5
STEP: Updating secret s-test-opt-upd-285b70dd-8f8d-4b43-919e-794b3853efe5
STEP: Creating secret with name s-test-opt-create-3b218ad1-82cd-4968-bb09-f006f3621757
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 14:11:35.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6805" for this suite.
Dec 31 14:11:59.154: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 14:11:59.278: INFO: namespace projected-6805 deletion completed in 24.262036377s

• [SLOW TEST:123.137 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
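The projected-secret spec marks its sources optional, so deleting one secret (`s-test-opt-del-...`) does not break the volume while updated and newly created secrets still surface in it. A toy model of the `optional` flag's resolution semantics — names and shapes here are illustrative only:

```python
def resolve_sources(existing_secrets, sources):
    """Resolve projected secret sources into one key/value map.

    sources: list of (secret_name, optional) pairs. A missing optional
    source is skipped; a missing required source is an error (toy model).
    """
    resolved = {}
    for name, optional in sources:
        if name in existing_secrets:
            resolved.update(existing_secrets[name])
        elif not optional:
            raise KeyError(f"required secret {name!r} not found")
    return resolved

# The deleted optional source is skipped instead of failing the mount.
secrets = {"s-test-opt-upd": {"data-1": "value-1"}}
merged = resolve_sources(secrets, [("s-test-opt-del", True), ("s-test-opt-upd", True)])
```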
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 14:11:59.279: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-ef8aad3a-7790-427c-bd37-6c3ee17f81ad
STEP: Creating a pod to test consume configMaps
Dec 31 14:11:59.358: INFO: Waiting up to 5m0s for pod "pod-configmaps-cd4361aa-cbb3-479b-a114-8ce30921e8ba" in namespace "configmap-6368" to be "success or failure"
Dec 31 14:11:59.363: INFO: Pod "pod-configmaps-cd4361aa-cbb3-479b-a114-8ce30921e8ba": Phase="Pending", Reason="", readiness=false. Elapsed: 4.261456ms
Dec 31 14:12:01.757: INFO: Pod "pod-configmaps-cd4361aa-cbb3-479b-a114-8ce30921e8ba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.398717991s
Dec 31 14:12:03.768: INFO: Pod "pod-configmaps-cd4361aa-cbb3-479b-a114-8ce30921e8ba": Phase="Pending", Reason="", readiness=false. Elapsed: 4.409633802s
Dec 31 14:12:05.779: INFO: Pod "pod-configmaps-cd4361aa-cbb3-479b-a114-8ce30921e8ba": Phase="Pending", Reason="", readiness=false. Elapsed: 6.421041405s
Dec 31 14:12:07.793: INFO: Pod "pod-configmaps-cd4361aa-cbb3-479b-a114-8ce30921e8ba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.434784723s
STEP: Saw pod success
Dec 31 14:12:07.793: INFO: Pod "pod-configmaps-cd4361aa-cbb3-479b-a114-8ce30921e8ba" satisfied condition "success or failure"
Dec 31 14:12:07.797: INFO: Trying to get logs from node iruya-node pod pod-configmaps-cd4361aa-cbb3-479b-a114-8ce30921e8ba container configmap-volume-test: 
STEP: delete the pod
Dec 31 14:12:07.873: INFO: Waiting for pod pod-configmaps-cd4361aa-cbb3-479b-a114-8ce30921e8ba to disappear
Dec 31 14:12:07.884: INFO: Pod pod-configmaps-cd4361aa-cbb3-479b-a114-8ce30921e8ba no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 14:12:07.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6368" for this suite.
Dec 31 14:12:13.980: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 14:12:14.099: INFO: namespace configmap-6368 deletion completed in 6.145358095s

• [SLOW TEST:14.820 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
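The repeated `Waiting up to 5m0s for pod ... to be "success or failure"` lines in the ConfigMap test above come from the framework's phase-polling loop: poll the pod's phase, log the elapsed time, and stop on a terminal phase or timeout. A minimal Python sketch of that pattern (the function name, `get_phase` callable, and injected clock/sleep are illustrative assumptions, not the actual e2e framework code):

```python
import time

def wait_for_pod_phase(get_phase, timeout_s=300.0, poll_s=2.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until it returns a terminal phase or timeout_s elapses.

    Mirrors the log's behaviour: each iteration reports the current phase and
    elapsed time; "Succeeded" or "Failed" ends the wait, anything else retries.
    """
    start = clock()
    while True:
        phase = get_phase()
        elapsed = clock() - start
        print(f'Pod: Phase="{phase}". Elapsed: {elapsed:.6f}s')
        if phase in ("Succeeded", "Failed"):
            return phase
        if elapsed >= timeout_s:
            raise TimeoutError(f"pod still {phase} after {timeout_s}s")
        sleep(poll_s)
```

Injecting `clock` and `sleep` keeps the sketch testable without real waiting; the e2e framework achieves the same effect with its own polling utilities.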
SSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 14:12:14.099: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 31 14:12:14.361: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Dec 31 14:12:14.424: INFO: Number of nodes with available pods: 0
Dec 31 14:12:14.425: INFO: Node iruya-node is running more than one daemon pod
Dec 31 14:12:15.442: INFO: Number of nodes with available pods: 0
Dec 31 14:12:15.442: INFO: Node iruya-node is running more than one daemon pod
Dec 31 14:12:16.443: INFO: Number of nodes with available pods: 0
Dec 31 14:12:16.443: INFO: Node iruya-node is running more than one daemon pod
Dec 31 14:12:17.442: INFO: Number of nodes with available pods: 0
Dec 31 14:12:17.442: INFO: Node iruya-node is running more than one daemon pod
Dec 31 14:12:18.457: INFO: Number of nodes with available pods: 0
Dec 31 14:12:18.457: INFO: Node iruya-node is running more than one daemon pod
Dec 31 14:12:20.897: INFO: Number of nodes with available pods: 0
Dec 31 14:12:20.897: INFO: Node iruya-node is running more than one daemon pod
Dec 31 14:12:21.475: INFO: Number of nodes with available pods: 0
Dec 31 14:12:21.475: INFO: Node iruya-node is running more than one daemon pod
Dec 31 14:12:22.446: INFO: Number of nodes with available pods: 0
Dec 31 14:12:22.446: INFO: Node iruya-node is running more than one daemon pod
Dec 31 14:12:23.438: INFO: Number of nodes with available pods: 0
Dec 31 14:12:23.438: INFO: Node iruya-node is running more than one daemon pod
Dec 31 14:12:24.449: INFO: Number of nodes with available pods: 0
Dec 31 14:12:24.449: INFO: Node iruya-node is running more than one daemon pod
Dec 31 14:12:25.450: INFO: Number of nodes with available pods: 2
Dec 31 14:12:25.450: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Dec 31 14:12:25.497: INFO: Wrong image for pod: daemon-set-25xtx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 31 14:12:25.497: INFO: Wrong image for pod: daemon-set-87d9w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 31 14:12:26.549: INFO: Wrong image for pod: daemon-set-25xtx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 31 14:12:26.549: INFO: Wrong image for pod: daemon-set-87d9w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 31 14:12:27.614: INFO: Wrong image for pod: daemon-set-25xtx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 31 14:12:27.614: INFO: Wrong image for pod: daemon-set-87d9w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 31 14:12:28.551: INFO: Wrong image for pod: daemon-set-25xtx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 31 14:12:28.551: INFO: Wrong image for pod: daemon-set-87d9w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 31 14:12:29.543: INFO: Wrong image for pod: daemon-set-25xtx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 31 14:12:29.543: INFO: Wrong image for pod: daemon-set-87d9w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 31 14:12:30.550: INFO: Wrong image for pod: daemon-set-25xtx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 31 14:12:30.550: INFO: Wrong image for pod: daemon-set-87d9w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 31 14:12:31.541: INFO: Wrong image for pod: daemon-set-25xtx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 31 14:12:31.541: INFO: Pod daemon-set-25xtx is not available
Dec 31 14:12:31.541: INFO: Wrong image for pod: daemon-set-87d9w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 31 14:12:32.545: INFO: Wrong image for pod: daemon-set-25xtx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 31 14:12:32.545: INFO: Pod daemon-set-25xtx is not available
Dec 31 14:12:32.545: INFO: Wrong image for pod: daemon-set-87d9w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 31 14:12:33.541: INFO: Wrong image for pod: daemon-set-25xtx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 31 14:12:33.541: INFO: Pod daemon-set-25xtx is not available
Dec 31 14:12:33.541: INFO: Wrong image for pod: daemon-set-87d9w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 31 14:12:34.543: INFO: Wrong image for pod: daemon-set-25xtx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 31 14:12:34.543: INFO: Pod daemon-set-25xtx is not available
Dec 31 14:12:34.543: INFO: Wrong image for pod: daemon-set-87d9w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 31 14:12:35.542: INFO: Wrong image for pod: daemon-set-25xtx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 31 14:12:35.542: INFO: Pod daemon-set-25xtx is not available
Dec 31 14:12:35.542: INFO: Wrong image for pod: daemon-set-87d9w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 31 14:12:36.547: INFO: Wrong image for pod: daemon-set-25xtx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 31 14:12:36.547: INFO: Pod daemon-set-25xtx is not available
Dec 31 14:12:36.547: INFO: Wrong image for pod: daemon-set-87d9w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 31 14:12:37.541: INFO: Wrong image for pod: daemon-set-25xtx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 31 14:12:37.541: INFO: Pod daemon-set-25xtx is not available
Dec 31 14:12:37.541: INFO: Wrong image for pod: daemon-set-87d9w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 31 14:12:38.547: INFO: Pod daemon-set-2qlfx is not available
Dec 31 14:12:38.547: INFO: Wrong image for pod: daemon-set-87d9w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 31 14:12:39.541: INFO: Pod daemon-set-2qlfx is not available
Dec 31 14:12:39.541: INFO: Wrong image for pod: daemon-set-87d9w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 31 14:12:40.545: INFO: Pod daemon-set-2qlfx is not available
Dec 31 14:12:40.545: INFO: Wrong image for pod: daemon-set-87d9w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 31 14:12:41.590: INFO: Pod daemon-set-2qlfx is not available
Dec 31 14:12:41.590: INFO: Wrong image for pod: daemon-set-87d9w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 31 14:12:42.657: INFO: Pod daemon-set-2qlfx is not available
Dec 31 14:12:42.657: INFO: Wrong image for pod: daemon-set-87d9w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 31 14:12:43.669: INFO: Pod daemon-set-2qlfx is not available
Dec 31 14:12:43.669: INFO: Wrong image for pod: daemon-set-87d9w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 31 14:12:44.546: INFO: Pod daemon-set-2qlfx is not available
Dec 31 14:12:44.547: INFO: Wrong image for pod: daemon-set-87d9w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 31 14:12:45.542: INFO: Wrong image for pod: daemon-set-87d9w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 31 14:12:46.564: INFO: Wrong image for pod: daemon-set-87d9w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 31 14:12:47.553: INFO: Wrong image for pod: daemon-set-87d9w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 31 14:12:48.546: INFO: Wrong image for pod: daemon-set-87d9w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 31 14:12:49.546: INFO: Wrong image for pod: daemon-set-87d9w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 31 14:12:49.546: INFO: Pod daemon-set-87d9w is not available
Dec 31 14:12:50.543: INFO: Wrong image for pod: daemon-set-87d9w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 31 14:12:50.543: INFO: Pod daemon-set-87d9w is not available
Dec 31 14:12:51.618: INFO: Wrong image for pod: daemon-set-87d9w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 31 14:12:51.618: INFO: Pod daemon-set-87d9w is not available
Dec 31 14:12:52.551: INFO: Wrong image for pod: daemon-set-87d9w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 31 14:12:52.551: INFO: Pod daemon-set-87d9w is not available
Dec 31 14:12:53.547: INFO: Wrong image for pod: daemon-set-87d9w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 31 14:12:53.547: INFO: Pod daemon-set-87d9w is not available
Dec 31 14:12:54.546: INFO: Wrong image for pod: daemon-set-87d9w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 31 14:12:54.546: INFO: Pod daemon-set-87d9w is not available
Dec 31 14:12:55.548: INFO: Wrong image for pod: daemon-set-87d9w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 31 14:12:55.548: INFO: Pod daemon-set-87d9w is not available
Dec 31 14:12:56.571: INFO: Wrong image for pod: daemon-set-87d9w. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Dec 31 14:12:56.571: INFO: Pod daemon-set-87d9w is not available
Dec 31 14:12:57.542: INFO: Pod daemon-set-rvjs7 is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Dec 31 14:12:57.564: INFO: Number of nodes with available pods: 1
Dec 31 14:12:57.564: INFO: Node iruya-node is running more than one daemon pod
Dec 31 14:12:58.591: INFO: Number of nodes with available pods: 1
Dec 31 14:12:58.591: INFO: Node iruya-node is running more than one daemon pod
Dec 31 14:12:59.590: INFO: Number of nodes with available pods: 1
Dec 31 14:12:59.590: INFO: Node iruya-node is running more than one daemon pod
Dec 31 14:13:00.602: INFO: Number of nodes with available pods: 1
Dec 31 14:13:00.602: INFO: Node iruya-node is running more than one daemon pod
Dec 31 14:13:01.617: INFO: Number of nodes with available pods: 1
Dec 31 14:13:01.617: INFO: Node iruya-node is running more than one daemon pod
Dec 31 14:13:02.597: INFO: Number of nodes with available pods: 1
Dec 31 14:13:02.598: INFO: Node iruya-node is running more than one daemon pod
Dec 31 14:13:03.581: INFO: Number of nodes with available pods: 1
Dec 31 14:13:03.581: INFO: Node iruya-node is running more than one daemon pod
Dec 31 14:13:04.603: INFO: Number of nodes with available pods: 2
Dec 31 14:13:04.603: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4905, will wait for the garbage collector to delete the pods
Dec 31 14:13:04.725: INFO: Deleting DaemonSet.extensions daemon-set took: 49.462244ms
Dec 31 14:13:05.126: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.561922ms
Dec 31 14:13:12.744: INFO: Number of nodes with available pods: 0
Dec 31 14:13:12.744: INFO: Number of running nodes: 0, number of available pods: 0
Dec 31 14:13:12.747: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4905/daemonsets","resourceVersion":"18778727"},"items":null}

Dec 31 14:13:12.750: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4905/pods","resourceVersion":"18778727"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 14:13:12.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4905" for this suite.
Dec 31 14:13:18.792: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 14:13:18.892: INFO: namespace daemonsets-4905 deletion completed in 6.123746867s

• [SLOW TEST:64.793 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
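The long `Wrong image for pod` run in the DaemonSet test above is the rollout check: each poll compares every daemon pod's container image against the updated template image (here `gcr.io/kubernetes-e2e-test-images/redis:1.0`) until the RollingUpdate has replaced all pods. A rough Python sketch of that per-poll comparison (the function and the pods-as-dict shape are illustrative assumptions, not the e2e framework's types):

```python
def pods_with_wrong_image(pods, expected_image):
    """Return log-style messages for pods whose container image differs
    from the DaemonSet's updated template image.

    `pods` maps pod name -> currently running container image; an empty
    result means the rolling update has converged.
    """
    messages = []
    for name, image in sorted(pods.items()):
        if image != expected_image:
            messages.append(
                f"Wrong image for pod: {name}. "
                f"Expected: {expected_image}, got: {image}."
            )
    return messages
```

With `maxUnavailable: 1` (the RollingUpdate default), pods are replaced one at a time, which is why the log shows each pod individually becoming "not available" before its successor appears.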
S
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 14:13:18.893: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 14:13:27.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1452" for this suite.
Dec 31 14:14:19.260: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 14:14:19.400: INFO: namespace kubelet-test-1452 deletion completed in 52.194415697s

• [SLOW TEST:60.508 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 14:14:19.401: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 31 14:14:47.620: INFO: Container started at 2019-12-31 14:14:26 +0000 UTC, pod became ready at 2019-12-31 14:14:45 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 14:14:47.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5580" for this suite.
Dec 31 14:15:09.652: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 14:15:09.833: INFO: namespace container-probe-5580 deletion completed in 22.203294779s

• [SLOW TEST:50.432 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
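The probe test's single INFO line records two timestamps: when the container started (14:14:26) and when the pod became ready (14:14:45). The property being asserted is that readiness never precedes the probe's `initialDelaySeconds`. A small sketch of that timing check (hypothetical helper with assumed semantics, not the framework's code):

```python
from datetime import datetime, timedelta

def ready_after_initial_delay(started_at, ready_at, initial_delay_s):
    """True iff the pod became ready no earlier than initialDelaySeconds
    after the container started -- the invariant the readiness test checks."""
    return ready_at - started_at >= timedelta(seconds=initial_delay_s)
```

Applied to the log's timestamps, the 19-second gap between start and readiness satisfies any initial delay of 19 seconds or less.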
SSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 14:15:09.834: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 14:15:09.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2289" for this suite.
Dec 31 14:15:16.002: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 14:15:16.120: INFO: namespace services-2289 deletion completed in 6.14339002s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:6.287 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 14:15:16.121: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Dec 31 14:15:16.270: INFO: Waiting up to 5m0s for pod "pod-b27b679d-9f18-418b-beff-9ec6157da8d9" in namespace "emptydir-8201" to be "success or failure"
Dec 31 14:15:16.295: INFO: Pod "pod-b27b679d-9f18-418b-beff-9ec6157da8d9": Phase="Pending", Reason="", readiness=false. Elapsed: 24.517791ms
Dec 31 14:15:18.304: INFO: Pod "pod-b27b679d-9f18-418b-beff-9ec6157da8d9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033749987s
Dec 31 14:15:20.321: INFO: Pod "pod-b27b679d-9f18-418b-beff-9ec6157da8d9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050302541s
Dec 31 14:15:22.327: INFO: Pod "pod-b27b679d-9f18-418b-beff-9ec6157da8d9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056331887s
Dec 31 14:15:24.349: INFO: Pod "pod-b27b679d-9f18-418b-beff-9ec6157da8d9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.078461709s
STEP: Saw pod success
Dec 31 14:15:24.349: INFO: Pod "pod-b27b679d-9f18-418b-beff-9ec6157da8d9" satisfied condition "success or failure"
Dec 31 14:15:24.354: INFO: Trying to get logs from node iruya-node pod pod-b27b679d-9f18-418b-beff-9ec6157da8d9 container test-container: 
STEP: delete the pod
Dec 31 14:15:24.549: INFO: Waiting for pod pod-b27b679d-9f18-418b-beff-9ec6157da8d9 to disappear
Dec 31 14:15:24.562: INFO: Pod pod-b27b679d-9f18-418b-beff-9ec6157da8d9 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 14:15:24.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8201" for this suite.
Dec 31 14:15:30.607: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 14:15:30.830: INFO: namespace emptydir-8201 deletion completed in 6.261863272s

• [SLOW TEST:14.709 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 14:15:30.831: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Dec 31 14:15:39.699: INFO: Successfully updated pod "annotationupdate293b4fda-c520-4864-ac4d-71dbc428a6eb"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 14:15:41.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8332" for this suite.
Dec 31 14:16:03.797: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 14:16:03.968: INFO: namespace projected-8332 deletion completed in 22.201494831s

• [SLOW TEST:33.138 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 14:16:03.969: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-2274/configmap-test-c2867709-3fd9-4635-aaed-891176a5382b
STEP: Creating a pod to test consume configMaps
Dec 31 14:16:04.162: INFO: Waiting up to 5m0s for pod "pod-configmaps-326c9b05-7c21-4e80-9f2d-3d656727d6f8" in namespace "configmap-2274" to be "success or failure"
Dec 31 14:16:04.188: INFO: Pod "pod-configmaps-326c9b05-7c21-4e80-9f2d-3d656727d6f8": Phase="Pending", Reason="", readiness=false. Elapsed: 25.054558ms
Dec 31 14:16:06.199: INFO: Pod "pod-configmaps-326c9b05-7c21-4e80-9f2d-3d656727d6f8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036378988s
Dec 31 14:16:08.210: INFO: Pod "pod-configmaps-326c9b05-7c21-4e80-9f2d-3d656727d6f8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04723138s
Dec 31 14:16:10.218: INFO: Pod "pod-configmaps-326c9b05-7c21-4e80-9f2d-3d656727d6f8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055400552s
Dec 31 14:16:12.225: INFO: Pod "pod-configmaps-326c9b05-7c21-4e80-9f2d-3d656727d6f8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.062024004s
Dec 31 14:16:14.231: INFO: Pod "pod-configmaps-326c9b05-7c21-4e80-9f2d-3d656727d6f8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.068749013s
STEP: Saw pod success
Dec 31 14:16:14.231: INFO: Pod "pod-configmaps-326c9b05-7c21-4e80-9f2d-3d656727d6f8" satisfied condition "success or failure"
Dec 31 14:16:14.234: INFO: Trying to get logs from node iruya-node pod pod-configmaps-326c9b05-7c21-4e80-9f2d-3d656727d6f8 container env-test: 
STEP: delete the pod
Dec 31 14:16:14.922: INFO: Waiting for pod pod-configmaps-326c9b05-7c21-4e80-9f2d-3d656727d6f8 to disappear
Dec 31 14:16:14.933: INFO: Pod pod-configmaps-326c9b05-7c21-4e80-9f2d-3d656727d6f8 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 14:16:14.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2274" for this suite.
Dec 31 14:16:20.963: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 14:16:21.081: INFO: namespace configmap-2274 deletion completed in 6.13767902s

• [SLOW TEST:17.112 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 14:16:21.082: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Dec 31 14:16:21.162: INFO: namespace kubectl-1190
Dec 31 14:16:21.162: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1190'
Dec 31 14:16:21.676: INFO: stderr: ""
Dec 31 14:16:21.676: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Dec 31 14:16:22.693: INFO: Selector matched 1 pods for map[app:redis]
Dec 31 14:16:22.693: INFO: Found 0 / 1
Dec 31 14:16:23.691: INFO: Selector matched 1 pods for map[app:redis]
Dec 31 14:16:23.691: INFO: Found 0 / 1
Dec 31 14:16:24.709: INFO: Selector matched 1 pods for map[app:redis]
Dec 31 14:16:24.710: INFO: Found 0 / 1
Dec 31 14:16:25.684: INFO: Selector matched 1 pods for map[app:redis]
Dec 31 14:16:25.684: INFO: Found 0 / 1
Dec 31 14:16:26.697: INFO: Selector matched 1 pods for map[app:redis]
Dec 31 14:16:26.697: INFO: Found 0 / 1
Dec 31 14:16:27.687: INFO: Selector matched 1 pods for map[app:redis]
Dec 31 14:16:27.687: INFO: Found 0 / 1
Dec 31 14:16:28.758: INFO: Selector matched 1 pods for map[app:redis]
Dec 31 14:16:28.758: INFO: Found 0 / 1
Dec 31 14:16:29.685: INFO: Selector matched 1 pods for map[app:redis]
Dec 31 14:16:29.685: INFO: Found 0 / 1
Dec 31 14:16:30.687: INFO: Selector matched 1 pods for map[app:redis]
Dec 31 14:16:30.687: INFO: Found 1 / 1
Dec 31 14:16:30.687: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Dec 31 14:16:30.692: INFO: Selector matched 1 pods for map[app:redis]
Dec 31 14:16:30.692: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Dec 31 14:16:30.692: INFO: wait on redis-master startup in kubectl-1190 
Dec 31 14:16:30.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-jgkbm redis-master --namespace=kubectl-1190'
Dec 31 14:16:30.895: INFO: stderr: ""
Dec 31 14:16:30.895: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 31 Dec 14:16:29.240 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 31 Dec 14:16:29.240 # Server started, Redis version 3.2.12\n1:M 31 Dec 14:16:29.241 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 31 Dec 14:16:29.241 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Dec 31 14:16:30.895: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-1190'
Dec 31 14:16:31.157: INFO: stderr: ""
Dec 31 14:16:31.157: INFO: stdout: "service/rm2 exposed\n"
Dec 31 14:16:31.165: INFO: Service rm2 in namespace kubectl-1190 found.
STEP: exposing service
Dec 31 14:16:33.179: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-1190'
Dec 31 14:16:33.466: INFO: stderr: ""
Dec 31 14:16:33.467: INFO: stdout: "service/rm3 exposed\n"
Dec 31 14:16:33.506: INFO: Service rm3 in namespace kubectl-1190 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 14:16:35.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1190" for this suite.
Dec 31 14:16:59.556: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 14:16:59.743: INFO: namespace kubectl-1190 deletion completed in 24.212907368s

• [SLOW TEST:38.661 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
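The "exposing RC" step in the test above is roughly what you would get by creating the Service by hand. A hedged sketch of the equivalent manifest (the `app: redis` selector is an assumption inferred from the pod selector logged earlier; `kubectl expose` copies the selector from the RC):

```yaml
# Approximate Service that `kubectl expose rc redis-master --name=rm2
# --port=1234 --target-port=6379` would create (selector assumed).
apiVersion: v1
kind: Service
metadata:
  name: rm2
  namespace: kubectl-1190
spec:
  selector:
    app: redis
  ports:
  - port: 1234        # Service port clients connect to
    targetPort: 6379  # container port on the redis-master pod
```

The second expose step (`rm3`) does the same thing against the `rm2` Service, reusing its selector with a new port.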
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 14:16:59.743: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Dec 31 14:17:00.867: INFO: Pod name wrapped-volume-race-17d69c6d-4e92-47ce-9d74-6a77765fb9e2: Found 0 pods out of 5
Dec 31 14:17:05.887: INFO: Pod name wrapped-volume-race-17d69c6d-4e92-47ce-9d74-6a77765fb9e2: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-17d69c6d-4e92-47ce-9d74-6a77765fb9e2 in namespace emptydir-wrapper-9857, will wait for the garbage collector to delete the pods
Dec 31 14:17:34.007: INFO: Deleting ReplicationController wrapped-volume-race-17d69c6d-4e92-47ce-9d74-6a77765fb9e2 took: 20.212474ms
Dec 31 14:17:34.408: INFO: Terminating ReplicationController wrapped-volume-race-17d69c6d-4e92-47ce-9d74-6a77765fb9e2 pods took: 401.226671ms
STEP: Creating RC which spawns configmap-volume pods
Dec 31 14:18:16.815: INFO: Pod name wrapped-volume-race-326776c8-6530-4af3-b526-c14a2bb1e801: Found 0 pods out of 5
Dec 31 14:18:21.828: INFO: Pod name wrapped-volume-race-326776c8-6530-4af3-b526-c14a2bb1e801: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-326776c8-6530-4af3-b526-c14a2bb1e801 in namespace emptydir-wrapper-9857, will wait for the garbage collector to delete the pods
Dec 31 14:18:51.960: INFO: Deleting ReplicationController wrapped-volume-race-326776c8-6530-4af3-b526-c14a2bb1e801 took: 18.468326ms
Dec 31 14:18:52.361: INFO: Terminating ReplicationController wrapped-volume-race-326776c8-6530-4af3-b526-c14a2bb1e801 pods took: 400.699063ms
STEP: Creating RC which spawns configmap-volume pods
Dec 31 14:19:37.755: INFO: Pod name wrapped-volume-race-bb5782fc-3eb4-475e-96a5-c7bd28eb0dc1: Found 0 pods out of 5
Dec 31 14:19:42.767: INFO: Pod name wrapped-volume-race-bb5782fc-3eb4-475e-96a5-c7bd28eb0dc1: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-bb5782fc-3eb4-475e-96a5-c7bd28eb0dc1 in namespace emptydir-wrapper-9857, will wait for the garbage collector to delete the pods
Dec 31 14:20:16.978: INFO: Deleting ReplicationController wrapped-volume-race-bb5782fc-3eb4-475e-96a5-c7bd28eb0dc1 took: 15.695306ms
Dec 31 14:20:17.279: INFO: Terminating ReplicationController wrapped-volume-race-bb5782fc-3eb4-475e-96a5-c7bd28eb0dc1 pods took: 300.517563ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 14:21:08.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-9857" for this suite.
Dec 31 14:21:18.514: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 14:21:18.656: INFO: namespace emptydir-wrapper-9857 deletion completed in 10.181408863s

• [SLOW TEST:258.913 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
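The race test above spawns pods that mount many ConfigMap volumes at once, which is the pattern that historically raced on the emptyDir wrapper. A minimal sketch of one such pod, under stated assumptions (names and image are placeholders; the real test mounts 50 ConfigMap volumes, only two are shown):

```yaml
# Hypothetical shape of a configmap-volume pod spawned by the
# wrapped-volume-race RC: several ConfigMap volumes mounted together.
apiVersion: v1
kind: Pod
metadata:
  name: wrapped-volume-race-demo
spec:
  containers:
  - name: test
    image: busybox
    command: ['sleep', '3600']
    volumeMounts:
    - name: cm-0
      mountPath: /etc/cm-0
    - name: cm-1
      mountPath: /etc/cm-1
  volumes:
  - name: cm-0
    configMap:
      name: race-configmap-0   # placeholder; test creates 50 of these
  - name: cm-1
    configMap:
      name: race-configmap-1
```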
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 14:21:18.657: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Dec 31 14:21:18.737: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 14:21:39.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-1988" for this suite.
Dec 31 14:22:01.232: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 14:22:01.383: INFO: namespace init-container-1988 deletion completed in 22.170973091s

• [SLOW TEST:42.726 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
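The init-container test above creates a pod whose `spec.initContainers` must all run to completion before the main container starts. A minimal sketch of a RestartAlways pod with init containers (names, image, and commands are illustrative placeholders, not the test's exact spec):

```yaml
# Sketch: init containers run sequentially before the main container.
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  restartPolicy: Always
  initContainers:
  - name: init-1
    image: busybox
    command: ['sh', '-c', 'true']
  - name: init-2
    image: busybox
    command: ['sh', '-c', 'true']
  containers:
  - name: main
    image: busybox
    command: ['sh', '-c', 'sleep 3600']
```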
SSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 14:22:01.384: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-6279
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 31 14:22:01.457: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 31 14:22:39.757: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.44.0.1&port=8080&tries=1'] Namespace:pod-network-test-6279 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 31 14:22:39.757: INFO: >>> kubeConfig: /root/.kube/config
Dec 31 14:22:40.261: INFO: Waiting for endpoints: map[]
Dec 31 14:22:40.275: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:pod-network-test-6279 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 31 14:22:40.275: INFO: >>> kubeConfig: /root/.kube/config
Dec 31 14:22:40.838: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 14:22:40.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-6279" for this suite.
Dec 31 14:23:04.879: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 14:23:04.987: INFO: namespace pod-network-test-6279 deletion completed in 24.138065124s

• [SLOW TEST:63.603 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
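The connectivity check above shells into the host-network helper pod and curls the dial server's `/dial` endpoint, which in turn probes the target pod. This sketch only assembles the probe URL (IPs copied from the log; the `kubectl exec` line is illustrative, not runnable outside the cluster):

```shell
# Build the /dial probe URL the e2e framework curls from
# host-test-container-pod (addresses taken from the log above).
DIAL_HOST=10.44.0.2   # pod serving the /dial endpoint
TARGET=10.44.0.1      # pod whose hostname the dial should return
URL="http://${DIAL_HOST}:8080/dial?request=hostName&protocol=http&host=${TARGET}&port=8080&tries=1"
echo "$URL"
# In-cluster, the framework effectively runs:
#   kubectl exec host-test-container-pod -c hostexec -- \
#     /bin/sh -c "curl -g -q -s '$URL'"
```

The test passes when the dial response contains the expected hostname for every target pod.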
SSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 14:23:04.988: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Dec 31 14:23:05.078: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Dec 31 14:23:05.142: INFO: Waiting for terminating namespaces to be deleted...
Dec 31 14:23:05.145: INFO: 
Logging pods the kubelet thinks are on node iruya-node before test
Dec 31 14:23:05.170: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Dec 31 14:23:05.170: INFO: 	Container weave ready: true, restart count 0
Dec 31 14:23:05.170: INFO: 	Container weave-npc ready: true, restart count 0
Dec 31 14:23:05.170: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded)
Dec 31 14:23:05.170: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 31 14:23:05.170: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Dec 31 14:23:05.188: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded)
Dec 31 14:23:05.188: INFO: 	Container kube-controller-manager ready: true, restart count 14
Dec 31 14:23:05.188: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded)
Dec 31 14:23:05.188: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 31 14:23:05.188: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded)
Dec 31 14:23:05.188: INFO: 	Container kube-apiserver ready: true, restart count 0
Dec 31 14:23:05.188: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded)
Dec 31 14:23:05.188: INFO: 	Container kube-scheduler ready: true, restart count 10
Dec 31 14:23:05.188: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Dec 31 14:23:05.188: INFO: 	Container coredns ready: true, restart count 0
Dec 31 14:23:05.188: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Dec 31 14:23:05.188: INFO: 	Container coredns ready: true, restart count 0
Dec 31 14:23:05.188: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded)
Dec 31 14:23:05.188: INFO: 	Container etcd ready: true, restart count 0
Dec 31 14:23:05.188: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Dec 31 14:23:05.189: INFO: 	Container weave ready: true, restart count 0
Dec 31 14:23:05.189: INFO: 	Container weave-npc ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: verifying the node has the label node iruya-node
STEP: verifying the node has the label node iruya-server-sfge57q7djm7
Dec 31 14:23:05.306: INFO: Pod coredns-5c98db65d4-bm4gs requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Dec 31 14:23:05.306: INFO: Pod coredns-5c98db65d4-xx8w8 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Dec 31 14:23:05.306: INFO: Pod etcd-iruya-server-sfge57q7djm7 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7
Dec 31 14:23:05.306: INFO: Pod kube-apiserver-iruya-server-sfge57q7djm7 requesting resource cpu=250m on Node iruya-server-sfge57q7djm7
Dec 31 14:23:05.306: INFO: Pod kube-controller-manager-iruya-server-sfge57q7djm7 requesting resource cpu=200m on Node iruya-server-sfge57q7djm7
Dec 31 14:23:05.306: INFO: Pod kube-proxy-58v95 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7
Dec 31 14:23:05.306: INFO: Pod kube-proxy-976zl requesting resource cpu=0m on Node iruya-node
Dec 31 14:23:05.306: INFO: Pod kube-scheduler-iruya-server-sfge57q7djm7 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Dec 31 14:23:05.306: INFO: Pod weave-net-bzl4d requesting resource cpu=20m on Node iruya-server-sfge57q7djm7
Dec 31 14:23:05.306: INFO: Pod weave-net-rlp57 requesting resource cpu=20m on Node iruya-node
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-1ae3c9a7-a801-4acb-bccd-edcd82c3a144.15e57aba60722cda], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4804/filler-pod-1ae3c9a7-a801-4acb-bccd-edcd82c3a144 to iruya-server-sfge57q7djm7]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-1ae3c9a7-a801-4acb-bccd-edcd82c3a144.15e57abb715ed1a1], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-1ae3c9a7-a801-4acb-bccd-edcd82c3a144.15e57abc57665f1b], Reason = [Created], Message = [Created container filler-pod-1ae3c9a7-a801-4acb-bccd-edcd82c3a144]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-1ae3c9a7-a801-4acb-bccd-edcd82c3a144.15e57abc7efa0e41], Reason = [Started], Message = [Started container filler-pod-1ae3c9a7-a801-4acb-bccd-edcd82c3a144]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-8f43c1d4-ecaf-4bd2-a221-89526027bc5c.15e57aba60722c83], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4804/filler-pod-8f43c1d4-ecaf-4bd2-a221-89526027bc5c to iruya-node]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-8f43c1d4-ecaf-4bd2-a221-89526027bc5c.15e57abb81f713eb], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-8f43c1d4-ecaf-4bd2-a221-89526027bc5c.15e57abc7077e997], Reason = [Created], Message = [Created container filler-pod-8f43c1d4-ecaf-4bd2-a221-89526027bc5c]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-8f43c1d4-ecaf-4bd2-a221-89526027bc5c.15e57abc95e8102a], Reason = [Started], Message = [Started container filler-pod-8f43c1d4-ecaf-4bd2-a221-89526027bc5c]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15e57abd2e4e7d63], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.]
STEP: removing the label node off the node iruya-server-sfge57q7djm7
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node iruya-node
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 14:23:18.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-4804" for this suite.
Dec 31 14:23:25.763: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 14:23:26.116: INFO: namespace sched-pred-4804 deletion completed in 7.555621103s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:21.128 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
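The `FailedScheduling` event above is produced by a pod whose CPU request cannot fit on either node once the filler pods have consumed the remaining allocatable CPU. A hedged sketch of such an "additional" pod (the request value is an assumption; the test computes it from the nodes' allocatable CPU):

```yaml
# Hypothetical pod that triggers "0/2 nodes are available: 2 Insufficient
# cpu." — its CPU request exceeds what the filler pods left unallocated.
apiVersion: v1
kind: Pod
metadata:
  name: additional-pod
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: "600m"   # assumed value; anything above the free capacity works
```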
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 14:23:26.118: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-112ba489-d25e-47b5-a364-30a13dfb4bb4
STEP: Creating a pod to test consume configMaps
Dec 31 14:23:26.432: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c7e592ca-3606-4fe1-90e1-7815019c0104" in namespace "projected-9573" to be "success or failure"
Dec 31 14:23:26.491: INFO: Pod "pod-projected-configmaps-c7e592ca-3606-4fe1-90e1-7815019c0104": Phase="Pending", Reason="", readiness=false. Elapsed: 58.692201ms
Dec 31 14:23:28.516: INFO: Pod "pod-projected-configmaps-c7e592ca-3606-4fe1-90e1-7815019c0104": Phase="Pending", Reason="", readiness=false. Elapsed: 2.083228251s
Dec 31 14:23:30.531: INFO: Pod "pod-projected-configmaps-c7e592ca-3606-4fe1-90e1-7815019c0104": Phase="Pending", Reason="", readiness=false. Elapsed: 4.097914728s
Dec 31 14:23:32.554: INFO: Pod "pod-projected-configmaps-c7e592ca-3606-4fe1-90e1-7815019c0104": Phase="Pending", Reason="", readiness=false. Elapsed: 6.121554633s
Dec 31 14:23:34.590: INFO: Pod "pod-projected-configmaps-c7e592ca-3606-4fe1-90e1-7815019c0104": Phase="Pending", Reason="", readiness=false. Elapsed: 8.157106596s
Dec 31 14:23:36.615: INFO: Pod "pod-projected-configmaps-c7e592ca-3606-4fe1-90e1-7815019c0104": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.182016831s
STEP: Saw pod success
Dec 31 14:23:36.615: INFO: Pod "pod-projected-configmaps-c7e592ca-3606-4fe1-90e1-7815019c0104" satisfied condition "success or failure"
Dec 31 14:23:36.631: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-c7e592ca-3606-4fe1-90e1-7815019c0104 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 31 14:23:36.821: INFO: Waiting for pod pod-projected-configmaps-c7e592ca-3606-4fe1-90e1-7815019c0104 to disappear
Dec 31 14:23:36.883: INFO: Pod pod-projected-configmaps-c7e592ca-3606-4fe1-90e1-7815019c0104 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 14:23:36.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9573" for this suite.
Dec 31 14:23:42.982: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 14:23:43.092: INFO: namespace projected-9573 deletion completed in 6.161066799s

• [SLOW TEST:16.974 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
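The projected-configMap test above mounts a ConfigMap through a `projected` volume and has the container print a key to prove it is consumable. A minimal sketch under stated assumptions (container command, mount path, and key name are placeholders; the ConfigMap name is taken from the log):

```yaml
# Sketch: consuming a ConfigMap via a projected volume source.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ['cat', '/etc/projected/data-1']   # key name assumed
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-112ba489-d25e-47b5-a364-30a13dfb4bb4
```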
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 14:23:43.092: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 31 14:23:43.222: INFO: Pod name rollover-pod: Found 0 pods out of 1
Dec 31 14:23:48.315: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Dec 31 14:23:52.341: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Dec 31 14:23:54.351: INFO: Creating deployment "test-rollover-deployment"
Dec 31 14:23:54.369: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Dec 31 14:23:56.384: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Dec 31 14:23:56.394: INFO: Ensure that both replica sets have 1 created replica
Dec 31 14:23:56.403: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Dec 31 14:23:56.414: INFO: Updating deployment test-rollover-deployment
Dec 31 14:23:56.414: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Dec 31 14:23:58.476: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Dec 31 14:23:58.490: INFO: Make sure deployment "test-rollover-deployment" is complete
Dec 31 14:23:58.497: INFO: all replica sets need to contain the pod-template-hash label
Dec 31 14:23:58.497: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713399034, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713399034, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713399036, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713399034, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 31 14:24:00.566: INFO: all replica sets need to contain the pod-template-hash label
Dec 31 14:24:00.566: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713399034, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713399034, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713399036, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713399034, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 31 14:24:02.549: INFO: all replica sets need to contain the pod-template-hash label
Dec 31 14:24:02.550: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713399034, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713399034, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713399036, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713399034, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 31 14:24:04.514: INFO: all replica sets need to contain the pod-template-hash label
Dec 31 14:24:04.515: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713399034, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713399034, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713399036, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713399034, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 31 14:24:06.522: INFO: all replica sets need to contain the pod-template-hash label
Dec 31 14:24:06.523: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713399034, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713399034, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713399044, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713399034, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 31 14:24:08.515: INFO: all replica sets need to contain the pod-template-hash label
Dec 31 14:24:08.516: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713399034, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713399034, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713399044, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713399034, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 31 14:24:10.516: INFO: all replica sets need to contain the pod-template-hash label
Dec 31 14:24:10.516: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713399034, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713399034, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713399044, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713399034, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 31 14:24:12.542: INFO: all replica sets need to contain the pod-template-hash label
Dec 31 14:24:12.543: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713399034, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713399034, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713399044, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713399034, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 31 14:24:14.526: INFO: all replica sets need to contain the pod-template-hash label
Dec 31 14:24:14.527: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713399034, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713399034, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713399044, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713399034, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 31 14:24:16.515: INFO: 
Dec 31 14:24:16.515: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Dec 31 14:24:16.527: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-9737,SelfLink:/apis/apps/v1/namespaces/deployment-9737/deployments/test-rollover-deployment,UID:02b281cb-febd-406c-91b8-0cd0e549e3e2,ResourceVersion:18780912,Generation:2,CreationTimestamp:2019-12-31 14:23:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2019-12-31 14:23:54 +0000 UTC 2019-12-31 14:23:54 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2019-12-31 14:24:15 +0000 UTC 2019-12-31 14:23:54 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

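The Deployment dump above shows `Strategy:RollingUpdate{MaxUnavailable:0,MaxSurge:1}` with `Replicas:*1`. Those two knobs bound how many pods the controller may run (and must keep available) during the rollover. A minimal sketch of that arithmetic, for absolute values only (percentage forms are out of scope; `rolling_update_bounds` is a hypothetical helper, not the real controller code):

```python
def rolling_update_bounds(replicas, max_surge, max_unavailable):
    """Replica-count bounds implied by a RollingUpdate strategy.

    With MaxSurge=1 and MaxUnavailable=0 -- the values in the dump
    above -- the controller may scale up to replicas+1 total pods,
    and must keep all `replicas` pods available while rolling over.
    """
    max_total = replicas + max_surge          # upper bound during surge
    min_available = replicas - max_unavailable  # availability floor
    return min_available, max_total

# The test's deployment: 1 replica, surge 1, unavailable 0.
assert rolling_update_bounds(1, 1, 0) == (1, 2)
```

This is why the status above briefly reports `Replicas:2` (old pod plus surged new pod) while `UnavailableReplicas` never exceeds the surge.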
Dec 31 14:24:16.534: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-9737,SelfLink:/apis/apps/v1/namespaces/deployment-9737/replicasets/test-rollover-deployment-854595fc44,UID:5f021966-97b9-4a3b-a1ce-993cf963fd4e,ResourceVersion:18780900,Generation:2,CreationTimestamp:2019-12-31 14:23:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 02b281cb-febd-406c-91b8-0cd0e549e3e2 0xc0027662f7 0xc0027662f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Dec 31 14:24:16.534: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Dec 31 14:24:16.534: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-9737,SelfLink:/apis/apps/v1/namespaces/deployment-9737/replicasets/test-rollover-controller,UID:dd709cf2-410c-4d23-a2e7-9cba6f47c0c9,ResourceVersion:18780910,Generation:2,CreationTimestamp:2019-12-31 14:23:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 02b281cb-febd-406c-91b8-0cd0e549e3e2 0xc002766227 0xc002766228}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 31 14:24:16.535: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-9737,SelfLink:/apis/apps/v1/namespaces/deployment-9737/replicasets/test-rollover-deployment-9b8b997cf,UID:1cf62715-4c09-47f6-a5ab-3819e637c31b,ResourceVersion:18780866,Generation:2,CreationTimestamp:2019-12-31 14:23:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 02b281cb-febd-406c-91b8-0cd0e549e3e2 0xc0027663c0 0xc0027663c1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 31 14:24:16.544: INFO: Pod "test-rollover-deployment-854595fc44-82gxj" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-82gxj,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-9737,SelfLink:/api/v1/namespaces/deployment-9737/pods/test-rollover-deployment-854595fc44-82gxj,UID:4ef9dac6-b669-4f5d-9cdd-eea3f141d614,ResourceVersion:18780883,Generation:0,CreationTimestamp:2019-12-31 14:23:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 5f021966-97b9-4a3b-a1ce-993cf963fd4e 0xc00275c917 0xc00275c918}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bhts8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bhts8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-bhts8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00275c990} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00275c9b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 14:23:56 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 14:24:04 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 14:24:04 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 14:23:56 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2019-12-31 14:23:56 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2019-12-31 14:24:04 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://0fea33fdcdbc334f9c2abb2a940a40effe2d37db250b8e61b91aeec0c0404e31}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 14:24:16.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-9737" for this suite.
Dec 31 14:24:24.608: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 14:24:24.694: INFO: namespace deployment-9737 deletion completed in 8.138978257s

• [SLOW TEST:41.602 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
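The repeated "all replica sets need to contain the pod-template-hash label" lines above, emitted roughly every two seconds, come from a poll-until-condition loop in the e2e framework. A minimal sketch of that pattern, with the same 2-second cadence (`wait_for` is a hypothetical stand-in, not the framework's actual helper):

```python
import time

def wait_for(condition, timeout=300.0, interval=2.0,
             clock=time.monotonic, sleep=time.sleep):
    """Poll `condition` every `interval` seconds until it returns True
    or `timeout` elapses; return whether the condition was met.
    `clock` and `sleep` are injectable so the loop can be tested
    without real waiting."""
    deadline = clock() + timeout
    while clock() < deadline:
        if condition():
            return True
        sleep(interval)
    return False

# Usage: a condition that becomes true on the third poll,
# as the rollover status eventually did above.
polls = {"n": 0}
def rolled_over():
    polls["n"] += 1
    return polls["n"] >= 3

assert wait_for(rolled_over, timeout=10, interval=0, sleep=lambda s: None)
```

The same loop shape also produces the long "Waiting for pod ... to disappear" runs in the lifecycle-hook test that follows.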
SS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 14:24:24.695: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Dec 31 14:27:29.147: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 31 14:27:29.186: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 31 14:27:31.186: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 31 14:27:31.236: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 31 14:27:33.186: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 31 14:27:33.196: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 31 14:27:35.186: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 31 14:27:35.205: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 31 14:27:37.186: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 31 14:27:37.195: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 31 14:27:39.186: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 31 14:27:39.197: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 31 14:27:41.186: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 31 14:27:41.199: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 31 14:27:43.186: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 31 14:27:43.195: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 31 14:27:45.186: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 31 14:27:45.198: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 31 14:27:47.186: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 31 14:27:47.194: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 31 14:27:49.186: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 31 14:27:49.195: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 31 14:27:51.186: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 31 14:27:51.200: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 31 14:27:53.186: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 31 14:27:53.195: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 31 14:27:55.186: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 31 14:27:55.196: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 31 14:27:57.186: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 31 14:27:57.194: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 31 14:27:59.186: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 31 14:27:59.192: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 31 14:28:01.186: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 31 14:28:01.199: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 31 14:28:03.186: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 31 14:28:03.194: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 31 14:28:05.186: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 31 14:28:05.198: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 31 14:28:07.186: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 31 14:28:07.199: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 31 14:28:09.186: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 31 14:28:09.360: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 31 14:28:11.186: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 31 14:28:11.199: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 31 14:28:13.186: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 31 14:28:13.201: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 31 14:28:15.186: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 31 14:28:15.194: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 31 14:28:17.186: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 31 14:28:17.194: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 31 14:28:19.186: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 31 14:28:19.232: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 31 14:28:21.186: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 31 14:28:21.196: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 31 14:28:23.186: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 31 14:28:23.196: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 31 14:28:25.186: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 31 14:28:25.200: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 31 14:28:27.187: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 31 14:28:27.197: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 31 14:28:29.186: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 31 14:28:29.201: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 31 14:28:31.186: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 31 14:28:31.196: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 31 14:28:33.186: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 31 14:28:33.196: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 31 14:28:35.186: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 31 14:28:35.197: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 31 14:28:37.186: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 31 14:28:37.193: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 31 14:28:39.186: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 31 14:28:39.196: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 31 14:28:41.186: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 31 14:28:41.205: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 31 14:28:43.186: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 31 14:28:43.194: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 31 14:28:45.186: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 31 14:28:45.450: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 31 14:28:47.186: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 31 14:28:47.198: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 31 14:28:49.186: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 31 14:28:49.197: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 31 14:28:51.186: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 31 14:28:51.204: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 31 14:28:53.187: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 31 14:28:53.200: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 31 14:28:55.186: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 31 14:28:55.193: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 31 14:28:57.186: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 31 14:28:57.196: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 31 14:28:59.186: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 31 14:28:59.209: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 31 14:29:01.186: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 31 14:29:01.197: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 31 14:29:03.186: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 31 14:29:03.197: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 31 14:29:05.186: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 31 14:29:05.229: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 31 14:29:07.186: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 31 14:29:07.194: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 31 14:29:09.186: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 31 14:29:09.195: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 31 14:29:11.186: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 31 14:29:11.199: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 31 14:29:13.186: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 31 14:29:13.194: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 31 14:29:15.186: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 31 14:29:15.198: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 31 14:29:17.186: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 31 14:29:17.198: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 31 14:29:19.186: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 31 14:29:19.207: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 31 14:29:21.186: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 31 14:29:21.197: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 31 14:29:23.186: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 31 14:29:23.201: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 31 14:29:25.186: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 31 14:29:25.196: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 31 14:29:27.186: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 31 14:29:27.195: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 14:29:27.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-4792" for this suite.
Dec 31 14:29:51.241: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 14:29:51.335: INFO: namespace container-lifecycle-hook-4792 deletion completed in 24.132395802s

• [SLOW TEST:326.641 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
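The test above creates a pod with a postStart exec hook and checks the hook ran. The relevant semantics: the hook fires right after the container is created, and a failed hook kills the container. A toy model of that contract (illustrative only; `Container` and its fields are invented for this sketch, not kubelet code):

```python
from dataclasses import dataclass, field

@dataclass
class Container:
    """Toy model of postStart hook handling: the container only
    reaches Running if the hook (when present) completes, and a
    hook failure terminates the container."""
    events: list = field(default_factory=list)
    state: str = "Waiting"

    def start(self, post_start=None):
        self.events.append("entrypoint started")
        if post_start is not None:
            try:
                post_start()
                self.events.append("postStart completed")
            except Exception:
                self.state = "Terminated"
                self.events.append("postStart failed -> container killed")
                return
        self.state = "Running"

c = Container()
c.start(post_start=lambda: None)
assert c.state == "Running"
```

Note the hook gives no ordering guarantee relative to the container's entrypoint beyond "after creation", which is why the e2e test verifies the hook's side effect rather than timing.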
SSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 14:29:51.337: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 31 14:29:51.596: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d558673d-3e82-4329-a657-57e9556712f5" in namespace "downward-api-9357" to be "success or failure"
Dec 31 14:29:51.614: INFO: Pod "downwardapi-volume-d558673d-3e82-4329-a657-57e9556712f5": Phase="Pending", Reason="", readiness=false. Elapsed: 17.117729ms
Dec 31 14:29:53.623: INFO: Pod "downwardapi-volume-d558673d-3e82-4329-a657-57e9556712f5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026331408s
Dec 31 14:29:55.630: INFO: Pod "downwardapi-volume-d558673d-3e82-4329-a657-57e9556712f5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033728901s
Dec 31 14:29:57.640: INFO: Pod "downwardapi-volume-d558673d-3e82-4329-a657-57e9556712f5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043276195s
Dec 31 14:29:59.663: INFO: Pod "downwardapi-volume-d558673d-3e82-4329-a657-57e9556712f5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.066583429s
Dec 31 14:30:01.672: INFO: Pod "downwardapi-volume-d558673d-3e82-4329-a657-57e9556712f5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.075591534s
STEP: Saw pod success
Dec 31 14:30:01.672: INFO: Pod "downwardapi-volume-d558673d-3e82-4329-a657-57e9556712f5" satisfied condition "success or failure"
Dec 31 14:30:01.677: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-d558673d-3e82-4329-a657-57e9556712f5 container client-container: 
STEP: delete the pod
Dec 31 14:30:01.988: INFO: Waiting for pod downwardapi-volume-d558673d-3e82-4329-a657-57e9556712f5 to disappear
Dec 31 14:30:02.041: INFO: Pod downwardapi-volume-d558673d-3e82-4329-a657-57e9556712f5 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 14:30:02.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9357" for this suite.
Dec 31 14:30:08.077: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 14:30:08.194: INFO: namespace downward-api-9357 deletion completed in 6.14272592s

• [SLOW TEST:16.856 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
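The "success or failure" waits above (5m0s timeout, checks every ~2s) watch the pod phase until it leaves Pending/Running. A simplified sketch of that condition, with simulated time rather than real sleeps (get_phase is a hypothetical callable, not the e2e framework's API):

```python
def wait_for_success_or_failure(get_phase, timeout=300.0, interval=2.0):
    """Poll the pod phase until it reaches a terminal state, as in the
    'to be "success or failure"' waits in the log. Returns the terminal
    phase or raises on timeout. Time is counted, not slept, so the
    sketch stays deterministic.
    """
    waited = 0.0
    while waited <= timeout:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        waited += interval  # the real framework sleeps here
    raise TimeoutError(f"pod did not reach a terminal phase in {timeout}s")
```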
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 14:30:08.196: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating cluster-info
Dec 31 14:30:08.338: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Dec 31 14:30:10.453: INFO: stderr: ""
Dec 31 14:30:10.453: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 14:30:10.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7580" for this suite.
Dec 31 14:30:16.501: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 14:30:16.673: INFO: namespace kubectl-7580 deletion completed in 6.207669461s

• [SLOW TEST:8.477 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
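The `cluster-info` stdout captured above is wrapped in ANSI SGR colour codes (`\x1b[0;32m` green, `\x1b[0;33m` yellow), which is why the raw string looks noisy. Stripping them recovers the human-readable text the test validates; a small sketch using the exact bytes from the log:

```python
import re

ANSI = re.compile(r"\x1b\[[0-9;]*m")  # SGR colour sequences like \x1b[0;32m

def strip_ansi(s: str) -> str:
    """Remove terminal colour codes from captured CLI output."""
    return ANSI.sub("", s)

# First line of the stdout captured in the log, colour codes included:
stdout = ("\x1b[0;32mKubernetes master\x1b[0m is running at "
          "\x1b[0;33mhttps://172.24.4.57:6443\x1b[0m\n")
```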
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 14:30:16.674: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 14:30:27.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-239" for this suite.
Dec 31 14:30:33.082: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 14:30:33.192: INFO: namespace emptydir-wrapper-239 deletion completed in 6.13158086s

• [SLOW TEST:16.518 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 14:30:33.193: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-815bd86a-34cf-405d-9e03-f329a85aafcf
STEP: Creating a pod to test consume secrets
Dec 31 14:30:33.267: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4b38bdad-1a5f-4868-af57-ccaa74504399" in namespace "projected-2302" to be "success or failure"
Dec 31 14:30:33.270: INFO: Pod "pod-projected-secrets-4b38bdad-1a5f-4868-af57-ccaa74504399": Phase="Pending", Reason="", readiness=false. Elapsed: 3.417199ms
Dec 31 14:30:35.282: INFO: Pod "pod-projected-secrets-4b38bdad-1a5f-4868-af57-ccaa74504399": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014707243s
Dec 31 14:30:37.289: INFO: Pod "pod-projected-secrets-4b38bdad-1a5f-4868-af57-ccaa74504399": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021519572s
Dec 31 14:30:39.946: INFO: Pod "pod-projected-secrets-4b38bdad-1a5f-4868-af57-ccaa74504399": Phase="Pending", Reason="", readiness=false. Elapsed: 6.678842064s
Dec 31 14:30:41.953: INFO: Pod "pod-projected-secrets-4b38bdad-1a5f-4868-af57-ccaa74504399": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.6860731s
STEP: Saw pod success
Dec 31 14:30:41.953: INFO: Pod "pod-projected-secrets-4b38bdad-1a5f-4868-af57-ccaa74504399" satisfied condition "success or failure"
Dec 31 14:30:41.959: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-4b38bdad-1a5f-4868-af57-ccaa74504399 container projected-secret-volume-test: 
STEP: delete the pod
Dec 31 14:30:42.121: INFO: Waiting for pod pod-projected-secrets-4b38bdad-1a5f-4868-af57-ccaa74504399 to disappear
Dec 31 14:30:42.133: INFO: Pod pod-projected-secrets-4b38bdad-1a5f-4868-af57-ccaa74504399 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 14:30:42.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2302" for this suite.
Dec 31 14:30:48.275: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 14:30:48.432: INFO: namespace projected-2302 deletion completed in 6.286229472s

• [SLOW TEST:15.239 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
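The `defaultMode` exercised above is a Unix file mode applied to the projected secret's files. One detail worth remembering: in JSON manifests the field is a plain decimal integer, so the common permission 0644 is written as 420. A quick conversion helper to sanity-check such values:

```python
def mode_to_octal_string(mode: int) -> str:
    """Render a decimal file-mode integer (as it appears in JSON
    manifests) in the familiar zero-padded octal form, e.g. 420 -> 0644.
    """
    return format(mode, "04o")
```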
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 14:30:48.435: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Dec 31 14:30:59.219: INFO: Successfully updated pod "labelsupdate25c92d33-257c-4df8-aaa3-229947fae089"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 14:31:01.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1116" for this suite.
Dec 31 14:31:23.333: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 14:31:23.435: INFO: namespace projected-1116 deletion completed in 22.126731511s

• [SLOW TEST:35.000 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 14:31:23.436: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override command
Dec 31 14:31:23.635: INFO: Waiting up to 5m0s for pod "client-containers-9ec40799-686e-4fea-9858-790b10f893c4" in namespace "containers-7834" to be "success or failure"
Dec 31 14:31:23.652: INFO: Pod "client-containers-9ec40799-686e-4fea-9858-790b10f893c4": Phase="Pending", Reason="", readiness=false. Elapsed: 16.496264ms
Dec 31 14:31:25.717: INFO: Pod "client-containers-9ec40799-686e-4fea-9858-790b10f893c4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081987213s
Dec 31 14:31:27.726: INFO: Pod "client-containers-9ec40799-686e-4fea-9858-790b10f893c4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.090663346s
Dec 31 14:31:29.743: INFO: Pod "client-containers-9ec40799-686e-4fea-9858-790b10f893c4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.108066812s
Dec 31 14:31:31.778: INFO: Pod "client-containers-9ec40799-686e-4fea-9858-790b10f893c4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.142745159s
Dec 31 14:31:33.788: INFO: Pod "client-containers-9ec40799-686e-4fea-9858-790b10f893c4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.152589718s
STEP: Saw pod success
Dec 31 14:31:33.788: INFO: Pod "client-containers-9ec40799-686e-4fea-9858-790b10f893c4" satisfied condition "success or failure"
Dec 31 14:31:33.792: INFO: Trying to get logs from node iruya-node pod client-containers-9ec40799-686e-4fea-9858-790b10f893c4 container test-container: 
STEP: delete the pod
Dec 31 14:31:34.065: INFO: Waiting for pod client-containers-9ec40799-686e-4fea-9858-790b10f893c4 to disappear
Dec 31 14:31:34.078: INFO: Pod client-containers-9ec40799-686e-4fea-9858-790b10f893c4 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 14:31:34.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-7834" for this suite.
Dec 31 14:31:40.192: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 14:31:40.325: INFO: namespace containers-7834 deletion completed in 6.239459887s

• [SLOW TEST:16.890 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 14:31:40.327: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap that has name configmap-test-emptyKey-f99b6cf9-485f-4df2-8847-2ca2e67b8fa6
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 14:31:40.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5008" for this suite.
Dec 31 14:31:46.571: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 14:31:46.870: INFO: namespace configmap-5008 deletion completed in 6.36227417s

• [SLOW TEST:6.543 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
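The create in the test above is rejected by the API server because ConfigMap data keys must be non-empty. A sketch that approximates the rule (non-empty, at most 253 characters, alphanumerics plus `-`, `_`, `.`); this is a paraphrase of the validation, not the kubernetes source:

```python
import re

KEY_RE = re.compile(r"^[-._a-zA-Z0-9]+$")

def is_valid_configmap_key(key: str) -> bool:
    """Approximate the API server's ConfigMap key validation that makes
    the empty-key create fail: keys must be non-empty, <= 253 chars,
    and limited to alphanumerics, '-', '_' and '.'.
    """
    return 0 < len(key) <= 253 and bool(KEY_RE.match(key))
```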
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 14:31:46.870: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod test-webserver-c574a8e5-9dcf-49aa-a844-837776406430 in namespace container-probe-9346
Dec 31 14:31:55.064: INFO: Started pod test-webserver-c574a8e5-9dcf-49aa-a844-837776406430 in namespace container-probe-9346
STEP: checking the pod's current state and verifying that restartCount is present
Dec 31 14:31:55.068: INFO: Initial restart count of pod test-webserver-c574a8e5-9dcf-49aa-a844-837776406430 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 14:35:55.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9346" for this suite.
Dec 31 14:36:01.584: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 14:36:01.707: INFO: namespace container-probe-9346 deletion completed in 6.152763188s

• [SLOW TEST:254.837 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
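The `should *not* be restarted` spec above records the initial restartCount and then simply observes it for several minutes, failing if it ever rises. A deterministic sketch of that check (get_restart_count is a hypothetical callable; time is counted rather than slept):

```python
def assert_no_restarts(get_restart_count, observe_for=240.0, interval=10.0):
    """Record the initial restartCount, then re-check it over a fixed
    observation window, raising if it ever changes - the shape of the
    'not restarted' liveness check in the log.
    """
    initial = get_restart_count()
    elapsed = 0.0
    while elapsed < observe_for:
        if get_restart_count() != initial:
            raise AssertionError("pod restarted during observation window")
        elapsed += interval  # simulated wait between checks
    return initial
```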
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 14:36:01.709: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-a3d4ff74-6540-437d-8de5-163f91dfe70d in namespace container-probe-2866
Dec 31 14:36:09.913: INFO: Started pod busybox-a3d4ff74-6540-437d-8de5-163f91dfe70d in namespace container-probe-2866
STEP: checking the pod's current state and verifying that restartCount is present
Dec 31 14:36:09.921: INFO: Initial restart count of pod busybox-a3d4ff74-6540-437d-8de5-163f91dfe70d is 0
Dec 31 14:37:06.220: INFO: Restart count of pod container-probe-2866/busybox-a3d4ff74-6540-437d-8de5-163f91dfe70d is now 1 (56.299580477s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 14:37:06.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2866" for this suite.
Dec 31 14:37:12.342: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 14:37:12.447: INFO: namespace container-probe-2866 deletion completed in 6.176752993s

• [SLOW TEST:70.738 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 14:37:12.447: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 31 14:37:12.620: INFO: Create a RollingUpdate DaemonSet
Dec 31 14:37:12.642: INFO: Check that daemon pods launch on every node of the cluster
Dec 31 14:37:12.667: INFO: Number of nodes with available pods: 0
Dec 31 14:37:12.667: INFO: Node iruya-node is running more than one daemon pod
Dec 31 14:37:13.690: INFO: Number of nodes with available pods: 0
Dec 31 14:37:13.690: INFO: Node iruya-node is running more than one daemon pod
Dec 31 14:37:14.685: INFO: Number of nodes with available pods: 0
Dec 31 14:37:14.685: INFO: Node iruya-node is running more than one daemon pod
Dec 31 14:37:15.736: INFO: Number of nodes with available pods: 0
Dec 31 14:37:15.736: INFO: Node iruya-node is running more than one daemon pod
Dec 31 14:37:16.738: INFO: Number of nodes with available pods: 0
Dec 31 14:37:16.738: INFO: Node iruya-node is running more than one daemon pod
Dec 31 14:37:17.690: INFO: Number of nodes with available pods: 0
Dec 31 14:37:17.690: INFO: Node iruya-node is running more than one daemon pod
Dec 31 14:37:18.815: INFO: Number of nodes with available pods: 0
Dec 31 14:37:18.815: INFO: Node iruya-node is running more than one daemon pod
Dec 31 14:37:22.169: INFO: Number of nodes with available pods: 0
Dec 31 14:37:22.169: INFO: Node iruya-node is running more than one daemon pod
Dec 31 14:37:22.812: INFO: Number of nodes with available pods: 0
Dec 31 14:37:22.812: INFO: Node iruya-node is running more than one daemon pod
Dec 31 14:37:23.688: INFO: Number of nodes with available pods: 0
Dec 31 14:37:23.688: INFO: Node iruya-node is running more than one daemon pod
Dec 31 14:37:24.683: INFO: Number of nodes with available pods: 2
Dec 31 14:37:24.684: INFO: Number of running nodes: 2, number of available pods: 2
Dec 31 14:37:24.684: INFO: Update the DaemonSet to trigger a rollout
Dec 31 14:37:24.696: INFO: Updating DaemonSet daemon-set
Dec 31 14:37:39.059: INFO: Roll back the DaemonSet before rollout is complete
Dec 31 14:37:39.072: INFO: Updating DaemonSet daemon-set
Dec 31 14:37:39.072: INFO: Make sure DaemonSet rollback is complete
Dec 31 14:37:39.354: INFO: Wrong image for pod: daemon-set-bnjks. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Dec 31 14:37:39.354: INFO: Pod daemon-set-bnjks is not available
Dec 31 14:37:40.509: INFO: Wrong image for pod: daemon-set-bnjks. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Dec 31 14:37:40.510: INFO: Pod daemon-set-bnjks is not available
Dec 31 14:37:41.485: INFO: Wrong image for pod: daemon-set-bnjks. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Dec 31 14:37:41.485: INFO: Pod daemon-set-bnjks is not available
Dec 31 14:37:43.144: INFO: Wrong image for pod: daemon-set-bnjks. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Dec 31 14:37:43.144: INFO: Pod daemon-set-bnjks is not available
Dec 31 14:37:44.481: INFO: Pod daemon-set-lxrhl is not available
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1686, will wait for the garbage collector to delete the pods
Dec 31 14:37:44.566: INFO: Deleting DaemonSet.extensions daemon-set took: 9.32359ms
Dec 31 14:37:45.167: INFO: Terminating DaemonSet.extensions daemon-set pods took: 600.451926ms
Dec 31 14:37:56.653: INFO: Number of nodes with available pods: 0
Dec 31 14:37:56.653: INFO: Number of running nodes: 0, number of available pods: 0
Dec 31 14:37:56.657: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1686/daemonsets","resourceVersion":"18782358"},"items":null}

Dec 31 14:37:56.662: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1686/pods","resourceVersion":"18782358"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 14:37:56.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1686" for this suite.
Dec 31 14:38:02.749: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 14:38:02.844: INFO: namespace daemonsets-1686 deletion completed in 6.163734915s

• [SLOW TEST:50.397 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
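The rollback check above walks the DaemonSet's pods and flags any still running the bad image (`foo:non-existent`) until every pod is back on the expected one. A sketch of that comparison, producing messages in the same shape as the log (the `pods` mapping is a hypothetical simplification of the real pod list):

```python
EXPECTED = "docker.io/library/nginx:1.14-alpine"

def rollback_complete(pods, expected=EXPECTED):
    """Return True once no pod carries a wrong image, printing a
    'Wrong image for pod' line per offender, as the e2e check logs.
    pods: {pod_name: image} (illustrative structure).
    """
    wrong = {name: img for name, img in pods.items() if img != expected}
    for name, img in sorted(wrong.items()):
        print(f"Wrong image for pod: {name}. Expected: {expected}, got: {img}.")
    return not wrong
```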
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 14:38:02.844: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 31 14:38:03.105: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ec2bf361-9f71-499a-b120-34921f193440" in namespace "projected-8515" to be "success or failure"
Dec 31 14:38:03.133: INFO: Pod "downwardapi-volume-ec2bf361-9f71-499a-b120-34921f193440": Phase="Pending", Reason="", readiness=false. Elapsed: 27.46907ms
Dec 31 14:38:05.139: INFO: Pod "downwardapi-volume-ec2bf361-9f71-499a-b120-34921f193440": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034219819s
Dec 31 14:38:07.149: INFO: Pod "downwardapi-volume-ec2bf361-9f71-499a-b120-34921f193440": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043368719s
Dec 31 14:38:09.158: INFO: Pod "downwardapi-volume-ec2bf361-9f71-499a-b120-34921f193440": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053178277s
Dec 31 14:38:11.175: INFO: Pod "downwardapi-volume-ec2bf361-9f71-499a-b120-34921f193440": Phase="Pending", Reason="", readiness=false. Elapsed: 8.069618032s
Dec 31 14:38:13.187: INFO: Pod "downwardapi-volume-ec2bf361-9f71-499a-b120-34921f193440": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.0820688s
STEP: Saw pod success
Dec 31 14:38:13.187: INFO: Pod "downwardapi-volume-ec2bf361-9f71-499a-b120-34921f193440" satisfied condition "success or failure"
Dec 31 14:38:13.191: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-ec2bf361-9f71-499a-b120-34921f193440 container client-container: 
STEP: delete the pod
Dec 31 14:38:13.284: INFO: Waiting for pod downwardapi-volume-ec2bf361-9f71-499a-b120-34921f193440 to disappear
Dec 31 14:38:13.299: INFO: Pod downwardapi-volume-ec2bf361-9f71-499a-b120-34921f193440 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 14:38:13.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8515" for this suite.
Dec 31 14:38:19.404: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 14:38:19.551: INFO: namespace projected-8515 deletion completed in 6.236020589s

• [SLOW TEST:16.707 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 14:38:19.551: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
Dec 31 14:38:20.209: INFO: created pod pod-service-account-defaultsa
Dec 31 14:38:20.210: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Dec 31 14:38:20.238: INFO: created pod pod-service-account-mountsa
Dec 31 14:38:20.238: INFO: pod pod-service-account-mountsa service account token volume mount: true
Dec 31 14:38:20.323: INFO: created pod pod-service-account-nomountsa
Dec 31 14:38:20.323: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Dec 31 14:38:20.349: INFO: created pod pod-service-account-defaultsa-mountspec
Dec 31 14:38:20.349: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Dec 31 14:38:20.422: INFO: created pod pod-service-account-mountsa-mountspec
Dec 31 14:38:20.422: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Dec 31 14:38:20.959: INFO: created pod pod-service-account-nomountsa-mountspec
Dec 31 14:38:20.959: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Dec 31 14:38:21.611: INFO: created pod pod-service-account-defaultsa-nomountspec
Dec 31 14:38:21.611: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Dec 31 14:38:22.131: INFO: created pod pod-service-account-mountsa-nomountspec
Dec 31 14:38:22.131: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Dec 31 14:38:22.453: INFO: created pod pod-service-account-nomountsa-nomountspec
Dec 31 14:38:22.453: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 14:38:22.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-3889" for this suite.
Dec 31 14:38:52.131: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 14:38:52.261: INFO: namespace svcaccounts-3889 deletion completed in 29.791280083s

• [SLOW TEST:32.710 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 14:38:52.262: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-2958, will wait for the garbage collector to delete the pods
Dec 31 14:39:04.598: INFO: Deleting Job.batch foo took: 121.215254ms
Dec 31 14:39:04.899: INFO: Terminating Job.batch foo pods took: 301.377492ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 14:39:46.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-2958" for this suite.
Dec 31 14:39:52.763: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 14:39:52.919: INFO: namespace job-2958 deletion completed in 6.191972274s

• [SLOW TEST:60.657 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 14:39:52.920: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 31 14:39:53.069: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fdff2661-d415-4ca5-bac5-3fa7c74c54e3" in namespace "projected-1611" to be "success or failure"
Dec 31 14:39:53.084: INFO: Pod "downwardapi-volume-fdff2661-d415-4ca5-bac5-3fa7c74c54e3": Phase="Pending", Reason="", readiness=false. Elapsed: 14.770922ms
Dec 31 14:39:55.095: INFO: Pod "downwardapi-volume-fdff2661-d415-4ca5-bac5-3fa7c74c54e3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025222401s
Dec 31 14:39:57.106: INFO: Pod "downwardapi-volume-fdff2661-d415-4ca5-bac5-3fa7c74c54e3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036366743s
Dec 31 14:39:59.112: INFO: Pod "downwardapi-volume-fdff2661-d415-4ca5-bac5-3fa7c74c54e3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042220344s
Dec 31 14:40:01.119: INFO: Pod "downwardapi-volume-fdff2661-d415-4ca5-bac5-3fa7c74c54e3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.04974573s
Dec 31 14:40:03.129: INFO: Pod "downwardapi-volume-fdff2661-d415-4ca5-bac5-3fa7c74c54e3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.059604377s
STEP: Saw pod success
Dec 31 14:40:03.129: INFO: Pod "downwardapi-volume-fdff2661-d415-4ca5-bac5-3fa7c74c54e3" satisfied condition "success or failure"
Dec 31 14:40:03.134: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-fdff2661-d415-4ca5-bac5-3fa7c74c54e3 container client-container: 
STEP: delete the pod
Dec 31 14:40:03.241: INFO: Waiting for pod downwardapi-volume-fdff2661-d415-4ca5-bac5-3fa7c74c54e3 to disappear
Dec 31 14:40:03.272: INFO: Pod downwardapi-volume-fdff2661-d415-4ca5-bac5-3fa7c74c54e3 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 14:40:03.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1611" for this suite.
Dec 31 14:40:09.317: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 14:40:09.486: INFO: namespace projected-1611 deletion completed in 6.205043265s

• [SLOW TEST:16.566 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 14:40:09.487: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 31 14:40:09.563: INFO: Creating deployment "test-recreate-deployment"
Dec 31 14:40:09.573: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1
Dec 31 14:40:09.650: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Dec 31 14:40:11.669: INFO: Waiting for deployment "test-recreate-deployment" to complete
Dec 31 14:40:11.674: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713400009, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713400009, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713400009, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713400009, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 31 14:40:13.686: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713400009, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713400009, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713400009, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713400009, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 31 14:40:15.687: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713400009, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713400009, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713400009, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713400009, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 31 14:40:17.687: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713400009, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713400009, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713400009, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713400009, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 31 14:40:19.684: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Dec 31 14:40:19.734: INFO: Updating deployment test-recreate-deployment
Dec 31 14:40:19.734: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Dec 31 14:40:20.203: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-8146,SelfLink:/apis/apps/v1/namespaces/deployment-8146/deployments/test-recreate-deployment,UID:e4505833-8df5-4fce-878b-96196d002ca0,ResourceVersion:18782816,Generation:2,CreationTimestamp:2019-12-31 14:40:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2019-12-31 14:40:20 +0000 UTC 2019-12-31 14:40:20 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2019-12-31 14:40:20 +0000 UTC 2019-12-31 14:40:09 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Dec 31 14:40:20.207: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-8146,SelfLink:/apis/apps/v1/namespaces/deployment-8146/replicasets/test-recreate-deployment-5c8c9cc69d,UID:ecbb1624-d608-4d48-8f60-4d91408566a5,ResourceVersion:18782813,Generation:1,CreationTimestamp:2019-12-31 14:40:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment e4505833-8df5-4fce-878b-96196d002ca0 0xc001fb01c7 0xc001fb01c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 31 14:40:20.207: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Dec 31 14:40:20.208: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-8146,SelfLink:/apis/apps/v1/namespaces/deployment-8146/replicasets/test-recreate-deployment-6df85df6b9,UID:90735e4f-d0d2-43bc-8ed2-6a4527c6ea4b,ResourceVersion:18782802,Generation:2,CreationTimestamp:2019-12-31 14:40:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment e4505833-8df5-4fce-878b-96196d002ca0 0xc001fb03a7 0xc001fb03a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Dec 31 14:40:20.217: INFO: Pod "test-recreate-deployment-5c8c9cc69d-khqvf" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-khqvf,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-8146,SelfLink:/api/v1/namespaces/deployment-8146/pods/test-recreate-deployment-5c8c9cc69d-khqvf,UID:2a640d5d-1f3b-45db-ab2c-0a6cee191ba5,ResourceVersion:18782815,Generation:0,CreationTimestamp:2019-12-31 14:40:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d ecbb1624-d608-4d48-8f60-4d91408566a5 0xc001fb1297 0xc001fb1298}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rsswt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rsswt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rsswt true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001fb1360} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001fb13d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 14:40:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 14:40:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-31 14:40:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 14:40:20 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2019-12-31 14:40:20 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 14:40:20.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-8146" for this suite.
Dec 31 14:40:28.240: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 14:40:28.371: INFO: namespace deployment-8146 deletion completed in 8.148251081s

• [SLOW TEST:18.884 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 14:40:28.372: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-bcae4310-38e5-4871-8409-7f0991fa4ea8 in namespace container-probe-7313
Dec 31 14:40:40.691: INFO: Started pod liveness-bcae4310-38e5-4871-8409-7f0991fa4ea8 in namespace container-probe-7313
STEP: checking the pod's current state and verifying that restartCount is present
Dec 31 14:40:40.695: INFO: Initial restart count of pod liveness-bcae4310-38e5-4871-8409-7f0991fa4ea8 is 0
Dec 31 14:40:54.876: INFO: Restart count of pod container-probe-7313/liveness-bcae4310-38e5-4871-8409-7f0991fa4ea8 is now 1 (14.180568229s elapsed)
Dec 31 14:41:15.105: INFO: Restart count of pod container-probe-7313/liveness-bcae4310-38e5-4871-8409-7f0991fa4ea8 is now 2 (34.409942901s elapsed)
Dec 31 14:41:35.239: INFO: Restart count of pod container-probe-7313/liveness-bcae4310-38e5-4871-8409-7f0991fa4ea8 is now 3 (54.543637053s elapsed)
Dec 31 14:41:55.352: INFO: Restart count of pod container-probe-7313/liveness-bcae4310-38e5-4871-8409-7f0991fa4ea8 is now 4 (1m14.657317113s elapsed)
Dec 31 14:43:05.740: INFO: Restart count of pod container-probe-7313/liveness-bcae4310-38e5-4871-8409-7f0991fa4ea8 is now 5 (2m25.044702953s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 14:43:05.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7313" for this suite.
Dec 31 14:43:11.855: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 14:43:12.022: INFO: namespace container-probe-7313 deletion completed in 6.221709364s

• [SLOW TEST:163.651 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 14:43:12.023: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 31 14:43:12.187: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Dec 31 14:43:17.296: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Dec 31 14:43:23.325: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Dec 31 14:43:31.401: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-2587,SelfLink:/apis/apps/v1/namespaces/deployment-2587/deployments/test-cleanup-deployment,UID:0f754e18-1d3c-4031-8d2a-0d7a887d7d02,ResourceVersion:18783193,Generation:1,CreationTimestamp:2019-12-31 14:43:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 1,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2019-12-31 14:43:23 +0000 UTC 2019-12-31 14:43:23 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2019-12-31 14:43:30 +0000 UTC 2019-12-31 14:43:23 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-cleanup-deployment-55bbcbc84c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Dec 31 14:43:31.406: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-2587,SelfLink:/apis/apps/v1/namespaces/deployment-2587/replicasets/test-cleanup-deployment-55bbcbc84c,UID:16fcb17b-94b3-4a7e-b1b0-4991d95b2536,ResourceVersion:18783182,Generation:1,CreationTimestamp:2019-12-31 14:43:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 0f754e18-1d3c-4031-8d2a-0d7a887d7d02 0xc002a453c7 0xc002a453c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Dec 31 14:43:31.412: INFO: Pod "test-cleanup-deployment-55bbcbc84c-dvr48" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-dvr48,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-2587,SelfLink:/api/v1/namespaces/deployment-2587/pods/test-cleanup-deployment-55bbcbc84c-dvr48,UID:f024f0c5-f58c-40d6-a037-a6d99145d614,ResourceVersion:18783181,Generation:0,CreationTimestamp:2019-12-31 14:43:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c 16fcb17b-94b3-4a7e-b1b0-4991d95b2536 0xc002a459b7 0xc002a459b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-f8txd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-f8txd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-f8txd true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002a45a30} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002a45a60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 14:43:23 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 14:43:30 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 14:43:30 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-31 14:43:23 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2019-12-31 14:43:23 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2019-12-31 14:43:30 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://26c34e324fa136608b69ceb4f85f2f251b79399a9141f3bc4db0f6b861af8a90}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 14:43:31.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2587" for this suite.
Dec 31 14:43:37.489: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 14:43:37.637: INFO: namespace deployment-2587 deletion completed in 6.218857137s

• [SLOW TEST:25.614 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
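Editor's note: the RevisionHistoryLimit:*0 visible in the Deployment dump above is what drives this cleanup test — with revisionHistoryLimit set to 0, the Deployment controller deletes superseded ReplicaSets as soon as they are fully scaled down. A minimal manifest reproducing that setup (a sketch; the name and image mirror the log, other fields are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-cleanup-deployment
spec:
  replicas: 1
  revisionHistoryLimit: 0   # controller prunes old ReplicaSets immediately
  selector:
    matchLabels:
      name: cleanup-pod
  template:
    metadata:
      labels:
        name: cleanup-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
```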
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 14:43:37.638: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-projected-dwwc
STEP: Creating a pod to test atomic-volume-subpath
Dec 31 14:43:37.849: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-dwwc" in namespace "subpath-2716" to be "success or failure"
Dec 31 14:43:37.928: INFO: Pod "pod-subpath-test-projected-dwwc": Phase="Pending", Reason="", readiness=false. Elapsed: 78.980775ms
Dec 31 14:43:39.939: INFO: Pod "pod-subpath-test-projected-dwwc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089953838s
Dec 31 14:43:42.079: INFO: Pod "pod-subpath-test-projected-dwwc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.229889586s
Dec 31 14:43:44.127: INFO: Pod "pod-subpath-test-projected-dwwc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.278374591s
Dec 31 14:43:46.138: INFO: Pod "pod-subpath-test-projected-dwwc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.289452099s
Dec 31 14:43:48.148: INFO: Pod "pod-subpath-test-projected-dwwc": Phase="Running", Reason="", readiness=true. Elapsed: 10.299282739s
Dec 31 14:43:50.158: INFO: Pod "pod-subpath-test-projected-dwwc": Phase="Running", Reason="", readiness=true. Elapsed: 12.309013511s
Dec 31 14:43:52.166: INFO: Pod "pod-subpath-test-projected-dwwc": Phase="Running", Reason="", readiness=true. Elapsed: 14.317485236s
Dec 31 14:43:54.177: INFO: Pod "pod-subpath-test-projected-dwwc": Phase="Running", Reason="", readiness=true. Elapsed: 16.327851988s
Dec 31 14:43:56.187: INFO: Pod "pod-subpath-test-projected-dwwc": Phase="Running", Reason="", readiness=true. Elapsed: 18.338456885s
Dec 31 14:43:58.195: INFO: Pod "pod-subpath-test-projected-dwwc": Phase="Running", Reason="", readiness=true. Elapsed: 20.346565204s
Dec 31 14:44:00.203: INFO: Pod "pod-subpath-test-projected-dwwc": Phase="Running", Reason="", readiness=true. Elapsed: 22.353826608s
Dec 31 14:44:02.214: INFO: Pod "pod-subpath-test-projected-dwwc": Phase="Running", Reason="", readiness=true. Elapsed: 24.365123256s
Dec 31 14:44:04.225: INFO: Pod "pod-subpath-test-projected-dwwc": Phase="Running", Reason="", readiness=true. Elapsed: 26.376515813s
Dec 31 14:44:06.235: INFO: Pod "pod-subpath-test-projected-dwwc": Phase="Running", Reason="", readiness=true. Elapsed: 28.385891889s
Dec 31 14:44:08.245: INFO: Pod "pod-subpath-test-projected-dwwc": Phase="Running", Reason="", readiness=true. Elapsed: 30.395920774s
Dec 31 14:44:10.255: INFO: Pod "pod-subpath-test-projected-dwwc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.406415602s
STEP: Saw pod success
Dec 31 14:44:10.255: INFO: Pod "pod-subpath-test-projected-dwwc" satisfied condition "success or failure"
Dec 31 14:44:10.259: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-projected-dwwc container test-container-subpath-projected-dwwc: 
STEP: delete the pod
Dec 31 14:44:10.317: INFO: Waiting for pod pod-subpath-test-projected-dwwc to disappear
Dec 31 14:44:10.328: INFO: Pod pod-subpath-test-projected-dwwc no longer exists
STEP: Deleting pod pod-subpath-test-projected-dwwc
Dec 31 14:44:10.328: INFO: Deleting pod "pod-subpath-test-projected-dwwc" in namespace "subpath-2716"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 14:44:10.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2716" for this suite.
Dec 31 14:44:16.370: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 14:44:16.722: INFO: namespace subpath-2716 deletion completed in 6.385344422s

• [SLOW TEST:39.084 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
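Editor's note: this test mounts a projected volume via subPath and waits for the pod to terminate successfully, verifying that atomic-writer updates remain visible through the subPath. The general shape of such a pod spec (a hedged sketch; the volume source, command, and file names are assumptions, not taken from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-projected-example   # illustrative name
spec:
  restartPolicy: Never
  volumes:
  - name: projected-vol
    projected:
      sources:
      - configMap:
          name: my-configmap     # hypothetical source for the projected volume
  containers:
  - name: test-container-subpath-projected
    image: busybox
    command: ["/bin/sh", "-c", "cat /probe-volume/probe-file"]  # illustrative check
    volumeMounts:
    - name: projected-vol
      mountPath: /probe-volume
      subPath: probe-file        # subPath into the atomically-updated volume
```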
------------------------------
SSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 14:44:16.722: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Dec 31 14:44:25.542: INFO: Successfully updated pod "pod-update-2c583246-938a-4a65-9fc1-5476cea33351"
STEP: verifying the updated pod is in kubernetes
Dec 31 14:44:25.592: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 14:44:25.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2204" for this suite.
Dec 31 14:44:53.634: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 14:44:53.790: INFO: namespace pods-2204 deletion completed in 28.189418508s

• [SLOW TEST:37.068 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
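Editor's note: Pod objects only permit in-place updates to a handful of fields (labels, annotations, container image, activeDeadlineSeconds); this test mutates such a field and re-reads the pod to confirm the change took. A minimal pod showing the kind of mutable label involved (a sketch; the name and values are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-update-example   # illustrative name
  labels:
    time: created            # mutable: the test updates a label like this in place
spec:
  containers:
  - name: nginx
    image: nginx
```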
------------------------------
SS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 14:44:53.791: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-45204045-1f34-4ade-9cb6-e33e40a750d8 in namespace container-probe-273
Dec 31 14:45:01.998: INFO: Started pod busybox-45204045-1f34-4ade-9cb6-e33e40a750d8 in namespace container-probe-273
STEP: checking the pod's current state and verifying that restartCount is present
Dec 31 14:45:02.004: INFO: Initial restart count of pod busybox-45204045-1f34-4ade-9cb6-e33e40a750d8 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 14:49:02.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-273" for this suite.
Dec 31 14:49:08.385: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 14:49:08.537: INFO: namespace container-probe-273 deletion completed in 6.288029568s

• [SLOW TEST:254.747 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
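Editor's note: the probe test creates a busybox pod whose exec liveness probe keeps succeeding, then watches for four minutes to confirm restartCount stays at 0. The shape of such a probe (a sketch; the command matches the test name, timings are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-liveness-example   # illustrative name
spec:
  containers:
  - name: busybox
    image: busybox
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]   # succeeds as long as the file exists
      initialDelaySeconds: 5
      periodSeconds: 5
```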
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 14:49:08.540: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-1541
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-1541
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-1541
Dec 31 14:49:08.820: INFO: Found 0 stateful pods, waiting for 1
Dec 31 14:49:18.833: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Dec 31 14:49:18.838: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1541 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 31 14:49:21.723: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 31 14:49:21.724: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 31 14:49:21.724: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 31 14:49:21.737: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Dec 31 14:49:31.749: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Dec 31 14:49:31.749: INFO: Waiting for statefulset status.replicas updated to 0
Dec 31 14:49:31.776: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999496s
Dec 31 14:49:32.785: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.99007948s
Dec 31 14:49:33.799: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.980636019s
Dec 31 14:49:34.810: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.96632225s
Dec 31 14:49:35.825: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.955565527s
Dec 31 14:49:36.836: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.941082115s
Dec 31 14:49:37.913: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.929803147s
Dec 31 14:49:38.923: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.852528348s
Dec 31 14:49:39.939: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.842857887s
Dec 31 14:49:40.947: INFO: Verifying statefulset ss doesn't scale past 1 for another 826.034979ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-1541
Dec 31 14:49:41.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1541 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 31 14:49:42.815: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Dec 31 14:49:42.815: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 31 14:49:42.815: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 31 14:49:42.826: INFO: Found 1 stateful pods, waiting for 3
Dec 31 14:49:52.851: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 31 14:49:52.851: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 31 14:49:52.852: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 31 14:50:02.846: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 31 14:50:02.846: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 31 14:50:02.846: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Dec 31 14:50:02.866: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1541 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 31 14:50:03.267: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 31 14:50:03.267: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 31 14:50:03.267: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 31 14:50:03.267: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1541 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 31 14:50:03.765: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 31 14:50:03.765: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 31 14:50:03.765: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 31 14:50:03.766: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1541 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 31 14:50:04.709: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 31 14:50:04.709: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 31 14:50:04.709: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 31 14:50:04.709: INFO: Waiting for statefulset status.replicas updated to 0
Dec 31 14:50:04.758: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Dec 31 14:50:04.758: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Dec 31 14:50:04.758: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Dec 31 14:50:04.795: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999573s
Dec 31 14:50:05.807: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.987283587s
Dec 31 14:50:06.816: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.975362709s
Dec 31 14:50:07.831: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.966428722s
Dec 31 14:50:08.845: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.951890327s
Dec 31 14:50:09.900: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.937373352s
Dec 31 14:50:10.930: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.882778991s
Dec 31 14:50:11.944: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.852510838s
Dec 31 14:50:12.965: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.838965799s
Dec 31 14:50:13.979: INFO: Verifying statefulset ss doesn't scale past 3 for another 817.933752ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-1541

Dec 31 14:50:14.991: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1541 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 31 14:50:15.699: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Dec 31 14:50:15.699: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 31 14:50:15.699: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 31 14:50:15.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1541 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 31 14:50:16.059: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Dec 31 14:50:16.059: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 31 14:50:16.059: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 31 14:50:16.060: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1541 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 31 14:50:16.826: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Dec 31 14:50:16.826: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 31 14:50:16.826: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 31 14:50:16.826: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Dec 31 14:50:46.970: INFO: Deleting all statefulset in ns statefulset-1541
Dec 31 14:50:46.975: INFO: Scaling statefulset ss to 0
Dec 31 14:50:46.988: INFO: Waiting for statefulset status.replicas updated to 0
Dec 31 14:50:46.991: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 14:50:47.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1541" for this suite.
Dec 31 14:50:53.152: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 14:50:53.251: INFO: namespace statefulset-1541 deletion completed in 6.20281879s

• [SLOW TEST:104.711 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
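Editor's note: ordered scale-up and reverse-order scale-down is the default StatefulSet behavior (podManagementPolicy: OrderedReady), and an unready pod halts further scaling — which is what the mv of index.html above exploits to flip readiness. A minimal sketch of such a StatefulSet (the nginx image and probe path mirror the kubectl commands in the log, but the details are assumptions):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test
  replicas: 1
  podManagementPolicy: OrderedReady   # default: scale one pod at a time, in order
  selector:
    matchLabels:
      baz: blah
      foo: bar
  template:
    metadata:
      labels:
        baz: blah
        foo: bar
    spec:
      containers:
      - name: nginx
        image: nginx
        readinessProbe:
          httpGet:
            path: /index.html   # moving index.html aside makes the pod unready
            port: 80
```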
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 14:50:53.253: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating all guestbook components
Dec 31 14:50:53.409: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Dec 31 14:50:53.409: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6238'
Dec 31 14:50:53.887: INFO: stderr: ""
Dec 31 14:50:53.887: INFO: stdout: "service/redis-slave created\n"
Dec 31 14:50:53.889: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Dec 31 14:50:53.889: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6238'
Dec 31 14:50:54.549: INFO: stderr: ""
Dec 31 14:50:54.550: INFO: stdout: "service/redis-master created\n"
Dec 31 14:50:54.552: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Dec 31 14:50:54.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6238'
Dec 31 14:50:55.125: INFO: stderr: ""
Dec 31 14:50:55.125: INFO: stdout: "service/frontend created\n"
Dec 31 14:50:55.126: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Dec 31 14:50:55.126: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6238'
Dec 31 14:50:55.623: INFO: stderr: ""
Dec 31 14:50:55.623: INFO: stdout: "deployment.apps/frontend created\n"
Dec 31 14:50:55.623: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Dec 31 14:50:55.623: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6238'
Dec 31 14:50:55.929: INFO: stderr: ""
Dec 31 14:50:55.929: INFO: stdout: "deployment.apps/redis-master created\n"
Dec 31 14:50:55.930: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Dec 31 14:50:55.931: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6238'
Dec 31 14:50:56.417: INFO: stderr: ""
Dec 31 14:50:56.417: INFO: stdout: "deployment.apps/redis-slave created\n"
STEP: validating guestbook app
Dec 31 14:50:56.418: INFO: Waiting for all frontend pods to be Running.
Dec 31 14:51:21.470: INFO: Waiting for frontend to serve content.
Dec 31 14:51:21.742: INFO: Trying to add a new entry to the guestbook.
Dec 31 14:51:21.766: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Dec 31 14:51:21.831: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6238'
Dec 31 14:51:22.051: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 31 14:51:22.051: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Dec 31 14:51:22.052: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6238'
Dec 31 14:51:22.298: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 31 14:51:22.298: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Dec 31 14:51:22.300: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6238'
Dec 31 14:51:22.497: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 31 14:51:22.497: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Dec 31 14:51:22.499: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6238'
Dec 31 14:51:22.715: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 31 14:51:22.715: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Dec 31 14:51:22.717: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6238'
Dec 31 14:51:22.933: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 31 14:51:22.933: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Dec 31 14:51:22.933: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6238'
Dec 31 14:51:23.264: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 31 14:51:23.264: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 14:51:23.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6238" for this suite.
Dec 31 14:52:10.587: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 14:52:10.812: INFO: namespace kubectl-6238 deletion completed in 46.920447763s

• [SLOW TEST:77.559 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 14:52:10.813: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-0099e57e-e939-4b3b-b2e3-651881c7f00d
STEP: Creating a pod to test consume configMaps
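The pod manifest is not echoed in this part of the log; a minimal sketch of what a projected-configMap pod with key-to-path mappings typically looks like (image, command, keys, and paths here are illustrative, not taken from the run):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example     # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["cat", "/etc/projected-configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-map-0099e57e-e939-4b3b-b2e3-651881c7f00d
          items:
          - key: data-2                      # remap the key to a custom path ("with mappings")
            path: path/to/data-2
```

The `items` list is what distinguishes this spec from a plain projected configMap mount: each key is rewritten to an explicit path inside the volume.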
Dec 31 14:52:10.981: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a53ce8ca-394c-404a-a0c2-2aa379999437" in namespace "projected-7318" to be "success or failure"
Dec 31 14:52:10.990: INFO: Pod "pod-projected-configmaps-a53ce8ca-394c-404a-a0c2-2aa379999437": Phase="Pending", Reason="", readiness=false. Elapsed: 9.640795ms
Dec 31 14:52:13.002: INFO: Pod "pod-projected-configmaps-a53ce8ca-394c-404a-a0c2-2aa379999437": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021053608s
Dec 31 14:52:15.011: INFO: Pod "pod-projected-configmaps-a53ce8ca-394c-404a-a0c2-2aa379999437": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030361168s
Dec 31 14:52:17.024: INFO: Pod "pod-projected-configmaps-a53ce8ca-394c-404a-a0c2-2aa379999437": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043456851s
Dec 31 14:52:19.044: INFO: Pod "pod-projected-configmaps-a53ce8ca-394c-404a-a0c2-2aa379999437": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.063088659s
STEP: Saw pod success
Dec 31 14:52:19.044: INFO: Pod "pod-projected-configmaps-a53ce8ca-394c-404a-a0c2-2aa379999437" satisfied condition "success or failure"
Dec 31 14:52:19.053: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-a53ce8ca-394c-404a-a0c2-2aa379999437 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 31 14:52:19.259: INFO: Waiting for pod pod-projected-configmaps-a53ce8ca-394c-404a-a0c2-2aa379999437 to disappear
Dec 31 14:52:19.277: INFO: Pod pod-projected-configmaps-a53ce8ca-394c-404a-a0c2-2aa379999437 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 14:52:19.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7318" for this suite.
Dec 31 14:52:25.321: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 14:52:25.569: INFO: namespace projected-7318 deletion completed in 6.284933127s

• [SLOW TEST:14.757 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 14:52:25.571: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-e0947b90-d8b0-4605-aec4-b37421873e5d
STEP: Creating a pod to test consume configMaps
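The "as non-root" variant runs the consuming container under a non-root UID via a pod security context; a hedged sketch of such a pod (the UID, image, and mount path are assumptions, only the configMap name comes from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example              # illustrative name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                         # non-root UID; actual value not shown in the log
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["cat", "/etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-e0947b90-d8b0-4605-aec4-b37421873e5d
```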
Dec 31 14:52:25.691: INFO: Waiting up to 5m0s for pod "pod-configmaps-b99aa0f6-a750-4387-a12c-74e3f09f0ab7" in namespace "configmap-9390" to be "success or failure"
Dec 31 14:52:25.710: INFO: Pod "pod-configmaps-b99aa0f6-a750-4387-a12c-74e3f09f0ab7": Phase="Pending", Reason="", readiness=false. Elapsed: 18.380627ms
Dec 31 14:52:27.720: INFO: Pod "pod-configmaps-b99aa0f6-a750-4387-a12c-74e3f09f0ab7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02889236s
Dec 31 14:52:29.729: INFO: Pod "pod-configmaps-b99aa0f6-a750-4387-a12c-74e3f09f0ab7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037943762s
Dec 31 14:52:31.740: INFO: Pod "pod-configmaps-b99aa0f6-a750-4387-a12c-74e3f09f0ab7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048526616s
Dec 31 14:52:33.751: INFO: Pod "pod-configmaps-b99aa0f6-a750-4387-a12c-74e3f09f0ab7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.059750728s
Dec 31 14:52:35.760: INFO: Pod "pod-configmaps-b99aa0f6-a750-4387-a12c-74e3f09f0ab7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.068889834s
STEP: Saw pod success
Dec 31 14:52:35.760: INFO: Pod "pod-configmaps-b99aa0f6-a750-4387-a12c-74e3f09f0ab7" satisfied condition "success or failure"
Dec 31 14:52:35.764: INFO: Trying to get logs from node iruya-node pod pod-configmaps-b99aa0f6-a750-4387-a12c-74e3f09f0ab7 container configmap-volume-test: 
STEP: delete the pod
Dec 31 14:52:35.904: INFO: Waiting for pod pod-configmaps-b99aa0f6-a750-4387-a12c-74e3f09f0ab7 to disappear
Dec 31 14:52:35.923: INFO: Pod pod-configmaps-b99aa0f6-a750-4387-a12c-74e3f09f0ab7 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 14:52:35.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9390" for this suite.
Dec 31 14:52:41.959: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 14:52:42.276: INFO: namespace configmap-9390 deletion completed in 6.345569739s

• [SLOW TEST:16.706 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 14:52:42.282: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating Pod
STEP: Waiting for the pod to be running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Dec 31 14:52:52.579: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-9dab5acd-d0dd-4c52-8622-1b036e9a1851 -c busybox-main-container --namespace=emptydir-6760 -- cat /usr/share/volumeshare/shareddata.txt'
Dec 31 14:52:53.176: INFO: stderr: ""
Dec 31 14:52:53.176: INFO: stdout: "Hello from the busy-box sub-container\n"
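The pod name, reader container name, file path, and file content above are all visible in the log; a sketch of the two-container pod they imply, with an emptyDir volume shared between the writer and the reader (the writer container's name, image, and commands are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-sharedvolume-9dab5acd-d0dd-4c52-8622-1b036e9a1851
spec:
  containers:
  - name: busybox-main-container            # reads the shared file (seen in the log)
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: volumeshare
      mountPath: /usr/share/volumeshare
  - name: busybox-sub-container             # writes the shared file; name assumed
    image: busybox
    command: ["/bin/sh", "-c",
      "echo 'Hello from the busy-box sub-container' > /usr/share/volumeshare/shareddata.txt && sleep 3600"]
    volumeMounts:
    - name: volumeshare
      mountPath: /usr/share/volumeshare
  volumes:
  - name: volumeshare
    emptyDir: {}                            # node-local scratch directory shared by both containers
```

Both containers mount the same `emptyDir`, so a write from one is immediately visible to the other, which is exactly what the `kubectl exec ... cat` above verifies.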
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 14:52:53.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6760" for this suite.
Dec 31 14:52:59.204: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 14:52:59.365: INFO: namespace emptydir-6760 deletion completed in 6.179121141s

• [SLOW TEST:17.083 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 14:52:59.366: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-5lld
STEP: Creating a pod to test atomic-volume-subpath
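The subpath pod mounts a single key of a configMap rather than the whole volume; a hedged sketch of the shape such a pod takes (the configMap name, key, image, and paths are assumptions, only the pod and container names come from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-configmap-5lld
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath-configmap-5lld
    image: busybox
    command: ["cat", "/test-volume/test-file"]     # paths assumed
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
      subPath: configmap-key                       # mount one key of the volume, not the root
  volumes:
  - name: test-volume
    configMap:
      name: my-configmap                           # name assumed; not shown in the log
```

The "atomic writer" part of the test exercises how the kubelet swaps configMap contents atomically underneath a `subPath` mount while the pod keeps running.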
Dec 31 14:52:59.517: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-5lld" in namespace "subpath-2491" to be "success or failure"
Dec 31 14:52:59.547: INFO: Pod "pod-subpath-test-configmap-5lld": Phase="Pending", Reason="", readiness=false. Elapsed: 29.767597ms
Dec 31 14:53:01.557: INFO: Pod "pod-subpath-test-configmap-5lld": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039549697s
Dec 31 14:53:03.564: INFO: Pod "pod-subpath-test-configmap-5lld": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046770231s
Dec 31 14:53:05.575: INFO: Pod "pod-subpath-test-configmap-5lld": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057674341s
Dec 31 14:53:07.592: INFO: Pod "pod-subpath-test-configmap-5lld": Phase="Pending", Reason="", readiness=false. Elapsed: 8.07457407s
Dec 31 14:53:09.601: INFO: Pod "pod-subpath-test-configmap-5lld": Phase="Running", Reason="", readiness=true. Elapsed: 10.084017263s
Dec 31 14:53:11.619: INFO: Pod "pod-subpath-test-configmap-5lld": Phase="Running", Reason="", readiness=true. Elapsed: 12.101384313s
Dec 31 14:53:13.640: INFO: Pod "pod-subpath-test-configmap-5lld": Phase="Running", Reason="", readiness=true. Elapsed: 14.122771935s
Dec 31 14:53:15.651: INFO: Pod "pod-subpath-test-configmap-5lld": Phase="Running", Reason="", readiness=true. Elapsed: 16.133881222s
Dec 31 14:53:17.663: INFO: Pod "pod-subpath-test-configmap-5lld": Phase="Running", Reason="", readiness=true. Elapsed: 18.145299284s
Dec 31 14:53:19.675: INFO: Pod "pod-subpath-test-configmap-5lld": Phase="Running", Reason="", readiness=true. Elapsed: 20.157747909s
Dec 31 14:53:21.697: INFO: Pod "pod-subpath-test-configmap-5lld": Phase="Running", Reason="", readiness=true. Elapsed: 22.17985068s
Dec 31 14:53:23.707: INFO: Pod "pod-subpath-test-configmap-5lld": Phase="Running", Reason="", readiness=true. Elapsed: 24.189155338s
Dec 31 14:53:25.722: INFO: Pod "pod-subpath-test-configmap-5lld": Phase="Running", Reason="", readiness=true. Elapsed: 26.204645667s
Dec 31 14:53:27.734: INFO: Pod "pod-subpath-test-configmap-5lld": Phase="Running", Reason="", readiness=true. Elapsed: 28.216877138s
Dec 31 14:53:29.743: INFO: Pod "pod-subpath-test-configmap-5lld": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.225496364s
STEP: Saw pod success
Dec 31 14:53:29.743: INFO: Pod "pod-subpath-test-configmap-5lld" satisfied condition "success or failure"
Dec 31 14:53:29.749: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-5lld container test-container-subpath-configmap-5lld: 
STEP: delete the pod
Dec 31 14:53:29.854: INFO: Waiting for pod pod-subpath-test-configmap-5lld to disappear
Dec 31 14:53:29.924: INFO: Pod pod-subpath-test-configmap-5lld no longer exists
STEP: Deleting pod pod-subpath-test-configmap-5lld
Dec 31 14:53:29.924: INFO: Deleting pod "pod-subpath-test-configmap-5lld" in namespace "subpath-2491"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 14:53:29.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2491" for this suite.
Dec 31 14:53:35.990: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 14:53:36.118: INFO: namespace subpath-2491 deletion completed in 6.162290533s

• [SLOW TEST:36.753 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 14:53:36.123: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-6664.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-6664.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6664.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-6664.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-6664.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6664.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

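The probe commands above build the pod's A-record name by replacing the dots in the pod IP with dashes and appending `<namespace>.pod.cluster.local`. A minimal local sketch of that derivation, using the same awk pipeline (the IP is illustrative; the namespace `dns-6664` comes from the log):

```shell
#!/bin/sh
# Derive the pod A-record name from a pod IP, as the wheezy/jessie probes do.
ip="10.44.0.1"    # illustrative pod IP
podARec=$(echo "$ip" | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-6664.pod.cluster.local"}')
echo "$podARec"
```

Running this prints `10-44-0-1.dns-6664.pod.cluster.local`, the name the probes then resolve over UDP and TCP with `dig`.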
STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 31 14:53:48.375: INFO: Unable to read wheezy_udp@PodARecord from pod dns-6664/dns-test-90067f5e-d0de-4faf-bd93-2805900ddca2: the server could not find the requested resource (get pods dns-test-90067f5e-d0de-4faf-bd93-2805900ddca2)
Dec 31 14:53:48.385: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-6664/dns-test-90067f5e-d0de-4faf-bd93-2805900ddca2: the server could not find the requested resource (get pods dns-test-90067f5e-d0de-4faf-bd93-2805900ddca2)
Dec 31 14:53:48.391: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-6664.svc.cluster.local from pod dns-6664/dns-test-90067f5e-d0de-4faf-bd93-2805900ddca2: the server could not find the requested resource (get pods dns-test-90067f5e-d0de-4faf-bd93-2805900ddca2)
Dec 31 14:53:48.402: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-6664/dns-test-90067f5e-d0de-4faf-bd93-2805900ddca2: the server could not find the requested resource (get pods dns-test-90067f5e-d0de-4faf-bd93-2805900ddca2)
Dec 31 14:53:48.409: INFO: Unable to read jessie_udp@PodARecord from pod dns-6664/dns-test-90067f5e-d0de-4faf-bd93-2805900ddca2: the server could not find the requested resource (get pods dns-test-90067f5e-d0de-4faf-bd93-2805900ddca2)
Dec 31 14:53:48.420: INFO: Unable to read jessie_tcp@PodARecord from pod dns-6664/dns-test-90067f5e-d0de-4faf-bd93-2805900ddca2: the server could not find the requested resource (get pods dns-test-90067f5e-d0de-4faf-bd93-2805900ddca2)
Dec 31 14:53:48.420: INFO: Lookups using dns-6664/dns-test-90067f5e-d0de-4faf-bd93-2805900ddca2 failed for: [wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-1.dns-test-service.dns-6664.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Dec 31 14:53:53.493: INFO: DNS probes using dns-6664/dns-test-90067f5e-d0de-4faf-bd93-2805900ddca2 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 14:53:53.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6664" for this suite.
Dec 31 14:53:59.817: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 14:54:00.046: INFO: namespace dns-6664 deletion completed in 6.289204946s

• [SLOW TEST:23.923 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 14:54:00.047: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 31 14:54:00.173: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Dec 31 14:54:00.208: INFO: Number of nodes with available pods: 0
Dec 31 14:54:00.208: INFO: Number of running nodes: 0, number of available pods: 0
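The DaemonSet manifest is not echoed in the log; a sketch of a DaemonSet with a node selector of the kind this test creates, where pods schedule only once a node carries the matching label (the label keys, values, and image are assumptions):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set          # pod-selector label assumed
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      nodeSelector:
        color: blue                       # only nodes labeled color=blue run the pod; key/value assumed
      containers:
      - name: app
        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1   # image assumed
```

With no node labeled `color=blue`, zero pods run, matching the "Initially, daemon pods should not be running on any nodes" check above.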
STEP: Change node label to blue, check that daemon pod is launched.
Dec 31 14:54:00.267: INFO: Number of nodes with available pods: 0
Dec 31 14:54:00.267: INFO: Node iruya-node is running more than one daemon pod
Dec 31 14:54:01.277: INFO: Number of nodes with available pods: 0
Dec 31 14:54:01.277: INFO: Node iruya-node is running more than one daemon pod
Dec 31 14:54:02.278: INFO: Number of nodes with available pods: 0
Dec 31 14:54:02.278: INFO: Node iruya-node is running more than one daemon pod
Dec 31 14:54:03.278: INFO: Number of nodes with available pods: 0
Dec 31 14:54:03.279: INFO: Node iruya-node is running more than one daemon pod
Dec 31 14:54:04.277: INFO: Number of nodes with available pods: 0
Dec 31 14:54:04.277: INFO: Node iruya-node is running more than one daemon pod
Dec 31 14:54:05.278: INFO: Number of nodes with available pods: 0
Dec 31 14:54:05.278: INFO: Node iruya-node is running more than one daemon pod
Dec 31 14:54:06.284: INFO: Number of nodes with available pods: 0
Dec 31 14:54:06.285: INFO: Node iruya-node is running more than one daemon pod
Dec 31 14:54:07.278: INFO: Number of nodes with available pods: 0
Dec 31 14:54:07.278: INFO: Node iruya-node is running more than one daemon pod
Dec 31 14:54:08.277: INFO: Number of nodes with available pods: 1
Dec 31 14:54:08.277: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Dec 31 14:54:08.338: INFO: Number of nodes with available pods: 1
Dec 31 14:54:08.339: INFO: Number of running nodes: 0, number of available pods: 1
Dec 31 14:54:09.348: INFO: Number of nodes with available pods: 0
Dec 31 14:54:09.348: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
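This step patches two fields at once; a hedged fragment of what the updated DaemonSet spec would contain (the label key and `maxUnavailable` value are assumptions):

```yaml
spec:
  updateStrategy:
    type: RollingUpdate                   # switched from the default OnDelete
    rollingUpdate:
      maxUnavailable: 1                   # value not shown in the log
  template:
    spec:
      nodeSelector:
        color: green                      # selector moved from blue to green; key assumed
```

Changing the selector to `green` makes the `blue` node ineligible again, which is why the pod count drops back to zero until a node is relabeled.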
Dec 31 14:54:09.376: INFO: Number of nodes with available pods: 0
Dec 31 14:54:09.376: INFO: Node iruya-node is running more than one daemon pod
Dec 31 14:54:10.385: INFO: Number of nodes with available pods: 0
Dec 31 14:54:10.385: INFO: Node iruya-node is running more than one daemon pod
Dec 31 14:54:11.387: INFO: Number of nodes with available pods: 0
Dec 31 14:54:11.387: INFO: Node iruya-node is running more than one daemon pod
Dec 31 14:54:12.385: INFO: Number of nodes with available pods: 0
Dec 31 14:54:12.385: INFO: Node iruya-node is running more than one daemon pod
Dec 31 14:54:13.386: INFO: Number of nodes with available pods: 0
Dec 31 14:54:13.386: INFO: Node iruya-node is running more than one daemon pod
Dec 31 14:54:14.387: INFO: Number of nodes with available pods: 0
Dec 31 14:54:14.387: INFO: Node iruya-node is running more than one daemon pod
Dec 31 14:54:15.388: INFO: Number of nodes with available pods: 0
Dec 31 14:54:15.388: INFO: Node iruya-node is running more than one daemon pod
Dec 31 14:54:16.386: INFO: Number of nodes with available pods: 0
Dec 31 14:54:16.386: INFO: Node iruya-node is running more than one daemon pod
Dec 31 14:54:17.389: INFO: Number of nodes with available pods: 0
Dec 31 14:54:17.389: INFO: Node iruya-node is running more than one daemon pod
Dec 31 14:54:18.496: INFO: Number of nodes with available pods: 0
Dec 31 14:54:18.496: INFO: Node iruya-node is running more than one daemon pod
Dec 31 14:54:19.390: INFO: Number of nodes with available pods: 0
Dec 31 14:54:19.390: INFO: Node iruya-node is running more than one daemon pod
Dec 31 14:54:20.387: INFO: Number of nodes with available pods: 0
Dec 31 14:54:20.387: INFO: Node iruya-node is running more than one daemon pod
Dec 31 14:54:21.390: INFO: Number of nodes with available pods: 0
Dec 31 14:54:21.390: INFO: Node iruya-node is running more than one daemon pod
Dec 31 14:54:22.384: INFO: Number of nodes with available pods: 0
Dec 31 14:54:22.384: INFO: Node iruya-node is running more than one daemon pod
Dec 31 14:54:23.387: INFO: Number of nodes with available pods: 0
Dec 31 14:54:23.387: INFO: Node iruya-node is running more than one daemon pod
Dec 31 14:54:24.386: INFO: Number of nodes with available pods: 0
Dec 31 14:54:24.386: INFO: Node iruya-node is running more than one daemon pod
Dec 31 14:54:25.385: INFO: Number of nodes with available pods: 0
Dec 31 14:54:25.385: INFO: Node iruya-node is running more than one daemon pod
Dec 31 14:54:26.389: INFO: Number of nodes with available pods: 0
Dec 31 14:54:26.389: INFO: Node iruya-node is running more than one daemon pod
Dec 31 14:54:27.385: INFO: Number of nodes with available pods: 0
Dec 31 14:54:27.385: INFO: Node iruya-node is running more than one daemon pod
Dec 31 14:54:28.389: INFO: Number of nodes with available pods: 0
Dec 31 14:54:28.389: INFO: Node iruya-node is running more than one daemon pod
Dec 31 14:54:29.403: INFO: Number of nodes with available pods: 0
Dec 31 14:54:29.403: INFO: Node iruya-node is running more than one daemon pod
Dec 31 14:54:30.386: INFO: Number of nodes with available pods: 0
Dec 31 14:54:30.386: INFO: Node iruya-node is running more than one daemon pod
Dec 31 14:54:31.387: INFO: Number of nodes with available pods: 0
Dec 31 14:54:31.387: INFO: Node iruya-node is running more than one daemon pod
Dec 31 14:54:32.404: INFO: Number of nodes with available pods: 0
Dec 31 14:54:32.404: INFO: Node iruya-node is running more than one daemon pod
Dec 31 14:54:33.382: INFO: Number of nodes with available pods: 1
Dec 31 14:54:33.382: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-306, will wait for the garbage collector to delete the pods
Dec 31 14:54:33.452: INFO: Deleting DaemonSet.extensions daemon-set took: 12.928995ms
Dec 31 14:54:33.753: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.875963ms
Dec 31 14:54:40.868: INFO: Number of nodes with available pods: 0
Dec 31 14:54:40.868: INFO: Number of running nodes: 0, number of available pods: 0
Dec 31 14:54:40.874: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-306/daemonsets","resourceVersion":"18784763"},"items":null}

Dec 31 14:54:40.879: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-306/pods","resourceVersion":"18784763"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 14:54:40.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-306" for this suite.
Dec 31 14:54:47.009: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 14:54:47.241: INFO: namespace daemonsets-306 deletion completed in 6.252950799s

• [SLOW TEST:47.194 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
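Editor's note: the DaemonSet spec above spends most of its runtime in the once-per-second poll loop ("Number of nodes with available pods: …") until the desired and available counts match, both when rolling the pods out and again after deletion. A minimal sketch of that bounded poll-until pattern, in Python rather than the framework's Go (`wait_for` and `daemonset_ready` are illustrative names, not the framework's actual helpers):

```python
import time

def wait_for(condition, timeout=30.0, interval=1.0):
    """Poll `condition` every `interval` seconds until it returns True
    or `timeout` seconds elapse. Returns True on success, False on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False

# Simulated DaemonSet status: pods become available on the third poll,
# mirroring the repeated "available pods: 0 ... available pods: 1" lines.
polls = {"n": 0}
def daemonset_ready():
    polls["n"] += 1
    return polls["n"] >= 3

assert wait_for(daemonset_ready, timeout=10.0, interval=0.01)
```

The same loop with the condition inverted (zero available pods) is what the log shows after the delete, once the garbage collector has removed the DaemonSet's pods.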
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 14:54:47.242: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Starting the proxy
Dec 31 14:54:47.286: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix207986426/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 14:54:47.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5019" for this suite.
Dec 31 14:54:53.416: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 14:54:53.556: INFO: namespace kubectl-5019 deletion completed in 6.166000128s

• [SLOW TEST:6.314 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
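Editor's note: `kubectl proxy --unix-socket=/path` serves the same REST API as the TCP proxy, but over a filesystem socket, which is what the spec above verifies by fetching `/api/` through it. A self-contained sketch of HTTP over a Unix domain socket (POSIX only; the echo server and its JSON body are stand-ins, not kubectl's actual responses):

```python
import os
import socket
import socketserver
import tempfile
import threading

SOCKET_PATH = os.path.join(tempfile.mkdtemp(), "proxy.sock")

class Handler(socketserver.StreamRequestHandler):
    def handle(self):
        self.rfile.readline()  # request line, e.g. b"GET /api/ HTTP/1.0"
        self.wfile.write(b'HTTP/1.0 200 OK\r\n\r\n{"versions":["v1"]}')

server = socketserver.UnixStreamServer(SOCKET_PATH, Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: connect to the filesystem path instead of a host:port.
with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as client:
    client.connect(SOCKET_PATH)
    client.sendall(b"GET /api/ HTTP/1.0\r\n\r\n")
    chunks = []
    while True:
        data = client.recv(4096)
        if not data:
            break
        chunks.append(data)
reply = b"".join(chunks)

server.shutdown()
assert b"200 OK" in reply and b"v1" in reply
```

A Unix socket keeps the proxy off the network entirely; access is then governed by file permissions on the socket path rather than by a listening port.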
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 14:54:53.557: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Dec 31 14:54:53.737: INFO: PodSpec: initContainers in spec.initContainers
Dec 31 14:55:58.995: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-759fa33d-d316-4e2d-a14b-931bba9cf3a3", GenerateName:"", Namespace:"init-container-9285", SelfLink:"/api/v1/namespaces/init-container-9285/pods/pod-init-759fa33d-d316-4e2d-a14b-931bba9cf3a3", UID:"d632cf28-0849-4a3d-bc1b-63eb81551089", ResourceVersion:"18784924", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63713400893, loc:(*time.Location)(0x7ea48a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"737289863"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-kd68c", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002d28000), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), 
PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-kd68c", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-kd68c", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", 
SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-kd68c", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00005cec8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-node", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), 
SecurityContext:(*v1.PodSecurityContext)(0xc003044000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00005d760)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00005d790)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc00005d798), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00005d79c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713400894, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713400894, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713400894, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713400893, loc:(*time.Location)(0x7ea48a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.3.65", PodIP:"10.44.0.1", StartTime:(*v1.Time)(0xc00149e080), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000fc4230)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000fc42a0)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://3c815e357564a18a0e5b8955323a0559da7ec56c6048ceb72c1488a3a500fb8a"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00149e0c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00149e0a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 14:55:58.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-9285" for this suite.
Dec 31 14:56:21.053: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 14:56:21.168: INFO: namespace init-container-9285 deletion completed in 22.145875116s

• [SLOW TEST:87.611 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
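Editor's note: the pod dump above shows exactly the behavior under test: `init1` (`/bin/false`) is Terminated with RestartCount 3, `init2` and `run1` are still Waiting, and the pod stays Pending with `ContainersNotInitialized`. A pure-Python sketch of that kubelet ordering rule, i.e. init containers run sequentially and app containers never start while one is failing (`run_pod_once` is an illustrative model, not kubelet code):

```python
def run_pod_once(init_containers, app_containers):
    """One sync pass: run init containers in order and stop at the first
    failure. App containers start only after every init container succeeds.
    Under restartPolicy Always/OnFailure the failed init container is
    retried on the next pass; the pod stays Pending meanwhile."""
    for name, succeeds in init_containers:
        if not succeeds:
            return {"phase": "Pending", "failed_init": name, "apps_started": []}
    return {"phase": "Running", "failed_init": None,
            "apps_started": [name for name, _ in app_containers]}

# Mirrors the test pod: init1 runs /bin/false, init2 /bin/true.
status = run_pod_once([("init1", False), ("init2", True)], [("run1", True)])
assert status["phase"] == "Pending"
assert status["failed_init"] == "init1"
assert status["apps_started"] == []  # run1 must never start
```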
SSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 14:56:21.169: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-2876
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 31 14:56:21.349: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 31 14:56:59.672: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.44.0.1&port=8081&tries=1'] Namespace:pod-network-test-2876 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 31 14:56:59.672: INFO: >>> kubeConfig: /root/.kube/config
Dec 31 14:57:00.193: INFO: Waiting for endpoints: map[]
Dec 31 14:57:00.200: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:pod-network-test-2876 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 31 14:57:00.200: INFO: >>> kubeConfig: /root/.kube/config
Dec 31 14:57:01.338: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 14:57:01.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-2876" for this suite.
Dec 31 14:57:25.411: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 14:57:25.579: INFO: namespace pod-network-test-2876 deletion completed in 24.228227923s

• [SLOW TEST:64.411 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
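Editor's note: the `curl` commands logged above hit the test webserver's `/dial` endpoint, asking it to dial each target pod over UDP and report the hostnames it reached; "Waiting for endpoints: map[]" means every expected endpoint answered. A sketch of how that dial URL is assembled (the helper name `dial_url` is illustrative; the query parameters match the logged requests):

```python
from urllib.parse import urlencode, urlsplit, parse_qs

def dial_url(webserver_ip, target_ip, protocol="udp", port=8081, tries=1):
    """Build the /dial URL the host-test container curls: the webserver pod
    at webserver_ip:8080 dials target_ip:port over `protocol` and returns
    the hostnames that responded."""
    query = urlencode({"request": "hostName", "protocol": protocol,
                       "host": target_ip, "port": port, "tries": tries})
    return f"http://{webserver_ip}:8080/dial?{query}"

# Mirrors the second logged attempt: dial 10.32.0.4:8081 over UDP.
url = dial_url("10.44.0.2", "10.32.0.4")
params = parse_qs(urlsplit(url).query)
assert params["protocol"] == ["udp"] and params["host"] == ["10.32.0.4"]
```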
SSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 14:57:25.580: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override all
Dec 31 14:57:25.719: INFO: Waiting up to 5m0s for pod "client-containers-77f465b5-d4fd-4d71-b515-c25c62d363da" in namespace "containers-1059" to be "success or failure"
Dec 31 14:57:25.725: INFO: Pod "client-containers-77f465b5-d4fd-4d71-b515-c25c62d363da": Phase="Pending", Reason="", readiness=false. Elapsed: 5.804287ms
Dec 31 14:57:27.735: INFO: Pod "client-containers-77f465b5-d4fd-4d71-b515-c25c62d363da": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015440115s
Dec 31 14:57:29.743: INFO: Pod "client-containers-77f465b5-d4fd-4d71-b515-c25c62d363da": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023500077s
Dec 31 14:57:31.758: INFO: Pod "client-containers-77f465b5-d4fd-4d71-b515-c25c62d363da": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038958296s
Dec 31 14:57:33.768: INFO: Pod "client-containers-77f465b5-d4fd-4d71-b515-c25c62d363da": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.048374394s
STEP: Saw pod success
Dec 31 14:57:33.768: INFO: Pod "client-containers-77f465b5-d4fd-4d71-b515-c25c62d363da" satisfied condition "success or failure"
Dec 31 14:57:33.787: INFO: Trying to get logs from node iruya-node pod client-containers-77f465b5-d4fd-4d71-b515-c25c62d363da container test-container: 
STEP: delete the pod
Dec 31 14:57:33.937: INFO: Waiting for pod client-containers-77f465b5-d4fd-4d71-b515-c25c62d363da to disappear
Dec 31 14:57:33.952: INFO: Pod client-containers-77f465b5-d4fd-4d71-b515-c25c62d363da no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 14:57:33.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-1059" for this suite.
Dec 31 14:57:39.985: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 14:57:40.323: INFO: namespace containers-1059 deletion completed in 6.363190547s

• [SLOW TEST:14.743 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
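Editor's note: the "override all" pod above sets both `command` and `args`, replacing the image's ENTRYPOINT and CMD. The four-way interaction between the image defaults and the pod spec fields can be sketched as a pure function (`effective_invocation` is an illustrative name; the rules themselves are the documented Kubernetes semantics):

```python
def effective_invocation(image_entrypoint, image_cmd, command=None, args=None):
    """What actually runs in the container:
    - neither set:            image ENTRYPOINT + image CMD
    - command only:           command (image CMD is ignored)
    - args only:              image ENTRYPOINT + args
    - both set ("override all"): command + args
    """
    if command is not None and args is not None:
        return command + args
    if command is not None:
        return command
    if args is not None:
        return image_entrypoint + args
    return image_entrypoint + image_cmd

# "Override all", as in the test pod above.
result = effective_invocation(["/entrypoint"], ["default-arg"],
                              command=["/bin/sh", "-c"],
                              args=["echo override"])
assert result == ["/bin/sh", "-c", "echo override"]
```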
SS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 14:57:40.324: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-7e2b17d4-7f28-4c2c-bed2-0802321e4e3f
STEP: Creating a pod to test consume secrets
Dec 31 14:57:40.522: INFO: Waiting up to 5m0s for pod "pod-secrets-d5185dc5-6853-4a54-88bc-5834687b41c1" in namespace "secrets-9543" to be "success or failure"
Dec 31 14:57:40.563: INFO: Pod "pod-secrets-d5185dc5-6853-4a54-88bc-5834687b41c1": Phase="Pending", Reason="", readiness=false. Elapsed: 40.257917ms
Dec 31 14:57:42.750: INFO: Pod "pod-secrets-d5185dc5-6853-4a54-88bc-5834687b41c1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.227133346s
Dec 31 14:57:44.757: INFO: Pod "pod-secrets-d5185dc5-6853-4a54-88bc-5834687b41c1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.234578808s
Dec 31 14:57:46.764: INFO: Pod "pod-secrets-d5185dc5-6853-4a54-88bc-5834687b41c1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.241276326s
Dec 31 14:57:48.773: INFO: Pod "pod-secrets-d5185dc5-6853-4a54-88bc-5834687b41c1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.250817403s
Dec 31 14:57:50.781: INFO: Pod "pod-secrets-d5185dc5-6853-4a54-88bc-5834687b41c1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.258358921s
STEP: Saw pod success
Dec 31 14:57:50.781: INFO: Pod "pod-secrets-d5185dc5-6853-4a54-88bc-5834687b41c1" satisfied condition "success or failure"
Dec 31 14:57:50.785: INFO: Trying to get logs from node iruya-node pod pod-secrets-d5185dc5-6853-4a54-88bc-5834687b41c1 container secret-volume-test: 
STEP: delete the pod
Dec 31 14:57:50.875: INFO: Waiting for pod pod-secrets-d5185dc5-6853-4a54-88bc-5834687b41c1 to disappear
Dec 31 14:57:50.885: INFO: Pod pod-secrets-d5185dc5-6853-4a54-88bc-5834687b41c1 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 14:57:50.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9543" for this suite.
Dec 31 14:57:57.028: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 14:57:57.141: INFO: namespace secrets-9543 deletion completed in 6.247040706s

• [SLOW TEST:16.818 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
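Editor's note: "consumable in multiple volumes" means the same Secret is declared as two separate `secret` volumes and mounted at two paths in one container. A sketch of such a manifest built as plain data (the container name mirrors the log; the secret name and mount paths are placeholders):

```python
def pod_with_secret_volumes(secret_name, mount_paths):
    """Pod manifest mounting one Secret at several paths via distinct volumes."""
    volumes = [{"name": f"secret-volume-{i}",
                "secret": {"secretName": secret_name}}
               for i, _ in enumerate(mount_paths)]
    mounts = [{"name": f"secret-volume-{i}", "mountPath": path, "readOnly": True}
              for i, path in enumerate(mount_paths)]
    return {"apiVersion": "v1", "kind": "Pod",
            "metadata": {"generateName": "pod-secrets-"},
            "spec": {"containers": [{"name": "secret-volume-test",
                                     "image": "busybox:1.29",
                                     "volumeMounts": mounts}],
                     "volumes": volumes}}

pod = pod_with_secret_volumes("my-secret",
                              ["/etc/secret-volume-1", "/etc/secret-volume-2"])
assert len(pod["spec"]["volumes"]) == 2
assert all(v["secret"]["secretName"] == "my-secret"
           for v in pod["spec"]["volumes"])
```

Each volume gets its own name even though both reference the same Secret, because `volumeMounts` bind by volume name, not by source.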
SSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 14:57:57.142: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-6851eafa-08d6-4319-832e-ba397a7d0625
STEP: Creating a pod to test consume configMaps
Dec 31 14:57:57.325: INFO: Waiting up to 5m0s for pod "pod-configmaps-1a6dc01f-6bfb-4db6-be7a-6b6ab807a076" in namespace "configmap-8142" to be "success or failure"
Dec 31 14:57:57.364: INFO: Pod "pod-configmaps-1a6dc01f-6bfb-4db6-be7a-6b6ab807a076": Phase="Pending", Reason="", readiness=false. Elapsed: 38.824336ms
Dec 31 14:57:59.383: INFO: Pod "pod-configmaps-1a6dc01f-6bfb-4db6-be7a-6b6ab807a076": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057419229s
Dec 31 14:58:01.392: INFO: Pod "pod-configmaps-1a6dc01f-6bfb-4db6-be7a-6b6ab807a076": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067169553s
Dec 31 14:58:03.408: INFO: Pod "pod-configmaps-1a6dc01f-6bfb-4db6-be7a-6b6ab807a076": Phase="Pending", Reason="", readiness=false. Elapsed: 6.082467784s
Dec 31 14:58:05.414: INFO: Pod "pod-configmaps-1a6dc01f-6bfb-4db6-be7a-6b6ab807a076": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.089116872s
STEP: Saw pod success
Dec 31 14:58:05.414: INFO: Pod "pod-configmaps-1a6dc01f-6bfb-4db6-be7a-6b6ab807a076" satisfied condition "success or failure"
Dec 31 14:58:05.420: INFO: Trying to get logs from node iruya-node pod pod-configmaps-1a6dc01f-6bfb-4db6-be7a-6b6ab807a076 container configmap-volume-test: 
STEP: delete the pod
Dec 31 14:58:05.498: INFO: Waiting for pod pod-configmaps-1a6dc01f-6bfb-4db6-be7a-6b6ab807a076 to disappear
Dec 31 14:58:05.511: INFO: Pod pod-configmaps-1a6dc01f-6bfb-4db6-be7a-6b6ab807a076 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 14:58:05.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8142" for this suite.
Dec 31 14:58:11.549: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 14:58:11.695: INFO: namespace configmap-8142 deletion completed in 6.17524284s

• [SLOW TEST:14.554 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
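Editor's note: the ConfigMap variant of the multi-volume test works the same way as the Secret one; the only structural difference in the volume source is `configMap.name` (plus an optional `items` list projecting selected keys to paths). A sketch of that volume source (names here are placeholders; `items` with `key`/`path` entries is the real API shape):

```python
def configmap_volume(volume_name, configmap_name, items=None):
    """A configMap volume source; `items` optionally projects selected
    keys to relative paths inside the mount instead of exposing all keys."""
    source = {"name": configmap_name}
    if items:
        source["items"] = [{"key": key, "path": path} for key, path in items]
    return {"name": volume_name, "configMap": source}

# Whole ConfigMap mounted as-is:
whole = configmap_volume("configmap-volume-1", "configmap-test-volume")
assert "items" not in whole["configMap"]

# Only one key, remapped to a nested path:
partial = configmap_volume("configmap-volume-2", "configmap-test-volume",
                           items=[("data-1", "path/to/data-1")])
assert partial["configMap"]["items"][0]["path"] == "path/to/data-1"
```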
SSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 14:58:11.697: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-p9g5w in namespace proxy-3267
I1231 14:58:11.945445       8 runners.go:180] Created replication controller with name: proxy-service-p9g5w, namespace: proxy-3267, replica count: 1
I1231 14:58:12.996603       8 runners.go:180] proxy-service-p9g5w Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1231 14:58:13.996969       8 runners.go:180] proxy-service-p9g5w Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1231 14:58:14.997620       8 runners.go:180] proxy-service-p9g5w Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1231 14:58:15.998297       8 runners.go:180] proxy-service-p9g5w Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1231 14:58:16.998851       8 runners.go:180] proxy-service-p9g5w Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1231 14:58:17.999325       8 runners.go:180] proxy-service-p9g5w Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1231 14:58:18.999687       8 runners.go:180] proxy-service-p9g5w Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1231 14:58:20.000457       8 runners.go:180] proxy-service-p9g5w Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1231 14:58:21.001447       8 runners.go:180] proxy-service-p9g5w Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Dec 31 14:58:21.039: INFO: setup took 9.198481626s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Dec 31 14:58:21.058: INFO: (0) /api/v1/namespaces/proxy-3267/pods/http:proxy-service-p9g5w-6z7ns:1080/proxy/: ... (200; 18.707405ms)
Dec 31 14:58:21.059: INFO: (0) /api/v1/namespaces/proxy-3267/pods/proxy-service-p9g5w-6z7ns:162/proxy/: bar (200; 19.208189ms)
Dec 31 14:58:21.059: INFO: (0) /api/v1/namespaces/proxy-3267/services/http:proxy-service-p9g5w:portname2/proxy/: bar (200; 18.973658ms)
Dec 31 14:58:21.059: INFO: (0) /api/v1/namespaces/proxy-3267/pods/http:proxy-service-p9g5w-6z7ns:162/proxy/: bar (200; 19.501505ms)
Dec 31 14:58:21.059: INFO: (0) /api/v1/namespaces/proxy-3267/pods/proxy-service-p9g5w-6z7ns/proxy/: test (200; 19.164018ms)
Dec 31 14:58:21.059: INFO: (0) /api/v1/namespaces/proxy-3267/pods/proxy-service-p9g5w-6z7ns:1080/proxy/: test<... (200; 19.934806ms)
Dec 31 14:58:21.059: INFO: (0) /api/v1/namespaces/proxy-3267/services/proxy-service-p9g5w:portname2/proxy/: bar (200; 19.422851ms)
Dec 31 14:58:21.061: INFO: (0) /api/v1/namespaces/proxy-3267/pods/http:proxy-service-p9g5w-6z7ns:160/proxy/: foo (200; 21.231451ms)
Dec 31 14:58:21.061: INFO: (0) /api/v1/namespaces/proxy-3267/services/proxy-service-p9g5w:portname1/proxy/: foo (200; 21.44308ms)
Dec 31 14:58:21.061: INFO: (0) /api/v1/namespaces/proxy-3267/services/http:proxy-service-p9g5w:portname1/proxy/: foo (200; 21.852614ms)
Dec 31 14:58:21.062: INFO: (0) /api/v1/namespaces/proxy-3267/pods/proxy-service-p9g5w-6z7ns:160/proxy/: foo (200; 21.833603ms)
Dec 31 14:58:21.070: INFO: (0) /api/v1/namespaces/proxy-3267/services/https:proxy-service-p9g5w:tlsportname1/proxy/: tls baz (200; 31.177671ms)
Dec 31 14:58:21.072: INFO: (0) /api/v1/namespaces/proxy-3267/services/https:proxy-service-p9g5w:tlsportname2/proxy/: tls qux (200; 32.332097ms)
Dec 31 14:58:21.072: INFO: (0) /api/v1/namespaces/proxy-3267/pods/https:proxy-service-p9g5w-6z7ns:462/proxy/: tls qux (200; 31.833424ms)
Dec 31 14:58:21.072: INFO: (0) /api/v1/namespaces/proxy-3267/pods/https:proxy-service-p9g5w-6z7ns:460/proxy/: tls baz (200; 32.377506ms)
Dec 31 14:58:21.073: INFO: (0) /api/v1/namespaces/proxy-3267/pods/https:proxy-service-p9g5w-6z7ns:443/proxy/: test<... (200; 8.225119ms)
Dec 31 14:58:21.084: INFO: (1) /api/v1/namespaces/proxy-3267/pods/http:proxy-service-p9g5w-6z7ns:160/proxy/: foo (200; 10.684376ms)
Dec 31 14:58:21.084: INFO: (1) /api/v1/namespaces/proxy-3267/pods/proxy-service-p9g5w-6z7ns/proxy/: test (200; 11.040386ms)
Dec 31 14:58:21.085: INFO: (1) /api/v1/namespaces/proxy-3267/pods/https:proxy-service-p9g5w-6z7ns:460/proxy/: tls baz (200; 11.958501ms)
Dec 31 14:58:21.085: INFO: (1) /api/v1/namespaces/proxy-3267/pods/http:proxy-service-p9g5w-6z7ns:162/proxy/: bar (200; 12.393751ms)
Dec 31 14:58:21.085: INFO: (1) /api/v1/namespaces/proxy-3267/pods/http:proxy-service-p9g5w-6z7ns:1080/proxy/: ... (200; 12.20759ms)
Dec 31 14:58:21.087: INFO: (1) /api/v1/namespaces/proxy-3267/pods/https:proxy-service-p9g5w-6z7ns:462/proxy/: tls qux (200; 13.828571ms)
Dec 31 14:58:21.087: INFO: (1) /api/v1/namespaces/proxy-3267/pods/proxy-service-p9g5w-6z7ns:160/proxy/: foo (200; 14.134899ms)
Dec 31 14:58:21.088: INFO: (1) /api/v1/namespaces/proxy-3267/services/https:proxy-service-p9g5w:tlsportname1/proxy/: tls baz (200; 14.813964ms)
Dec 31 14:58:21.089: INFO: (1) /api/v1/namespaces/proxy-3267/services/proxy-service-p9g5w:portname1/proxy/: foo (200; 15.409793ms)
Dec 31 14:58:21.089: INFO: (1) /api/v1/namespaces/proxy-3267/services/https:proxy-service-p9g5w:tlsportname2/proxy/: tls qux (200; 15.522433ms)
Dec 31 14:58:21.089: INFO: (1) /api/v1/namespaces/proxy-3267/services/http:proxy-service-p9g5w:portname1/proxy/: foo (200; 15.835041ms)
Dec 31 14:58:21.089: INFO: (1) /api/v1/namespaces/proxy-3267/services/proxy-service-p9g5w:portname2/proxy/: bar (200; 15.546985ms)
Dec 31 14:58:21.089: INFO: (1) /api/v1/namespaces/proxy-3267/services/http:proxy-service-p9g5w:portname2/proxy/: bar (200; 15.838661ms)
Dec 31 14:58:21.097: INFO: (2) /api/v1/namespaces/proxy-3267/pods/http:proxy-service-p9g5w-6z7ns:162/proxy/: bar (200; 6.840856ms)
Dec 31 14:58:21.097: INFO: (2) /api/v1/namespaces/proxy-3267/services/https:proxy-service-p9g5w:tlsportname1/proxy/: tls baz (200; 8.654049ms)
Dec 31 14:58:21.098: INFO: (2) /api/v1/namespaces/proxy-3267/pods/http:proxy-service-p9g5w-6z7ns:1080/proxy/: ... (200; 7.983915ms)
Dec 31 14:58:21.098: INFO: (2) /api/v1/namespaces/proxy-3267/pods/https:proxy-service-p9g5w-6z7ns:460/proxy/: tls baz (200; 7.321945ms)
Dec 31 14:58:21.098: INFO: (2) /api/v1/namespaces/proxy-3267/pods/https:proxy-service-p9g5w-6z7ns:443/proxy/: test<... (200; 8.700066ms)
Dec 31 14:58:21.099: INFO: (2) /api/v1/namespaces/proxy-3267/pods/proxy-service-p9g5w-6z7ns:162/proxy/: bar (200; 9.275763ms)
Dec 31 14:58:21.099: INFO: (2) /api/v1/namespaces/proxy-3267/pods/proxy-service-p9g5w-6z7ns/proxy/: test (200; 9.165003ms)
Dec 31 14:58:21.099: INFO: (2) /api/v1/namespaces/proxy-3267/pods/proxy-service-p9g5w-6z7ns:160/proxy/: foo (200; 8.454075ms)
Dec 31 14:58:21.100: INFO: (2) /api/v1/namespaces/proxy-3267/services/https:proxy-service-p9g5w:tlsportname2/proxy/: tls qux (200; 9.212311ms)
Dec 31 14:58:21.101: INFO: (2) /api/v1/namespaces/proxy-3267/services/proxy-service-p9g5w:portname1/proxy/: foo (200; 10.198527ms)
Dec 31 14:58:21.103: INFO: (2) /api/v1/namespaces/proxy-3267/services/http:proxy-service-p9g5w:portname1/proxy/: foo (200; 13.418798ms)
Dec 31 14:58:21.103: INFO: (2) /api/v1/namespaces/proxy-3267/services/proxy-service-p9g5w:portname2/proxy/: bar (200; 12.868393ms)
Dec 31 14:58:21.104: INFO: (2) /api/v1/namespaces/proxy-3267/services/http:proxy-service-p9g5w:portname2/proxy/: bar (200; 13.427385ms)
Dec 31 14:58:21.110: INFO: (3) /api/v1/namespaces/proxy-3267/pods/http:proxy-service-p9g5w-6z7ns:1080/proxy/: ... (200; 5.99203ms)
Dec 31 14:58:21.111: INFO: (3) /api/v1/namespaces/proxy-3267/pods/proxy-service-p9g5w-6z7ns:160/proxy/: foo (200; 7.238934ms)
Dec 31 14:58:21.111: INFO: (3) /api/v1/namespaces/proxy-3267/pods/proxy-service-p9g5w-6z7ns:162/proxy/: bar (200; 7.453606ms)
Dec 31 14:58:21.111: INFO: (3) /api/v1/namespaces/proxy-3267/pods/http:proxy-service-p9g5w-6z7ns:160/proxy/: foo (200; 7.622567ms)
Dec 31 14:58:21.111: INFO: (3) /api/v1/namespaces/proxy-3267/pods/proxy-service-p9g5w-6z7ns/proxy/: test (200; 7.354817ms)
Dec 31 14:58:21.112: INFO: (3) /api/v1/namespaces/proxy-3267/pods/https:proxy-service-p9g5w-6z7ns:462/proxy/: tls qux (200; 7.432921ms)
Dec 31 14:58:21.112: INFO: (3) /api/v1/namespaces/proxy-3267/pods/https:proxy-service-p9g5w-6z7ns:443/proxy/: test<... (200; 7.767817ms)
Dec 31 14:58:21.112: INFO: (3) /api/v1/namespaces/proxy-3267/pods/http:proxy-service-p9g5w-6z7ns:162/proxy/: bar (200; 8.372256ms)
Dec 31 14:58:21.114: INFO: (3) /api/v1/namespaces/proxy-3267/services/proxy-service-p9g5w:portname1/proxy/: foo (200; 10.724147ms)
Dec 31 14:58:21.115: INFO: (3) /api/v1/namespaces/proxy-3267/services/proxy-service-p9g5w:portname2/proxy/: bar (200; 11.594603ms)
Dec 31 14:58:21.116: INFO: (3) /api/v1/namespaces/proxy-3267/services/https:proxy-service-p9g5w:tlsportname1/proxy/: tls baz (200; 11.999547ms)
Dec 31 14:58:21.116: INFO: (3) /api/v1/namespaces/proxy-3267/services/https:proxy-service-p9g5w:tlsportname2/proxy/: tls qux (200; 11.947896ms)
Dec 31 14:58:21.117: INFO: (3) /api/v1/namespaces/proxy-3267/services/http:proxy-service-p9g5w:portname2/proxy/: bar (200; 13.055583ms)
Dec 31 14:58:21.118: INFO: (3) /api/v1/namespaces/proxy-3267/services/http:proxy-service-p9g5w:portname1/proxy/: foo (200; 13.494734ms)
Dec 31 14:58:21.121: INFO: (4) /api/v1/namespaces/proxy-3267/pods/https:proxy-service-p9g5w-6z7ns:462/proxy/: tls qux (200; 3.504416ms)
Dec 31 14:58:21.123: INFO: (4) /api/v1/namespaces/proxy-3267/services/https:proxy-service-p9g5w:tlsportname1/proxy/: tls baz (200; 5.361266ms)
Dec 31 14:58:21.123: INFO: (4) /api/v1/namespaces/proxy-3267/pods/http:proxy-service-p9g5w-6z7ns:1080/proxy/: ... (200; 5.32287ms)
Dec 31 14:58:21.123: INFO: (4) /api/v1/namespaces/proxy-3267/pods/proxy-service-p9g5w-6z7ns/proxy/: test (200; 5.624267ms)
Dec 31 14:58:21.124: INFO: (4) /api/v1/namespaces/proxy-3267/pods/https:proxy-service-p9g5w-6z7ns:460/proxy/: tls baz (200; 5.706463ms)
Dec 31 14:58:21.124: INFO: (4) /api/v1/namespaces/proxy-3267/pods/proxy-service-p9g5w-6z7ns:160/proxy/: foo (200; 6.246902ms)
Dec 31 14:58:21.125: INFO: (4) /api/v1/namespaces/proxy-3267/pods/proxy-service-p9g5w-6z7ns:162/proxy/: bar (200; 7.168349ms)
Dec 31 14:58:21.128: INFO: (4) /api/v1/namespaces/proxy-3267/services/http:proxy-service-p9g5w:portname2/proxy/: bar (200; 10.308913ms)
Dec 31 14:58:21.129: INFO: (4) /api/v1/namespaces/proxy-3267/services/https:proxy-service-p9g5w:tlsportname2/proxy/: tls qux (200; 10.750875ms)
Dec 31 14:58:21.129: INFO: (4) /api/v1/namespaces/proxy-3267/pods/https:proxy-service-p9g5w-6z7ns:443/proxy/: test<... (200; 12.376995ms)
Dec 31 14:58:21.131: INFO: (4) /api/v1/namespaces/proxy-3267/pods/http:proxy-service-p9g5w-6z7ns:160/proxy/: foo (200; 12.99553ms)
Dec 31 14:58:21.131: INFO: (4) /api/v1/namespaces/proxy-3267/services/proxy-service-p9g5w:portname1/proxy/: foo (200; 13.566348ms)
Dec 31 14:58:21.132: INFO: (4) /api/v1/namespaces/proxy-3267/services/proxy-service-p9g5w:portname2/proxy/: bar (200; 13.691397ms)
Dec 31 14:58:21.132: INFO: (4) /api/v1/namespaces/proxy-3267/pods/http:proxy-service-p9g5w-6z7ns:162/proxy/: bar (200; 14.047049ms)
Dec 31 14:58:21.133: INFO: (4) /api/v1/namespaces/proxy-3267/services/http:proxy-service-p9g5w:portname1/proxy/: foo (200; 14.986305ms)
Dec 31 14:58:21.144: INFO: (5) /api/v1/namespaces/proxy-3267/pods/https:proxy-service-p9g5w-6z7ns:460/proxy/: tls baz (200; 10.945781ms)
Dec 31 14:58:21.145: INFO: (5) /api/v1/namespaces/proxy-3267/pods/http:proxy-service-p9g5w-6z7ns:1080/proxy/: ... (200; 11.723773ms)
Dec 31 14:58:21.146: INFO: (5) /api/v1/namespaces/proxy-3267/pods/proxy-service-p9g5w-6z7ns:162/proxy/: bar (200; 12.89357ms)
Dec 31 14:58:21.146: INFO: (5) /api/v1/namespaces/proxy-3267/pods/proxy-service-p9g5w-6z7ns:1080/proxy/: test<... (200; 13.322639ms)
Dec 31 14:58:21.146: INFO: (5) /api/v1/namespaces/proxy-3267/services/http:proxy-service-p9g5w:portname1/proxy/: foo (200; 13.07463ms)
Dec 31 14:58:21.147: INFO: (5) /api/v1/namespaces/proxy-3267/pods/proxy-service-p9g5w-6z7ns:160/proxy/: foo (200; 13.31888ms)
Dec 31 14:58:21.147: INFO: (5) /api/v1/namespaces/proxy-3267/pods/https:proxy-service-p9g5w-6z7ns:443/proxy/: test (200; 13.329266ms)
Dec 31 14:58:21.148: INFO: (5) /api/v1/namespaces/proxy-3267/services/proxy-service-p9g5w:portname1/proxy/: foo (200; 14.539774ms)
Dec 31 14:58:21.148: INFO: (5) /api/v1/namespaces/proxy-3267/services/http:proxy-service-p9g5w:portname2/proxy/: bar (200; 14.339135ms)
Dec 31 14:58:21.148: INFO: (5) /api/v1/namespaces/proxy-3267/services/https:proxy-service-p9g5w:tlsportname2/proxy/: tls qux (200; 14.892285ms)
Dec 31 14:58:21.148: INFO: (5) /api/v1/namespaces/proxy-3267/pods/http:proxy-service-p9g5w-6z7ns:160/proxy/: foo (200; 14.552564ms)
Dec 31 14:58:21.152: INFO: (5) /api/v1/namespaces/proxy-3267/services/proxy-service-p9g5w:portname2/proxy/: bar (200; 18.582388ms)
Dec 31 14:58:21.153: INFO: (5) /api/v1/namespaces/proxy-3267/services/https:proxy-service-p9g5w:tlsportname1/proxy/: tls baz (200; 19.432423ms)
Dec 31 14:58:21.166: INFO: (6) /api/v1/namespaces/proxy-3267/pods/proxy-service-p9g5w-6z7ns:160/proxy/: foo (200; 13.221345ms)
Dec 31 14:58:21.166: INFO: (6) /api/v1/namespaces/proxy-3267/pods/proxy-service-p9g5w-6z7ns:1080/proxy/: test<... (200; 13.603233ms)
Dec 31 14:58:21.167: INFO: (6) /api/v1/namespaces/proxy-3267/pods/http:proxy-service-p9g5w-6z7ns:162/proxy/: bar (200; 13.925093ms)
Dec 31 14:58:21.167: INFO: (6) /api/v1/namespaces/proxy-3267/pods/proxy-service-p9g5w-6z7ns/proxy/: test (200; 13.650409ms)
Dec 31 14:58:21.167: INFO: (6) /api/v1/namespaces/proxy-3267/pods/proxy-service-p9g5w-6z7ns:162/proxy/: bar (200; 13.823215ms)
Dec 31 14:58:21.167: INFO: (6) /api/v1/namespaces/proxy-3267/pods/http:proxy-service-p9g5w-6z7ns:160/proxy/: foo (200; 13.706206ms)
Dec 31 14:58:21.169: INFO: (6) /api/v1/namespaces/proxy-3267/pods/http:proxy-service-p9g5w-6z7ns:1080/proxy/: ... (200; 16.253852ms)
Dec 31 14:58:21.169: INFO: (6) /api/v1/namespaces/proxy-3267/pods/https:proxy-service-p9g5w-6z7ns:462/proxy/: tls qux (200; 16.24636ms)
Dec 31 14:58:21.170: INFO: (6) /api/v1/namespaces/proxy-3267/pods/https:proxy-service-p9g5w-6z7ns:460/proxy/: tls baz (200; 16.852625ms)
Dec 31 14:58:21.172: INFO: (6) /api/v1/namespaces/proxy-3267/pods/https:proxy-service-p9g5w-6z7ns:443/proxy/: test (200; 9.657883ms)
Dec 31 14:58:21.190: INFO: (7) /api/v1/namespaces/proxy-3267/pods/proxy-service-p9g5w-6z7ns:1080/proxy/: test<... (200; 9.74004ms)
Dec 31 14:58:21.190: INFO: (7) /api/v1/namespaces/proxy-3267/pods/http:proxy-service-p9g5w-6z7ns:1080/proxy/: ... (200; 9.77835ms)
Dec 31 14:58:21.190: INFO: (7) /api/v1/namespaces/proxy-3267/services/https:proxy-service-p9g5w:tlsportname1/proxy/: tls baz (200; 10.047617ms)
Dec 31 14:58:21.193: INFO: (7) /api/v1/namespaces/proxy-3267/pods/https:proxy-service-p9g5w-6z7ns:460/proxy/: tls baz (200; 12.795308ms)
Dec 31 14:58:21.193: INFO: (7) /api/v1/namespaces/proxy-3267/pods/https:proxy-service-p9g5w-6z7ns:443/proxy/: test<... (200; 17.216675ms)
Dec 31 14:58:21.223: INFO: (8) /api/v1/namespaces/proxy-3267/pods/proxy-service-p9g5w-6z7ns/proxy/: test (200; 16.833712ms)
Dec 31 14:58:21.223: INFO: (8) /api/v1/namespaces/proxy-3267/pods/http:proxy-service-p9g5w-6z7ns:160/proxy/: foo (200; 17.279916ms)
Dec 31 14:58:21.223: INFO: (8) /api/v1/namespaces/proxy-3267/pods/proxy-service-p9g5w-6z7ns:162/proxy/: bar (200; 17.221803ms)
Dec 31 14:58:21.223: INFO: (8) /api/v1/namespaces/proxy-3267/pods/http:proxy-service-p9g5w-6z7ns:1080/proxy/: ... (200; 17.546845ms)
Dec 31 14:58:21.224: INFO: (8) /api/v1/namespaces/proxy-3267/pods/https:proxy-service-p9g5w-6z7ns:443/proxy/: test<... (200; 15.551089ms)
Dec 31 14:58:21.247: INFO: (9) /api/v1/namespaces/proxy-3267/pods/http:proxy-service-p9g5w-6z7ns:162/proxy/: bar (200; 15.565326ms)
Dec 31 14:58:21.247: INFO: (9) /api/v1/namespaces/proxy-3267/services/https:proxy-service-p9g5w:tlsportname2/proxy/: tls qux (200; 15.443895ms)
Dec 31 14:58:21.267: INFO: (9) /api/v1/namespaces/proxy-3267/pods/proxy-service-p9g5w-6z7ns/proxy/: test (200; 35.354356ms)
Dec 31 14:58:21.267: INFO: (9) /api/v1/namespaces/proxy-3267/services/http:proxy-service-p9g5w:portname2/proxy/: bar (200; 35.994716ms)
Dec 31 14:58:21.268: INFO: (9) /api/v1/namespaces/proxy-3267/services/https:proxy-service-p9g5w:tlsportname1/proxy/: tls baz (200; 36.24616ms)
Dec 31 14:58:21.268: INFO: (9) /api/v1/namespaces/proxy-3267/pods/https:proxy-service-p9g5w-6z7ns:462/proxy/: tls qux (200; 36.296263ms)
Dec 31 14:58:21.268: INFO: (9) /api/v1/namespaces/proxy-3267/services/http:proxy-service-p9g5w:portname1/proxy/: foo (200; 36.736212ms)
Dec 31 14:58:21.268: INFO: (9) /api/v1/namespaces/proxy-3267/pods/https:proxy-service-p9g5w-6z7ns:460/proxy/: tls baz (200; 36.412664ms)
Dec 31 14:58:21.269: INFO: (9) /api/v1/namespaces/proxy-3267/services/proxy-service-p9g5w:portname2/proxy/: bar (200; 37.721301ms)
Dec 31 14:58:21.269: INFO: (9) /api/v1/namespaces/proxy-3267/pods/https:proxy-service-p9g5w-6z7ns:443/proxy/: ... (200; 37.721812ms)
Dec 31 14:58:21.269: INFO: (9) /api/v1/namespaces/proxy-3267/pods/http:proxy-service-p9g5w-6z7ns:160/proxy/: foo (200; 37.562084ms)
Dec 31 14:58:21.279: INFO: (10) /api/v1/namespaces/proxy-3267/pods/https:proxy-service-p9g5w-6z7ns:460/proxy/: tls baz (200; 9.594135ms)
Dec 31 14:58:21.279: INFO: (10) /api/v1/namespaces/proxy-3267/pods/proxy-service-p9g5w-6z7ns:162/proxy/: bar (200; 9.819401ms)
Dec 31 14:58:21.279: INFO: (10) /api/v1/namespaces/proxy-3267/pods/proxy-service-p9g5w-6z7ns/proxy/: test (200; 9.950947ms)
Dec 31 14:58:21.283: INFO: (10) /api/v1/namespaces/proxy-3267/pods/http:proxy-service-p9g5w-6z7ns:1080/proxy/: ... (200; 13.395685ms)
Dec 31 14:58:21.283: INFO: (10) /api/v1/namespaces/proxy-3267/pods/https:proxy-service-p9g5w-6z7ns:443/proxy/: test<... (200; 15.002097ms)
Dec 31 14:58:21.285: INFO: (10) /api/v1/namespaces/proxy-3267/services/proxy-service-p9g5w:portname1/proxy/: foo (200; 15.402732ms)
Dec 31 14:58:21.285: INFO: (10) /api/v1/namespaces/proxy-3267/services/proxy-service-p9g5w:portname2/proxy/: bar (200; 15.300895ms)
Dec 31 14:58:21.285: INFO: (10) /api/v1/namespaces/proxy-3267/services/https:proxy-service-p9g5w:tlsportname2/proxy/: tls qux (200; 15.78261ms)
Dec 31 14:58:21.285: INFO: (10) /api/v1/namespaces/proxy-3267/services/http:proxy-service-p9g5w:portname2/proxy/: bar (200; 15.404121ms)
Dec 31 14:58:21.285: INFO: (10) /api/v1/namespaces/proxy-3267/services/http:proxy-service-p9g5w:portname1/proxy/: foo (200; 15.572218ms)
Dec 31 14:58:21.287: INFO: (10) /api/v1/namespaces/proxy-3267/services/https:proxy-service-p9g5w:tlsportname1/proxy/: tls baz (200; 17.858517ms)
Dec 31 14:58:21.292: INFO: (11) /api/v1/namespaces/proxy-3267/pods/https:proxy-service-p9g5w-6z7ns:443/proxy/: test<... (200; 7.982639ms)
Dec 31 14:58:21.297: INFO: (11) /api/v1/namespaces/proxy-3267/pods/http:proxy-service-p9g5w-6z7ns:160/proxy/: foo (200; 9.314132ms)
Dec 31 14:58:21.299: INFO: (11) /api/v1/namespaces/proxy-3267/services/proxy-service-p9g5w:portname2/proxy/: bar (200; 11.734476ms)
Dec 31 14:58:21.299: INFO: (11) /api/v1/namespaces/proxy-3267/services/proxy-service-p9g5w:portname1/proxy/: foo (200; 12.3198ms)
Dec 31 14:58:21.300: INFO: (11) /api/v1/namespaces/proxy-3267/pods/proxy-service-p9g5w-6z7ns:162/proxy/: bar (200; 12.349328ms)
Dec 31 14:58:21.300: INFO: (11) /api/v1/namespaces/proxy-3267/pods/proxy-service-p9g5w-6z7ns/proxy/: test (200; 12.244624ms)
Dec 31 14:58:21.300: INFO: (11) /api/v1/namespaces/proxy-3267/services/https:proxy-service-p9g5w:tlsportname1/proxy/: tls baz (200; 12.720815ms)
Dec 31 14:58:21.300: INFO: (11) /api/v1/namespaces/proxy-3267/services/https:proxy-service-p9g5w:tlsportname2/proxy/: tls qux (200; 12.708899ms)
Dec 31 14:58:21.300: INFO: (11) /api/v1/namespaces/proxy-3267/services/http:proxy-service-p9g5w:portname1/proxy/: foo (200; 12.878922ms)
Dec 31 14:58:21.301: INFO: (11) /api/v1/namespaces/proxy-3267/pods/http:proxy-service-p9g5w-6z7ns:1080/proxy/: ... (200; 13.171425ms)
Dec 31 14:58:21.301: INFO: (11) /api/v1/namespaces/proxy-3267/services/http:proxy-service-p9g5w:portname2/proxy/: bar (200; 13.479767ms)
Dec 31 14:58:21.305: INFO: (12) /api/v1/namespaces/proxy-3267/pods/http:proxy-service-p9g5w-6z7ns:160/proxy/: foo (200; 3.983644ms)
Dec 31 14:58:21.312: INFO: (12) /api/v1/namespaces/proxy-3267/pods/https:proxy-service-p9g5w-6z7ns:443/proxy/: test<... (200; 18.26726ms)
Dec 31 14:58:21.320: INFO: (12) /api/v1/namespaces/proxy-3267/pods/proxy-service-p9g5w-6z7ns:162/proxy/: bar (200; 18.939249ms)
Dec 31 14:58:21.320: INFO: (12) /api/v1/namespaces/proxy-3267/pods/https:proxy-service-p9g5w-6z7ns:460/proxy/: tls baz (200; 19.334973ms)
Dec 31 14:58:21.321: INFO: (12) /api/v1/namespaces/proxy-3267/pods/http:proxy-service-p9g5w-6z7ns:1080/proxy/: ... (200; 19.337444ms)
Dec 31 14:58:21.321: INFO: (12) /api/v1/namespaces/proxy-3267/pods/https:proxy-service-p9g5w-6z7ns:462/proxy/: tls qux (200; 20.006254ms)
Dec 31 14:58:21.322: INFO: (12) /api/v1/namespaces/proxy-3267/pods/http:proxy-service-p9g5w-6z7ns:162/proxy/: bar (200; 20.484541ms)
Dec 31 14:58:21.325: INFO: (12) /api/v1/namespaces/proxy-3267/services/proxy-service-p9g5w:portname2/proxy/: bar (200; 23.460288ms)
Dec 31 14:58:21.325: INFO: (12) /api/v1/namespaces/proxy-3267/services/http:proxy-service-p9g5w:portname1/proxy/: foo (200; 23.951776ms)
Dec 31 14:58:21.325: INFO: (12) /api/v1/namespaces/proxy-3267/services/http:proxy-service-p9g5w:portname2/proxy/: bar (200; 23.964207ms)
Dec 31 14:58:21.326: INFO: (12) /api/v1/namespaces/proxy-3267/services/https:proxy-service-p9g5w:tlsportname2/proxy/: tls qux (200; 24.309441ms)
Dec 31 14:58:21.326: INFO: (12) /api/v1/namespaces/proxy-3267/services/https:proxy-service-p9g5w:tlsportname1/proxy/: tls baz (200; 24.82075ms)
Dec 31 14:58:21.326: INFO: (12) /api/v1/namespaces/proxy-3267/services/proxy-service-p9g5w:portname1/proxy/: foo (200; 24.899531ms)
Dec 31 14:58:21.328: INFO: (12) /api/v1/namespaces/proxy-3267/pods/proxy-service-p9g5w-6z7ns/proxy/: test (200; 26.806558ms)
Dec 31 14:58:21.341: INFO: (13) /api/v1/namespaces/proxy-3267/pods/https:proxy-service-p9g5w-6z7ns:460/proxy/: tls baz (200; 12.956191ms)
Dec 31 14:58:21.341: INFO: (13) /api/v1/namespaces/proxy-3267/pods/proxy-service-p9g5w-6z7ns/proxy/: test (200; 13.079066ms)
Dec 31 14:58:21.343: INFO: (13) /api/v1/namespaces/proxy-3267/pods/http:proxy-service-p9g5w-6z7ns:1080/proxy/: ... (200; 14.613427ms)
Dec 31 14:58:21.343: INFO: (13) /api/v1/namespaces/proxy-3267/pods/http:proxy-service-p9g5w-6z7ns:162/proxy/: bar (200; 15.265556ms)
Dec 31 14:58:21.343: INFO: (13) /api/v1/namespaces/proxy-3267/pods/proxy-service-p9g5w-6z7ns:160/proxy/: foo (200; 15.480691ms)
Dec 31 14:58:21.343: INFO: (13) /api/v1/namespaces/proxy-3267/pods/https:proxy-service-p9g5w-6z7ns:462/proxy/: tls qux (200; 15.682175ms)
Dec 31 14:58:21.343: INFO: (13) /api/v1/namespaces/proxy-3267/pods/https:proxy-service-p9g5w-6z7ns:443/proxy/: test<... (200; 16.317222ms)
Dec 31 14:58:21.345: INFO: (13) /api/v1/namespaces/proxy-3267/services/proxy-service-p9g5w:portname1/proxy/: foo (200; 16.580521ms)
Dec 31 14:58:21.349: INFO: (13) /api/v1/namespaces/proxy-3267/services/proxy-service-p9g5w:portname2/proxy/: bar (200; 20.947019ms)
Dec 31 14:58:21.350: INFO: (13) /api/v1/namespaces/proxy-3267/services/https:proxy-service-p9g5w:tlsportname1/proxy/: tls baz (200; 21.498457ms)
Dec 31 14:58:21.350: INFO: (13) /api/v1/namespaces/proxy-3267/services/http:proxy-service-p9g5w:portname1/proxy/: foo (200; 21.620189ms)
Dec 31 14:58:21.358: INFO: (14) /api/v1/namespaces/proxy-3267/pods/proxy-service-p9g5w-6z7ns:162/proxy/: bar (200; 7.600514ms)
Dec 31 14:58:21.359: INFO: (14) /api/v1/namespaces/proxy-3267/pods/https:proxy-service-p9g5w-6z7ns:460/proxy/: tls baz (200; 8.735171ms)
Dec 31 14:58:21.359: INFO: (14) /api/v1/namespaces/proxy-3267/pods/https:proxy-service-p9g5w-6z7ns:462/proxy/: tls qux (200; 9.120211ms)
Dec 31 14:58:21.365: INFO: (14) /api/v1/namespaces/proxy-3267/pods/proxy-service-p9g5w-6z7ns:160/proxy/: foo (200; 14.446387ms)
Dec 31 14:58:21.365: INFO: (14) /api/v1/namespaces/proxy-3267/pods/proxy-service-p9g5w-6z7ns:1080/proxy/: test<... (200; 15.070995ms)
Dec 31 14:58:21.365: INFO: (14) /api/v1/namespaces/proxy-3267/pods/http:proxy-service-p9g5w-6z7ns:1080/proxy/: ... (200; 15.158236ms)
Dec 31 14:58:21.366: INFO: (14) /api/v1/namespaces/proxy-3267/services/proxy-service-p9g5w:portname2/proxy/: bar (200; 15.278055ms)
Dec 31 14:58:21.366: INFO: (14) /api/v1/namespaces/proxy-3267/pods/proxy-service-p9g5w-6z7ns/proxy/: test (200; 16.643464ms)
Dec 31 14:58:21.367: INFO: (14) /api/v1/namespaces/proxy-3267/pods/http:proxy-service-p9g5w-6z7ns:162/proxy/: bar (200; 16.79673ms)
Dec 31 14:58:21.367: INFO: (14) /api/v1/namespaces/proxy-3267/services/https:proxy-service-p9g5w:tlsportname1/proxy/: tls baz (200; 16.417052ms)
Dec 31 14:58:21.367: INFO: (14) /api/v1/namespaces/proxy-3267/services/proxy-service-p9g5w:portname1/proxy/: foo (200; 16.576952ms)
Dec 31 14:58:21.367: INFO: (14) /api/v1/namespaces/proxy-3267/services/http:proxy-service-p9g5w:portname1/proxy/: foo (200; 16.983267ms)
Dec 31 14:58:21.367: INFO: (14) /api/v1/namespaces/proxy-3267/pods/https:proxy-service-p9g5w-6z7ns:443/proxy/: test (200; 12.442542ms)
Dec 31 14:58:21.380: INFO: (15) /api/v1/namespaces/proxy-3267/services/proxy-service-p9g5w:portname2/proxy/: bar (200; 12.057522ms)
Dec 31 14:58:21.380: INFO: (15) /api/v1/namespaces/proxy-3267/pods/http:proxy-service-p9g5w-6z7ns:1080/proxy/: ... (200; 12.496122ms)
Dec 31 14:58:21.380: INFO: (15) /api/v1/namespaces/proxy-3267/pods/https:proxy-service-p9g5w-6z7ns:443/proxy/: test<... (200; 12.456695ms)
Dec 31 14:58:21.386: INFO: (16) /api/v1/namespaces/proxy-3267/pods/http:proxy-service-p9g5w-6z7ns:160/proxy/: foo (200; 6.068402ms)
Dec 31 14:58:21.387: INFO: (16) /api/v1/namespaces/proxy-3267/pods/http:proxy-service-p9g5w-6z7ns:1080/proxy/: ... (200; 6.685655ms)
Dec 31 14:58:21.387: INFO: (16) /api/v1/namespaces/proxy-3267/pods/http:proxy-service-p9g5w-6z7ns:162/proxy/: bar (200; 7.013692ms)
Dec 31 14:58:21.388: INFO: (16) /api/v1/namespaces/proxy-3267/pods/proxy-service-p9g5w-6z7ns:162/proxy/: bar (200; 7.475993ms)
Dec 31 14:58:21.389: INFO: (16) /api/v1/namespaces/proxy-3267/pods/proxy-service-p9g5w-6z7ns:160/proxy/: foo (200; 8.176233ms)
Dec 31 14:58:21.389: INFO: (16) /api/v1/namespaces/proxy-3267/pods/https:proxy-service-p9g5w-6z7ns:462/proxy/: tls qux (200; 8.247752ms)
Dec 31 14:58:21.389: INFO: (16) /api/v1/namespaces/proxy-3267/pods/https:proxy-service-p9g5w-6z7ns:443/proxy/: test (200; 8.760759ms)
Dec 31 14:58:21.389: INFO: (16) /api/v1/namespaces/proxy-3267/pods/proxy-service-p9g5w-6z7ns:1080/proxy/: test<... (200; 9.043034ms)
Dec 31 14:58:21.389: INFO: (16) /api/v1/namespaces/proxy-3267/pods/https:proxy-service-p9g5w-6z7ns:460/proxy/: tls baz (200; 8.728949ms)
Dec 31 14:58:21.390: INFO: (16) /api/v1/namespaces/proxy-3267/services/https:proxy-service-p9g5w:tlsportname1/proxy/: tls baz (200; 9.482968ms)
Dec 31 14:58:21.390: INFO: (16) /api/v1/namespaces/proxy-3267/services/proxy-service-p9g5w:portname1/proxy/: foo (200; 9.584697ms)
Dec 31 14:58:21.390: INFO: (16) /api/v1/namespaces/proxy-3267/services/http:proxy-service-p9g5w:portname2/proxy/: bar (200; 9.720717ms)
Dec 31 14:58:21.390: INFO: (16) /api/v1/namespaces/proxy-3267/services/proxy-service-p9g5w:portname2/proxy/: bar (200; 9.763833ms)
Dec 31 14:58:21.390: INFO: (16) /api/v1/namespaces/proxy-3267/services/https:proxy-service-p9g5w:tlsportname2/proxy/: tls qux (200; 9.796408ms)
Dec 31 14:58:21.390: INFO: (16) /api/v1/namespaces/proxy-3267/services/http:proxy-service-p9g5w:portname1/proxy/: foo (200; 10.031513ms)
Dec 31 14:58:21.397: INFO: (17) /api/v1/namespaces/proxy-3267/pods/proxy-service-p9g5w-6z7ns:162/proxy/: bar (200; 6.023522ms)
Dec 31 14:58:21.397: INFO: (17) /api/v1/namespaces/proxy-3267/pods/proxy-service-p9g5w-6z7ns:1080/proxy/: test<... (200; 6.144532ms)
Dec 31 14:58:21.397: INFO: (17) /api/v1/namespaces/proxy-3267/pods/http:proxy-service-p9g5w-6z7ns:1080/proxy/: ... (200; 6.081379ms)
Dec 31 14:58:21.400: INFO: (17) /api/v1/namespaces/proxy-3267/services/http:proxy-service-p9g5w:portname2/proxy/: bar (200; 9.738058ms)
Dec 31 14:58:21.400: INFO: (17) /api/v1/namespaces/proxy-3267/services/https:proxy-service-p9g5w:tlsportname2/proxy/: tls qux (200; 9.926516ms)
Dec 31 14:58:21.401: INFO: (17) /api/v1/namespaces/proxy-3267/pods/proxy-service-p9g5w-6z7ns:160/proxy/: foo (200; 9.730025ms)
Dec 31 14:58:21.401: INFO: (17) /api/v1/namespaces/proxy-3267/services/proxy-service-p9g5w:portname1/proxy/: foo (200; 9.940685ms)
Dec 31 14:58:21.401: INFO: (17) /api/v1/namespaces/proxy-3267/pods/proxy-service-p9g5w-6z7ns/proxy/: test (200; 10.033809ms)
Dec 31 14:58:21.401: INFO: (17) /api/v1/namespaces/proxy-3267/pods/https:proxy-service-p9g5w-6z7ns:443/proxy/: test (200; 10.743239ms)
Dec 31 14:58:21.414: INFO: (18) /api/v1/namespaces/proxy-3267/pods/http:proxy-service-p9g5w-6z7ns:1080/proxy/: ... (200; 11.031691ms)
Dec 31 14:58:21.414: INFO: (18) /api/v1/namespaces/proxy-3267/pods/proxy-service-p9g5w-6z7ns:160/proxy/: foo (200; 11.429464ms)
Dec 31 14:58:21.414: INFO: (18) /api/v1/namespaces/proxy-3267/pods/https:proxy-service-p9g5w-6z7ns:462/proxy/: tls qux (200; 11.835555ms)
Dec 31 14:58:21.415: INFO: (18) /api/v1/namespaces/proxy-3267/pods/http:proxy-service-p9g5w-6z7ns:160/proxy/: foo (200; 12.074354ms)
Dec 31 14:58:21.415: INFO: (18) /api/v1/namespaces/proxy-3267/services/http:proxy-service-p9g5w:portname2/proxy/: bar (200; 11.96988ms)
Dec 31 14:58:21.415: INFO: (18) /api/v1/namespaces/proxy-3267/pods/proxy-service-p9g5w-6z7ns:1080/proxy/: test<... (200; 12.199571ms)
Dec 31 14:58:21.415: INFO: (18) /api/v1/namespaces/proxy-3267/pods/http:proxy-service-p9g5w-6z7ns:162/proxy/: bar (200; 12.623441ms)
Dec 31 14:58:21.415: INFO: (18) /api/v1/namespaces/proxy-3267/pods/proxy-service-p9g5w-6z7ns:162/proxy/: bar (200; 12.162696ms)
Dec 31 14:58:21.418: INFO: (18) /api/v1/namespaces/proxy-3267/services/proxy-service-p9g5w:portname2/proxy/: bar (200; 14.895068ms)
Dec 31 14:58:21.418: INFO: (18) /api/v1/namespaces/proxy-3267/services/http:proxy-service-p9g5w:portname1/proxy/: foo (200; 14.882074ms)
Dec 31 14:58:21.418: INFO: (18) /api/v1/namespaces/proxy-3267/services/https:proxy-service-p9g5w:tlsportname2/proxy/: tls qux (200; 15.783311ms)
Dec 31 14:58:21.419: INFO: (18) /api/v1/namespaces/proxy-3267/services/proxy-service-p9g5w:portname1/proxy/: foo (200; 16.216347ms)
Dec 31 14:58:21.423: INFO: (18) /api/v1/namespaces/proxy-3267/services/https:proxy-service-p9g5w:tlsportname1/proxy/: tls baz (200; 20.754067ms)
Dec 31 14:58:21.432: INFO: (19) /api/v1/namespaces/proxy-3267/pods/http:proxy-service-p9g5w-6z7ns:160/proxy/: foo (200; 8.001998ms)
Dec 31 14:58:21.433: INFO: (19) /api/v1/namespaces/proxy-3267/pods/http:proxy-service-p9g5w-6z7ns:1080/proxy/: ... (200; 9.530239ms)
Dec 31 14:58:21.433: INFO: (19) /api/v1/namespaces/proxy-3267/pods/http:proxy-service-p9g5w-6z7ns:162/proxy/: bar (200; 9.68627ms)
Dec 31 14:58:21.433: INFO: (19) /api/v1/namespaces/proxy-3267/pods/https:proxy-service-p9g5w-6z7ns:443/proxy/: test (200; 10.186538ms)
Dec 31 14:58:21.434: INFO: (19) /api/v1/namespaces/proxy-3267/pods/https:proxy-service-p9g5w-6z7ns:460/proxy/: tls baz (200; 10.166355ms)
Dec 31 14:58:21.434: INFO: (19) /api/v1/namespaces/proxy-3267/pods/proxy-service-p9g5w-6z7ns:1080/proxy/: test<... (200; 10.622927ms)
Dec 31 14:58:21.435: INFO: (19) /api/v1/namespaces/proxy-3267/services/http:proxy-service-p9g5w:portname2/proxy/: bar (200; 11.941004ms)
Dec 31 14:58:21.436: INFO: (19) /api/v1/namespaces/proxy-3267/services/https:proxy-service-p9g5w:tlsportname2/proxy/: tls qux (200; 11.891739ms)
Dec 31 14:58:21.436: INFO: (19) /api/v1/namespaces/proxy-3267/services/proxy-service-p9g5w:portname2/proxy/: bar (200; 12.208087ms)
Dec 31 14:58:21.438: INFO: (19) /api/v1/namespaces/proxy-3267/services/https:proxy-service-p9g5w:tlsportname1/proxy/: tls baz (200; 13.95126ms)
Dec 31 14:58:21.438: INFO: (19) /api/v1/namespaces/proxy-3267/pods/proxy-service-p9g5w-6z7ns:160/proxy/: foo (200; 14.299426ms)
Dec 31 14:58:21.438: INFO: (19) /api/v1/namespaces/proxy-3267/services/proxy-service-p9g5w:portname1/proxy/: foo (200; 14.042028ms)
Dec 31 14:58:21.438: INFO: (19) /api/v1/namespaces/proxy-3267/pods/https:proxy-service-p9g5w-6z7ns:462/proxy/: tls qux (200; 14.276855ms)
Dec 31 14:58:21.438: INFO: (19) /api/v1/namespaces/proxy-3267/services/http:proxy-service-p9g5w:portname1/proxy/: foo (200; 14.89982ms)
Dec 31 14:58:21.439: INFO: (19) /api/v1/namespaces/proxy-3267/pods/proxy-service-p9g5w-6z7ns:162/proxy/: bar (200; 15.691247ms)
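Every URL in the 320 attempts above follows the apiserver proxy subresource pattern, where the target is written as `[scheme:]name[:port]` for either a pod or a service. A sketch of how those paths are composed; `proxy_path` is an illustrative helper, not code from the test itself:

```python
def proxy_path(namespace, kind, name, port=None, scheme=None):
    """Build an apiserver proxy subresource path like the ones above.

    kind is 'pods' or 'services'; scheme ('http'/'https') and port
    (numeric for pods, a named port for services) are optional and map
    onto the [scheme:]name[:port] target syntax.
    """
    target = name
    if scheme:
        target = f"{scheme}:{target}"
    if port is not None:
        target = f"{target}:{port}"
    return f"/api/v1/namespaces/{namespace}/{kind}/{target}/proxy/"

# A few of the 16 variants the test hits on every attempt:
print(proxy_path("proxy-3267", "pods", "proxy-service-p9g5w-6z7ns", 162))
print(proxy_path("proxy-3267", "services", "proxy-service-p9g5w", "portname2", "http"))
print(proxy_path("proxy-3267", "pods", "proxy-service-p9g5w-6z7ns", 443, "https"))
```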
STEP: deleting ReplicationController proxy-service-p9g5w in namespace proxy-3267, will wait for the garbage collector to delete the pods
Dec 31 14:58:21.505: INFO: Deleting ReplicationController proxy-service-p9g5w took: 12.995351ms
Dec 31 14:58:21.806: INFO: Terminating ReplicationController proxy-service-p9g5w pods took: 300.456613ms
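The two lines above show the controller being deleted quickly while its pods are removed afterwards by the garbage collector. That behavior is consistent with a cascading delete request body along these lines (assumption: Background propagation; the exact options the framework helper uses are not shown in this log):

```python
import json

# Sketch of a DeleteOptions body that produces the pattern above: the
# server deletes the RC immediately and the garbage collector then
# terminates the dependent pods in the background.
delete_options = {
    "kind": "DeleteOptions",
    "apiVersion": "v1",
    "propagationPolicy": "Background",
}
print(json.dumps(delete_options))
```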
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 14:58:27.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-3267" for this suite.
Dec 31 14:58:33.038: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 14:58:33.139: INFO: namespace proxy-3267 deletion completed in 6.123992884s

• [SLOW TEST:21.443 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy through a service and a pod  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 14:58:33.140: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-48dfb094-987e-416d-a33e-190a2d4e0e39
STEP: Creating a pod to test consume secrets
Dec 31 14:58:33.258: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4f6a5851-d4c1-442d-ac30-bf1b5cd69fe6" in namespace "projected-9698" to be "success or failure"
Dec 31 14:58:33.263: INFO: Pod "pod-projected-secrets-4f6a5851-d4c1-442d-ac30-bf1b5cd69fe6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.596962ms
Dec 31 14:58:35.275: INFO: Pod "pod-projected-secrets-4f6a5851-d4c1-442d-ac30-bf1b5cd69fe6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016076476s
Dec 31 14:58:37.283: INFO: Pod "pod-projected-secrets-4f6a5851-d4c1-442d-ac30-bf1b5cd69fe6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024891619s
Dec 31 14:58:39.310: INFO: Pod "pod-projected-secrets-4f6a5851-d4c1-442d-ac30-bf1b5cd69fe6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050953762s
Dec 31 14:58:41.324: INFO: Pod "pod-projected-secrets-4f6a5851-d4c1-442d-ac30-bf1b5cd69fe6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.06502438s
STEP: Saw pod success
Dec 31 14:58:41.324: INFO: Pod "pod-projected-secrets-4f6a5851-d4c1-442d-ac30-bf1b5cd69fe6" satisfied condition "success or failure"
Dec 31 14:58:41.330: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-4f6a5851-d4c1-442d-ac30-bf1b5cd69fe6 container projected-secret-volume-test: 
STEP: delete the pod
Dec 31 14:58:41.550: INFO: Waiting for pod pod-projected-secrets-4f6a5851-d4c1-442d-ac30-bf1b5cd69fe6 to disappear
Dec 31 14:58:41.570: INFO: Pod pod-projected-secrets-4f6a5851-d4c1-442d-ac30-bf1b5cd69fe6 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 14:58:41.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9698" for this suite.
Dec 31 14:58:47.836: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 14:58:48.014: INFO: namespace projected-9698 deletion completed in 6.430473111s

• [SLOW TEST:14.874 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
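For reference, the kind of pod this projected-secret spec exercises can be sketched as a manifest; the names, key, and path below are illustrative (the test generates randomized names), but the shape — a projected secret volume whose `items` remap a secret key to a new file path — is what "with mappings" refers to:

```yaml
# Illustrative only: the e2e test generates its own randomized names.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["cat", "/etc/projected-secret-volume/new-path-data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test-map-example
          items:
          - key: data-1             # key in the Secret
            path: new-path-data-1   # remapped file name inside the volume
```

The spec then waits for the pod to reach Phase=Succeeded (the "success or failure" condition in the log) and verifies the mapped file's contents via the container logs.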
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 14:58:48.014: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 31 14:58:48.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-4923'
Dec 31 14:58:48.344: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 31 14:58:48.345: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562
Dec 31 14:58:52.376: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-4923'
Dec 31 14:58:52.612: INFO: stderr: ""
Dec 31 14:58:52.612: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 14:58:52.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4923" for this suite.
Dec 31 14:58:58.725: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 14:58:58.835: INFO: namespace kubectl-4923 deletion completed in 6.15395973s

• [SLOW TEST:10.820 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
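The stderr warning above notes that `--generator=deployment/apps.v1` is deprecated; what that invocation creates is, roughly, the following Deployment (a sketch — the label key and container name mirror `kubectl run` conventions, not anything printed in this log):

```yaml
# Approximate equivalent of:
#   kubectl run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine \
#     --generator=deployment/apps.v1
apiVersion: apps/v1
kind: Deployment
metadata:
  name: e2e-test-nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      run: e2e-test-nginx-deployment
  template:
    metadata:
      labels:
        run: e2e-test-nginx-deployment
    spec:
      containers:
      - name: e2e-test-nginx-deployment
        image: docker.io/library/nginx:1.14-alpine
```

On current clients the suggested replacement is `kubectl create deployment`, which produces an equivalent object.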
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 14:58:58.835: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Dec 31 14:58:59.012: INFO: Waiting up to 5m0s for pod "pod-9fe61214-ea85-4c23-8dcc-6b9ed945678f" in namespace "emptydir-9965" to be "success or failure"
Dec 31 14:58:59.023: INFO: Pod "pod-9fe61214-ea85-4c23-8dcc-6b9ed945678f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.81431ms
Dec 31 14:59:01.038: INFO: Pod "pod-9fe61214-ea85-4c23-8dcc-6b9ed945678f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025143804s
Dec 31 14:59:03.051: INFO: Pod "pod-9fe61214-ea85-4c23-8dcc-6b9ed945678f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038768842s
Dec 31 14:59:05.058: INFO: Pod "pod-9fe61214-ea85-4c23-8dcc-6b9ed945678f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045597981s
Dec 31 14:59:07.070: INFO: Pod "pod-9fe61214-ea85-4c23-8dcc-6b9ed945678f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.057296565s
STEP: Saw pod success
Dec 31 14:59:07.070: INFO: Pod "pod-9fe61214-ea85-4c23-8dcc-6b9ed945678f" satisfied condition "success or failure"
Dec 31 14:59:07.075: INFO: Trying to get logs from node iruya-node pod pod-9fe61214-ea85-4c23-8dcc-6b9ed945678f container test-container: 
STEP: delete the pod
Dec 31 14:59:07.138: INFO: Waiting for pod pod-9fe61214-ea85-4c23-8dcc-6b9ed945678f to disappear
Dec 31 14:59:07.143: INFO: Pod pod-9fe61214-ea85-4c23-8dcc-6b9ed945678f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 14:59:07.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9965" for this suite.
Dec 31 14:59:13.222: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 14:59:13.359: INFO: namespace emptydir-9965 deletion completed in 6.20839161s

• [SLOW TEST:14.524 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
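A sketch of the pod shape behind the `(non-root,0644,default)` EmptyDir variant — details here are illustrative assumptions, since the log elides the manifest: a non-root container writes a file with the given mode into an `emptyDir` on the default medium and then stats it:

```yaml
# Illustrative shape of the test pod; names and the exact test command
# are assumptions, not taken from this log.
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-example
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001        # the "non-root" part of the variant name
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo content > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}           # default medium; the (…,0666,tmpfs) variant below
                           # instead sets emptyDir: {medium: Memory}
```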
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 14:59:13.360: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Dec 31 14:59:13.635: INFO: Waiting up to 5m0s for pod "pod-b71cd365-8fc8-4434-bf9d-c6df7a550cc1" in namespace "emptydir-2989" to be "success or failure"
Dec 31 14:59:13.648: INFO: Pod "pod-b71cd365-8fc8-4434-bf9d-c6df7a550cc1": Phase="Pending", Reason="", readiness=false. Elapsed: 12.298806ms
Dec 31 14:59:15.659: INFO: Pod "pod-b71cd365-8fc8-4434-bf9d-c6df7a550cc1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023256402s
Dec 31 14:59:17.666: INFO: Pod "pod-b71cd365-8fc8-4434-bf9d-c6df7a550cc1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030889044s
Dec 31 14:59:19.684: INFO: Pod "pod-b71cd365-8fc8-4434-bf9d-c6df7a550cc1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048723068s
Dec 31 14:59:21.788: INFO: Pod "pod-b71cd365-8fc8-4434-bf9d-c6df7a550cc1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.152473566s
STEP: Saw pod success
Dec 31 14:59:21.788: INFO: Pod "pod-b71cd365-8fc8-4434-bf9d-c6df7a550cc1" satisfied condition "success or failure"
Dec 31 14:59:21.798: INFO: Trying to get logs from node iruya-node pod pod-b71cd365-8fc8-4434-bf9d-c6df7a550cc1 container test-container: 
STEP: delete the pod
Dec 31 14:59:21.883: INFO: Waiting for pod pod-b71cd365-8fc8-4434-bf9d-c6df7a550cc1 to disappear
Dec 31 14:59:21.997: INFO: Pod pod-b71cd365-8fc8-4434-bf9d-c6df7a550cc1 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 14:59:21.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2989" for this suite.
Dec 31 14:59:28.093: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 14:59:28.240: INFO: namespace emptydir-2989 deletion completed in 6.230533621s

• [SLOW TEST:14.880 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 14:59:28.241: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-f3fea4de-95e6-49e7-a89b-9f6bb602057a in namespace container-probe-8116
Dec 31 14:59:36.448: INFO: Started pod liveness-f3fea4de-95e6-49e7-a89b-9f6bb602057a in namespace container-probe-8116
STEP: checking the pod's current state and verifying that restartCount is present
Dec 31 14:59:36.454: INFO: Initial restart count of pod liveness-f3fea4de-95e6-49e7-a89b-9f6bb602057a is 0
Dec 31 15:00:00.744: INFO: Restart count of pod container-probe-8116/liveness-f3fea4de-95e6-49e7-a89b-9f6bb602057a is now 1 (24.290577831s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 15:00:00.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8116" for this suite.
Dec 31 15:00:06.908: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 15:00:07.046: INFO: namespace container-probe-8116 deletion completed in 6.234218225s

• [SLOW TEST:38.806 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
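The restart observed above (restartCount 0 → 1 after ~24s) is the kubelet acting on an HTTP liveness probe. A minimal sketch of such a probe, assuming an image that starts failing `/healthz` after a delay (image and timings are illustrative, not from this log):

```yaml
# Sketch: once GET /healthz returns a failure, the kubelet kills and
# restarts the container, incrementing restartCount.
apiVersion: v1
kind: Pod
metadata:
  name: liveness-example
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness   # illustrative test image
    args: ["/server"]
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 15
      periodSeconds: 5
```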
SSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 15:00:07.048: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W1231 15:00:48.577335       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 31 15:00:48.577: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 15:00:48.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-187" for this suite.
Dec 31 15:01:06.661: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 15:01:06.767: INFO: namespace gc-187 deletion completed in 18.17682554s

• [SLOW TEST:59.719 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
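"Delete options say so" refers to a `DeleteOptions` body with `propagationPolicy: Orphan`: the ReplicationController is removed, the garbage collector strips the pods' ownerReferences, and the pods are left running — which is why the spec waits 30 seconds to confirm the GC does not mistakenly delete them:

```yaml
# DeleteOptions payload sent with the RC deletion in this spec.
kind: DeleteOptions
apiVersion: v1
propagationPolicy: Orphan
```

With kubectl this corresponds to `--cascade=orphan` (older clients spelled it `--cascade=false`).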
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 15:01:06.769: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 15:01:16.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-814" for this suite.
Dec 31 15:01:38.113: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 15:01:38.291: INFO: namespace replication-controller-814 deletion completed in 22.215095852s

• [SLOW TEST:31.522 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
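The adoption in this spec works because the pre-existing pod carries a `name` label matching the RC's selector and has no controller ownerReference, so the RC controller claims it instead of creating a replacement. A sketch of the matching RC (image assumed; the log does not show the manifest):

```yaml
# Sketch: an RC whose selector matches the bare pod labeled name=pod-adoption;
# the controller adopts that pod rather than creating a new one.
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption
spec:
  replicas: 1
  selector:
    name: pod-adoption       # matches the pre-existing pod's label
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: pod-adoption
        image: docker.io/library/nginx:1.14-alpine
```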
SSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 15:01:38.291: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test use defaults
Dec 31 15:01:38.445: INFO: Waiting up to 5m0s for pod "client-containers-1d205998-48f9-4c3d-bbe5-e1a4e58d1bd5" in namespace "containers-5234" to be "success or failure"
Dec 31 15:01:38.500: INFO: Pod "client-containers-1d205998-48f9-4c3d-bbe5-e1a4e58d1bd5": Phase="Pending", Reason="", readiness=false. Elapsed: 54.38579ms
Dec 31 15:01:40.515: INFO: Pod "client-containers-1d205998-48f9-4c3d-bbe5-e1a4e58d1bd5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068926936s
Dec 31 15:01:42.568: INFO: Pod "client-containers-1d205998-48f9-4c3d-bbe5-e1a4e58d1bd5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.122234741s
Dec 31 15:01:44.609: INFO: Pod "client-containers-1d205998-48f9-4c3d-bbe5-e1a4e58d1bd5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.163618173s
Dec 31 15:01:46.622: INFO: Pod "client-containers-1d205998-48f9-4c3d-bbe5-e1a4e58d1bd5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.17621942s
Dec 31 15:01:48.632: INFO: Pod "client-containers-1d205998-48f9-4c3d-bbe5-e1a4e58d1bd5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.186250608s
STEP: Saw pod success
Dec 31 15:01:48.632: INFO: Pod "client-containers-1d205998-48f9-4c3d-bbe5-e1a4e58d1bd5" satisfied condition "success or failure"
Dec 31 15:01:48.636: INFO: Trying to get logs from node iruya-node pod client-containers-1d205998-48f9-4c3d-bbe5-e1a4e58d1bd5 container test-container: 
STEP: delete the pod
Dec 31 15:01:48.808: INFO: Waiting for pod client-containers-1d205998-48f9-4c3d-bbe5-e1a4e58d1bd5 to disappear
Dec 31 15:01:48.814: INFO: Pod client-containers-1d205998-48f9-4c3d-bbe5-e1a4e58d1bd5 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 15:01:48.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-5234" for this suite.
Dec 31 15:01:54.887: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 15:01:55.030: INFO: namespace containers-5234 deletion completed in 6.204949421s

• [SLOW TEST:16.739 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
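"Image defaults" here means the container spec sets neither `command` nor `args`, so the image's own ENTRYPOINT and CMD run unchanged. A minimal sketch (image name is an assumption):

```yaml
# No command: and no args: on the container, so the image's
# ENTRYPOINT/CMD are used as-is.
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-example
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox   # illustrative; defaults come from this image
```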
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 15:01:55.035: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-5427
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Dec 31 15:01:55.193: INFO: Found 0 stateful pods, waiting for 3
Dec 31 15:02:05.949: INFO: Found 2 stateful pods, waiting for 3
Dec 31 15:02:15.201: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 31 15:02:15.201: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 31 15:02:15.201: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 31 15:02:25.206: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 31 15:02:25.206: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 31 15:02:25.206: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Dec 31 15:02:25.221: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5427 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 31 15:02:27.733: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 31 15:02:27.734: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 31 15:02:27.734: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Dec 31 15:02:37.797: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Dec 31 15:02:47.862: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5427 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 31 15:02:48.375: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Dec 31 15:02:48.375: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 31 15:02:48.375: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 31 15:02:58.426: INFO: Waiting for StatefulSet statefulset-5427/ss2 to complete update
Dec 31 15:02:58.426: INFO: Waiting for Pod statefulset-5427/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 31 15:02:58.427: INFO: Waiting for Pod statefulset-5427/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 31 15:03:08.446: INFO: Waiting for StatefulSet statefulset-5427/ss2 to complete update
Dec 31 15:03:08.446: INFO: Waiting for Pod statefulset-5427/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 31 15:03:08.446: INFO: Waiting for Pod statefulset-5427/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 31 15:03:18.455: INFO: Waiting for StatefulSet statefulset-5427/ss2 to complete update
Dec 31 15:03:18.456: INFO: Waiting for Pod statefulset-5427/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Dec 31 15:03:28.452: INFO: Waiting for StatefulSet statefulset-5427/ss2 to complete update
STEP: Rolling back to a previous revision
Dec 31 15:03:38.444: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5427 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Dec 31 15:03:39.181: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Dec 31 15:03:39.181: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Dec 31 15:03:39.181: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Dec 31 15:03:49.241: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Dec 31 15:03:59.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5427 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Dec 31 15:03:59.912: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Dec 31 15:03:59.912: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Dec 31 15:03:59.912: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Dec 31 15:04:09.948: INFO: Waiting for StatefulSet statefulset-5427/ss2 to complete update
Dec 31 15:04:09.948: INFO: Waiting for Pod statefulset-5427/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 31 15:04:09.948: INFO: Waiting for Pod statefulset-5427/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 31 15:04:09.948: INFO: Waiting for Pod statefulset-5427/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 31 15:04:19.994: INFO: Waiting for StatefulSet statefulset-5427/ss2 to complete update
Dec 31 15:04:19.994: INFO: Waiting for Pod statefulset-5427/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 31 15:04:19.994: INFO: Waiting for Pod statefulset-5427/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 31 15:04:29.983: INFO: Waiting for StatefulSet statefulset-5427/ss2 to complete update
Dec 31 15:04:29.983: INFO: Waiting for Pod statefulset-5427/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 31 15:04:29.983: INFO: Waiting for Pod statefulset-5427/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 31 15:04:40.116: INFO: Waiting for StatefulSet statefulset-5427/ss2 to complete update
Dec 31 15:04:40.116: INFO: Waiting for Pod statefulset-5427/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 31 15:04:49.962: INFO: Waiting for StatefulSet statefulset-5427/ss2 to complete update
Dec 31 15:04:49.963: INFO: Waiting for Pod statefulset-5427/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Dec 31 15:04:59.961: INFO: Waiting for StatefulSet statefulset-5427/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Dec 31 15:05:09.968: INFO: Deleting all statefulset in ns statefulset-5427
Dec 31 15:05:09.975: INFO: Scaling statefulset ss2 to 0
Dec 31 15:05:40.012: INFO: Waiting for statefulset status.replicas updated to 0
Dec 31 15:05:40.018: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 15:05:40.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-5427" for this suite.
Dec 31 15:05:48.144: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 15:05:48.296: INFO: namespace statefulset-5427 deletion completed in 8.220401185s

• [SLOW TEST:233.262 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
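The update step in the spec above amounts to patching the StatefulSet's pod template image from nginx:1.14-alpine to 1.15-alpine under the RollingUpdate strategy, which replaces pods in reverse ordinal order (ss2-2, then ss2-1, then ss2-0); the rollback re-applies the previous template the same way. A sketch of the relevant fields only (the rest of the spec is elided here, not in the original object):

```yaml
# Fragment: only the fields the rolling-update step touches.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  updateStrategy:
    type: RollingUpdate
  template:
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.15-alpine   # updated from 1.14-alpine
```

The `ss2-6c5cd755cd` / `ss2-7c9b54fd4c` names the log waits on are the controller-revision hashes for the old and new templates; the update is complete when every pod reports the target revision.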
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 15:05:48.300: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Dec 31 15:05:48.452: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-2911'
Dec 31 15:05:48.621: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 31 15:05:48.622: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Dec 31 15:05:48.650: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-l8c9f]
Dec 31 15:05:48.651: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-l8c9f" in namespace "kubectl-2911" to be "running and ready"
Dec 31 15:05:48.816: INFO: Pod "e2e-test-nginx-rc-l8c9f": Phase="Pending", Reason="", readiness=false. Elapsed: 165.084063ms
Dec 31 15:05:50.833: INFO: Pod "e2e-test-nginx-rc-l8c9f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.182339778s
Dec 31 15:05:52.851: INFO: Pod "e2e-test-nginx-rc-l8c9f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.200512742s
Dec 31 15:05:54.859: INFO: Pod "e2e-test-nginx-rc-l8c9f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.208841326s
Dec 31 15:05:56.873: INFO: Pod "e2e-test-nginx-rc-l8c9f": Phase="Running", Reason="", readiness=true. Elapsed: 8.221975983s
Dec 31 15:05:56.873: INFO: Pod "e2e-test-nginx-rc-l8c9f" satisfied condition "running and ready"
Dec 31 15:05:56.873: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-l8c9f]
Dec 31 15:05:56.873: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-2911'
Dec 31 15:05:57.100: INFO: stderr: ""
Dec 31 15:05:57.100: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461
Dec 31 15:05:57.100: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-2911'
Dec 31 15:05:57.195: INFO: stderr: ""
Dec 31 15:05:57.195: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 15:05:57.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2911" for this suite.
Dec 31 15:06:19.290: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 15:06:19.447: INFO: namespace kubectl-2911 deletion completed in 22.245390153s

• [SLOW TEST:31.147 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
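Editor's note: the deprecated `kubectl run --generator=run/v1` invocation in the test above created a ReplicationController directly. Roughly equivalent manifest (the `run=<name>` selector and container name follow the old generator's convention; treat this as a sketch, not the exact object the test produced):

```yaml
# Approximately what `kubectl run e2e-test-nginx-rc
#   --image=docker.io/library/nginx:1.14-alpine --generator=run/v1` created.
apiVersion: v1
kind: ReplicationController
metadata:
  name: e2e-test-nginx-rc
  namespace: kubectl-2911
spec:
  replicas: 1
  selector:
    run: e2e-test-nginx-rc
  template:
    metadata:
      labels:
        run: e2e-test-nginx-rc
    spec:
      containers:
      - name: e2e-test-nginx-rc
        image: docker.io/library/nginx:1.14-alpine
```

`kubectl logs rc/e2e-test-nginx-rc` resolves the controller's selector and picks one matching pod; the empty stdout in the log is consistent with an nginx server that has not yet received any requests — the test only verifies that logs can be fetched through the rc reference.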
SSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 15:06:19.448: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Dec 31 15:06:19.699: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 15:06:34.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-5832" for this suite.
Dec 31 15:06:40.441: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 15:06:40.623: INFO: namespace init-container-5832 deletion completed in 6.253984998s

• [SLOW TEST:21.176 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
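Editor's note: a minimal pod matching the scenario above. With `restartPolicy: Never`, a failing init container is not retried, the app container never starts, and the pod phase goes to Failed (all names and images here are illustrative assumptions):

```yaml
# Hypothetical sketch of the RestartNever + failing-init case.
apiVersion: v1
kind: Pod
metadata:
  name: init-fail-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init1
    image: busybox
    command: ["/bin/false"]      # exits non-zero; never retried
  containers:
  - name: app                    # must never start
    image: busybox
    command: ["sleep", "3600"]
```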
SSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 15:06:40.624: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Dec 31 15:06:40.770: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Dec 31 15:06:40.786: INFO: Waiting for terminating namespaces to be deleted...
Dec 31 15:06:40.789: INFO: 
Logging pods the kubelet thinks is on node iruya-node before test
Dec 31 15:06:40.800: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Dec 31 15:06:40.800: INFO: 	Container weave ready: true, restart count 0
Dec 31 15:06:40.800: INFO: 	Container weave-npc ready: true, restart count 0
Dec 31 15:06:40.800: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded)
Dec 31 15:06:40.800: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 31 15:06:40.800: INFO: 
Logging pods the kubelet thinks is on node iruya-server-sfge57q7djm7 before test
Dec 31 15:06:40.812: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded)
Dec 31 15:06:40.812: INFO: 	Container kube-controller-manager ready: true, restart count 14
Dec 31 15:06:40.812: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded)
Dec 31 15:06:40.812: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 31 15:06:40.812: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded)
Dec 31 15:06:40.812: INFO: 	Container kube-apiserver ready: true, restart count 0
Dec 31 15:06:40.812: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded)
Dec 31 15:06:40.812: INFO: 	Container kube-scheduler ready: true, restart count 10
Dec 31 15:06:40.812: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Dec 31 15:06:40.812: INFO: 	Container coredns ready: true, restart count 0
Dec 31 15:06:40.812: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded)
Dec 31 15:06:40.812: INFO: 	Container etcd ready: true, restart count 0
Dec 31 15:06:40.812: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Dec 31 15:06:40.812: INFO: 	Container weave ready: true, restart count 0
Dec 31 15:06:40.812: INFO: 	Container weave-npc ready: true, restart count 0
Dec 31 15:06:40.812: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Dec 31 15:06:40.812: INFO: 	Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-c9c38019-1b41-4d3e-bd62-76bc887056b2 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-c9c38019-1b41-4d3e-bd62-76bc887056b2 off the node iruya-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-c9c38019-1b41-4d3e-bd62-76bc887056b2
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 15:06:59.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-4207" for this suite.
Dec 31 15:07:19.494: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 15:07:19.646: INFO: namespace sched-pred-4207 deletion completed in 20.21868154s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:39.022 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
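Editor's note: the NodeSelector test above labels a discovered node and then relaunches the pod with a matching `nodeSelector`. A sketch of the relaunched pod, using the label key and value from the log (pod name and image are assumptions):

```yaml
# Sketch: after `kubectl label node iruya-node \
#   kubernetes.io/e2e-c9c38019-1b41-4d3e-bd62-76bc887056b2=42`,
# this pod can only schedule onto that node.
apiVersion: v1
kind: Pod
metadata:
  name: with-labels
spec:
  nodeSelector:
    kubernetes.io/e2e-c9c38019-1b41-4d3e-bd62-76bc887056b2: "42"
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1   # assumption: any minimal image works
```

Removing the label afterwards (as the test does) frees the node; pods already scheduled are not evicted by label removal alone.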
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 15:07:19.648: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-a3545c91-f64a-452d-bc13-5ce9af00c4bc
STEP: Creating a pod to test consume secrets
Dec 31 15:07:19.876: INFO: Waiting up to 5m0s for pod "pod-secrets-096e331f-a86b-4724-93a2-780f5d0a2b0c" in namespace "secrets-1341" to be "success or failure"
Dec 31 15:07:19.885: INFO: Pod "pod-secrets-096e331f-a86b-4724-93a2-780f5d0a2b0c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.705844ms
Dec 31 15:07:21.894: INFO: Pod "pod-secrets-096e331f-a86b-4724-93a2-780f5d0a2b0c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017436406s
Dec 31 15:07:23.918: INFO: Pod "pod-secrets-096e331f-a86b-4724-93a2-780f5d0a2b0c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041036849s
Dec 31 15:07:25.928: INFO: Pod "pod-secrets-096e331f-a86b-4724-93a2-780f5d0a2b0c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051025025s
Dec 31 15:07:27.936: INFO: Pod "pod-secrets-096e331f-a86b-4724-93a2-780f5d0a2b0c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.05904547s
STEP: Saw pod success
Dec 31 15:07:27.936: INFO: Pod "pod-secrets-096e331f-a86b-4724-93a2-780f5d0a2b0c" satisfied condition "success or failure"
Dec 31 15:07:27.942: INFO: Trying to get logs from node iruya-node pod pod-secrets-096e331f-a86b-4724-93a2-780f5d0a2b0c container secret-volume-test: 
STEP: delete the pod
Dec 31 15:07:28.003: INFO: Waiting for pod pod-secrets-096e331f-a86b-4724-93a2-780f5d0a2b0c to disappear
Dec 31 15:07:28.020: INFO: Pod pod-secrets-096e331f-a86b-4724-93a2-780f5d0a2b0c no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 15:07:28.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1341" for this suite.
Dec 31 15:07:34.312: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 15:07:34.453: INFO: namespace secrets-1341 deletion completed in 6.31197199s
STEP: Destroying namespace "secret-namespace-9241" for this suite.
Dec 31 15:07:40.544: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 15:07:40.716: INFO: namespace secret-namespace-9241 deletion completed in 6.263527972s

• [SLOW TEST:21.068 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
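Editor's note: the test above creates same-named secrets in two namespaces (`secrets-1341` and `secret-namespace-9241`) and verifies the pod mounts only its own namespace's copy, since Secret names are namespace-scoped. A sketch with a hypothetical shared name and keys:

```yaml
# Two Secrets sharing a name across namespaces; a pod in secrets-1341
# mounting "shared-name" only ever sees value-1.
apiVersion: v1
kind: Secret
metadata:
  name: shared-name            # hypothetical
  namespace: secrets-1341
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Secret
metadata:
  name: shared-name
  namespace: secret-namespace-9241
stringData:
  data-1: other-value
```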
S
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 15:07:40.717: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Dec 31 15:07:48.385: INFO: 0 pods remaining
Dec 31 15:07:48.385: INFO: 0 pods has nil DeletionTimestamp
Dec 31 15:07:48.385: INFO: 
STEP: Gathering metrics
W1231 15:07:48.915661       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 31 15:07:48.915: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 15:07:48.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8126" for this suite.
Dec 31 15:08:03.122: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 15:08:03.241: INFO: namespace gc-8126 deletion completed in 14.318748109s

• [SLOW TEST:22.524 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
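Editor's note: "keep the rc around until all its pods are deleted" is foreground cascading deletion, expressed through the DeleteOptions body on the DELETE request. Sketch of the request body (the path is illustrative):

```json
{
  "kind": "DeleteOptions",
  "apiVersion": "v1",
  "propagationPolicy": "Foreground"
}
```

Sent with `DELETE /api/v1/namespaces/<ns>/replicationcontrollers/<name>`, this adds the `foregroundDeletion` finalizer to the rc, so the object persists until the garbage collector has deleted every dependent pod — matching the "0 pods remaining" wait in the log before the rc itself disappears.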
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 15:08:03.243: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-b9c925e3-e627-4ebe-bf34-57dca569b6b4
STEP: Creating a pod to test consume secrets
Dec 31 15:08:03.497: INFO: Waiting up to 5m0s for pod "pod-secrets-408a989a-b413-402a-b078-d6c657a779c7" in namespace "secrets-4380" to be "success or failure"
Dec 31 15:08:03.563: INFO: Pod "pod-secrets-408a989a-b413-402a-b078-d6c657a779c7": Phase="Pending", Reason="", readiness=false. Elapsed: 65.411268ms
Dec 31 15:08:05.570: INFO: Pod "pod-secrets-408a989a-b413-402a-b078-d6c657a779c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072393789s
Dec 31 15:08:07.577: INFO: Pod "pod-secrets-408a989a-b413-402a-b078-d6c657a779c7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.079774698s
Dec 31 15:08:09.599: INFO: Pod "pod-secrets-408a989a-b413-402a-b078-d6c657a779c7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.101185391s
Dec 31 15:08:11.613: INFO: Pod "pod-secrets-408a989a-b413-402a-b078-d6c657a779c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.115733927s
STEP: Saw pod success
Dec 31 15:08:11.614: INFO: Pod "pod-secrets-408a989a-b413-402a-b078-d6c657a779c7" satisfied condition "success or failure"
Dec 31 15:08:11.625: INFO: Trying to get logs from node iruya-node pod pod-secrets-408a989a-b413-402a-b078-d6c657a779c7 container secret-volume-test: 
STEP: delete the pod
Dec 31 15:08:11.727: INFO: Waiting for pod pod-secrets-408a989a-b413-402a-b078-d6c657a779c7 to disappear
Dec 31 15:08:11.733: INFO: Pod pod-secrets-408a989a-b413-402a-b078-d6c657a779c7 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 15:08:11.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4380" for this suite.
Dec 31 15:08:17.766: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 15:08:17.954: INFO: namespace secrets-4380 deletion completed in 6.207751665s

• [SLOW TEST:14.712 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
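Editor's note: "consumable from pods in volume with mappings" refers to the `items` list on a secret volume, which remaps a secret key onto a custom file path. A sketch using the secret name from the log; the key, path, mount point, and image are illustrative assumptions:

```yaml
# The items mapping below projects key data-1 to new-path-data-1
# instead of its default filename.
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
spec:
  restartPolicy: Never
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map-b9c925e3-e627-4ebe-bf34-57dca569b6b4
      items:
      - key: data-1                 # assumption: key present in the secret
        path: new-path-data-1
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["cat", "/etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
```

The same `items` mechanism underlies the ConfigMap and projected-ConfigMap "with mappings" tests later in this run.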
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 15:08:17.956: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Dec 31 15:08:18.067: INFO: Waiting up to 5m0s for pod "pod-a54a60ba-df6b-4bef-acdc-44a05df679be" in namespace "emptydir-630" to be "success or failure"
Dec 31 15:08:18.075: INFO: Pod "pod-a54a60ba-df6b-4bef-acdc-44a05df679be": Phase="Pending", Reason="", readiness=false. Elapsed: 7.910641ms
Dec 31 15:08:20.083: INFO: Pod "pod-a54a60ba-df6b-4bef-acdc-44a05df679be": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015569929s
Dec 31 15:08:22.131: INFO: Pod "pod-a54a60ba-df6b-4bef-acdc-44a05df679be": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063616838s
Dec 31 15:08:24.141: INFO: Pod "pod-a54a60ba-df6b-4bef-acdc-44a05df679be": Phase="Pending", Reason="", readiness=false. Elapsed: 6.073384024s
Dec 31 15:08:26.154: INFO: Pod "pod-a54a60ba-df6b-4bef-acdc-44a05df679be": Phase="Pending", Reason="", readiness=false. Elapsed: 8.086232301s
Dec 31 15:08:28.164: INFO: Pod "pod-a54a60ba-df6b-4bef-acdc-44a05df679be": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.09651513s
STEP: Saw pod success
Dec 31 15:08:28.164: INFO: Pod "pod-a54a60ba-df6b-4bef-acdc-44a05df679be" satisfied condition "success or failure"
Dec 31 15:08:28.171: INFO: Trying to get logs from node iruya-node pod pod-a54a60ba-df6b-4bef-acdc-44a05df679be container test-container: 
STEP: delete the pod
Dec 31 15:08:28.296: INFO: Waiting for pod pod-a54a60ba-df6b-4bef-acdc-44a05df679be to disappear
Dec 31 15:08:28.304: INFO: Pod pod-a54a60ba-df6b-4bef-acdc-44a05df679be no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 15:08:28.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-630" for this suite.
Dec 31 15:08:34.356: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 15:08:34.444: INFO: namespace emptydir-630 deletion completed in 6.128474251s

• [SLOW TEST:16.489 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
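Editor's note: the "(root,0666,default)" case means a file created as root with mode 0666 on an emptyDir using the default medium (node disk, not `Memory`). A hedged sketch; the real test uses a dedicated mounttest image, whereas this uses busybox for illustration:

```yaml
# emptyDir on the default medium; the container writes a 0666 file
# and lists it so the mode can be verified from the pod logs.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  restartPolicy: Never
  volumes:
  - name: test-volume
    emptyDir: {}                 # default medium: node disk
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
```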
SSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 15:08:34.445: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Dec 31 15:08:34.532: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 15:08:56.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4861" for this suite.
Dec 31 15:09:02.676: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 15:09:02.777: INFO: namespace pods-4861 deletion completed in 6.130959633s

• [SLOW TEST:28.332 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 15:09:02.778: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-f7a0db40-a707-4413-bbe2-2ca230aeae6f
STEP: Creating a pod to test consume configMaps
Dec 31 15:09:03.927: INFO: Waiting up to 5m0s for pod "pod-configmaps-23892ebd-eaeb-4330-bec7-4c10931afc99" in namespace "configmap-9673" to be "success or failure"
Dec 31 15:09:04.087: INFO: Pod "pod-configmaps-23892ebd-eaeb-4330-bec7-4c10931afc99": Phase="Pending", Reason="", readiness=false. Elapsed: 159.596926ms
Dec 31 15:09:06.095: INFO: Pod "pod-configmaps-23892ebd-eaeb-4330-bec7-4c10931afc99": Phase="Pending", Reason="", readiness=false. Elapsed: 2.167721167s
Dec 31 15:09:08.110: INFO: Pod "pod-configmaps-23892ebd-eaeb-4330-bec7-4c10931afc99": Phase="Pending", Reason="", readiness=false. Elapsed: 4.182389475s
Dec 31 15:09:10.127: INFO: Pod "pod-configmaps-23892ebd-eaeb-4330-bec7-4c10931afc99": Phase="Pending", Reason="", readiness=false. Elapsed: 6.199620713s
Dec 31 15:09:12.139: INFO: Pod "pod-configmaps-23892ebd-eaeb-4330-bec7-4c10931afc99": Phase="Pending", Reason="", readiness=false. Elapsed: 8.211075502s
Dec 31 15:09:14.145: INFO: Pod "pod-configmaps-23892ebd-eaeb-4330-bec7-4c10931afc99": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.217072836s
STEP: Saw pod success
Dec 31 15:09:14.145: INFO: Pod "pod-configmaps-23892ebd-eaeb-4330-bec7-4c10931afc99" satisfied condition "success or failure"
Dec 31 15:09:14.148: INFO: Trying to get logs from node iruya-node pod pod-configmaps-23892ebd-eaeb-4330-bec7-4c10931afc99 container configmap-volume-test: 
STEP: delete the pod
Dec 31 15:09:14.286: INFO: Waiting for pod pod-configmaps-23892ebd-eaeb-4330-bec7-4c10931afc99 to disappear
Dec 31 15:09:14.298: INFO: Pod pod-configmaps-23892ebd-eaeb-4330-bec7-4c10931afc99 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 15:09:14.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9673" for this suite.
Dec 31 15:09:20.359: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 15:09:20.582: INFO: namespace configmap-9673 deletion completed in 6.278412619s

• [SLOW TEST:17.804 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 15:09:20.584: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-85d3f1da-0caf-4ae0-b49a-64723a088f98
STEP: Creating a pod to test consume configMaps
Dec 31 15:09:20.732: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b68a82be-cb1e-4f91-ab0c-f70163b19b8f" in namespace "projected-5866" to be "success or failure"
Dec 31 15:09:20.755: INFO: Pod "pod-projected-configmaps-b68a82be-cb1e-4f91-ab0c-f70163b19b8f": Phase="Pending", Reason="", readiness=false. Elapsed: 23.648477ms
Dec 31 15:09:22.769: INFO: Pod "pod-projected-configmaps-b68a82be-cb1e-4f91-ab0c-f70163b19b8f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037200472s
Dec 31 15:09:24.777: INFO: Pod "pod-projected-configmaps-b68a82be-cb1e-4f91-ab0c-f70163b19b8f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045160306s
Dec 31 15:09:26.788: INFO: Pod "pod-projected-configmaps-b68a82be-cb1e-4f91-ab0c-f70163b19b8f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056688442s
Dec 31 15:09:28.807: INFO: Pod "pod-projected-configmaps-b68a82be-cb1e-4f91-ab0c-f70163b19b8f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.074988517s
STEP: Saw pod success
Dec 31 15:09:28.807: INFO: Pod "pod-projected-configmaps-b68a82be-cb1e-4f91-ab0c-f70163b19b8f" satisfied condition "success or failure"
Dec 31 15:09:28.812: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-b68a82be-cb1e-4f91-ab0c-f70163b19b8f container projected-configmap-volume-test: 
STEP: delete the pod
Dec 31 15:09:28.902: INFO: Waiting for pod pod-projected-configmaps-b68a82be-cb1e-4f91-ab0c-f70163b19b8f to disappear
Dec 31 15:09:28.906: INFO: Pod pod-projected-configmaps-b68a82be-cb1e-4f91-ab0c-f70163b19b8f no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 15:09:28.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5866" for this suite.
Dec 31 15:09:36.985: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 15:09:37.071: INFO: namespace projected-5866 deletion completed in 8.15294819s

• [SLOW TEST:16.487 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 15:09:37.072: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-2792
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 31 15:09:37.146: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 31 15:10:15.630: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2792 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 31 15:10:15.630: INFO: >>> kubeConfig: /root/.kube/config
Dec 31 15:10:16.327: INFO: Found all expected endpoints: [netserver-0]
Dec 31 15:10:16.338: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.1:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2792 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 31 15:10:16.339: INFO: >>> kubeConfig: /root/.kube/config
Dec 31 15:10:16.717: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 15:10:16.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-2792" for this suite.
Dec 31 15:10:44.818: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 15:10:45.061: INFO: namespace pod-network-test-2792 deletion completed in 28.331931396s

• [SLOW TEST:67.990 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 15:10:45.062: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-downwardapi-ffmf
STEP: Creating a pod to test atomic-volume-subpath
Dec 31 15:10:45.214: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-ffmf" in namespace "subpath-5431" to be "success or failure"
Dec 31 15:10:45.230: INFO: Pod "pod-subpath-test-downwardapi-ffmf": Phase="Pending", Reason="", readiness=false. Elapsed: 16.443501ms
Dec 31 15:10:47.249: INFO: Pod "pod-subpath-test-downwardapi-ffmf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035380763s
Dec 31 15:10:49.271: INFO: Pod "pod-subpath-test-downwardapi-ffmf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057078457s
Dec 31 15:10:51.288: INFO: Pod "pod-subpath-test-downwardapi-ffmf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.074179636s
Dec 31 15:10:53.300: INFO: Pod "pod-subpath-test-downwardapi-ffmf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.086619744s
Dec 31 15:10:55.325: INFO: Pod "pod-subpath-test-downwardapi-ffmf": Phase="Running", Reason="", readiness=true. Elapsed: 10.111192261s
Dec 31 15:10:57.337: INFO: Pod "pod-subpath-test-downwardapi-ffmf": Phase="Running", Reason="", readiness=true. Elapsed: 12.122768181s
Dec 31 15:10:59.348: INFO: Pod "pod-subpath-test-downwardapi-ffmf": Phase="Running", Reason="", readiness=true. Elapsed: 14.134146308s
Dec 31 15:11:01.358: INFO: Pod "pod-subpath-test-downwardapi-ffmf": Phase="Running", Reason="", readiness=true. Elapsed: 16.14389808s
Dec 31 15:11:03.367: INFO: Pod "pod-subpath-test-downwardapi-ffmf": Phase="Running", Reason="", readiness=true. Elapsed: 18.153450514s
Dec 31 15:11:05.376: INFO: Pod "pod-subpath-test-downwardapi-ffmf": Phase="Running", Reason="", readiness=true. Elapsed: 20.162481177s
Dec 31 15:11:07.388: INFO: Pod "pod-subpath-test-downwardapi-ffmf": Phase="Running", Reason="", readiness=true. Elapsed: 22.173935102s
Dec 31 15:11:09.396: INFO: Pod "pod-subpath-test-downwardapi-ffmf": Phase="Running", Reason="", readiness=true. Elapsed: 24.182389843s
Dec 31 15:11:11.405: INFO: Pod "pod-subpath-test-downwardapi-ffmf": Phase="Running", Reason="", readiness=true. Elapsed: 26.191670519s
Dec 31 15:11:13.414: INFO: Pod "pod-subpath-test-downwardapi-ffmf": Phase="Running", Reason="", readiness=true. Elapsed: 28.199832331s
Dec 31 15:11:15.495: INFO: Pod "pod-subpath-test-downwardapi-ffmf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.281351106s
STEP: Saw pod success
Dec 31 15:11:15.495: INFO: Pod "pod-subpath-test-downwardapi-ffmf" satisfied condition "success or failure"
Dec 31 15:11:15.509: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-downwardapi-ffmf container test-container-subpath-downwardapi-ffmf: 
STEP: delete the pod
Dec 31 15:11:15.564: INFO: Waiting for pod pod-subpath-test-downwardapi-ffmf to disappear
Dec 31 15:11:15.570: INFO: Pod pod-subpath-test-downwardapi-ffmf no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-ffmf
Dec 31 15:11:15.570: INFO: Deleting pod "pod-subpath-test-downwardapi-ffmf" in namespace "subpath-5431"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 15:11:15.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-5431" for this suite.
Dec 31 15:11:21.703: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 15:11:21.885: INFO: namespace subpath-5431 deletion completed in 6.29969551s

• [SLOW TEST:36.823 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 15:11:21.887: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 31 15:11:21.964: INFO: Waiting up to 5m0s for pod "downwardapi-volume-73db1338-1829-4b6b-90c5-f307dce7dbda" in namespace "downward-api-2249" to be "success or failure"
Dec 31 15:11:21.985: INFO: Pod "downwardapi-volume-73db1338-1829-4b6b-90c5-f307dce7dbda": Phase="Pending", Reason="", readiness=false. Elapsed: 20.441838ms
Dec 31 15:11:24.003: INFO: Pod "downwardapi-volume-73db1338-1829-4b6b-90c5-f307dce7dbda": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038361945s
Dec 31 15:11:26.060: INFO: Pod "downwardapi-volume-73db1338-1829-4b6b-90c5-f307dce7dbda": Phase="Pending", Reason="", readiness=false. Elapsed: 4.095220274s
Dec 31 15:11:28.067: INFO: Pod "downwardapi-volume-73db1338-1829-4b6b-90c5-f307dce7dbda": Phase="Pending", Reason="", readiness=false. Elapsed: 6.102834426s
Dec 31 15:11:30.079: INFO: Pod "downwardapi-volume-73db1338-1829-4b6b-90c5-f307dce7dbda": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.114949325s
STEP: Saw pod success
Dec 31 15:11:30.080: INFO: Pod "downwardapi-volume-73db1338-1829-4b6b-90c5-f307dce7dbda" satisfied condition "success or failure"
Dec 31 15:11:30.084: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-73db1338-1829-4b6b-90c5-f307dce7dbda container client-container: 
STEP: delete the pod
Dec 31 15:11:30.194: INFO: Waiting for pod downwardapi-volume-73db1338-1829-4b6b-90c5-f307dce7dbda to disappear
Dec 31 15:11:30.200: INFO: Pod downwardapi-volume-73db1338-1829-4b6b-90c5-f307dce7dbda no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 15:11:30.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2249" for this suite.
Dec 31 15:11:36.237: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 15:11:36.382: INFO: namespace downward-api-2249 deletion completed in 6.174667105s

• [SLOW TEST:14.495 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 15:11:36.383: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Dec 31 15:11:36.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Dec 31 15:11:36.718: INFO: stderr: ""
Dec 31 15:11:36.718: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T16:55:20Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.1\", GitCommit:\"4485c6f18cee9a5d3c3b4e523bd27972b1b53892\", GitTreeState:\"clean\", BuildDate:\"2019-07-18T09:09:21Z\", GoVersion:\"go1.12.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 15:11:36.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6392" for this suite.
Dec 31 15:11:42.762: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 15:11:42.964: INFO: namespace kubectl-6392 deletion completed in 6.23789637s

• [SLOW TEST:6.581 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check is all data is printed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 15:11:42.966: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Dec 31 15:11:43.074: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b3933cfd-98ba-4dd2-9e8a-15db85cb528e" in namespace "projected-6212" to be "success or failure"
Dec 31 15:11:43.299: INFO: Pod "downwardapi-volume-b3933cfd-98ba-4dd2-9e8a-15db85cb528e": Phase="Pending", Reason="", readiness=false. Elapsed: 224.473216ms
Dec 31 15:11:45.324: INFO: Pod "downwardapi-volume-b3933cfd-98ba-4dd2-9e8a-15db85cb528e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.249393948s
Dec 31 15:11:47.378: INFO: Pod "downwardapi-volume-b3933cfd-98ba-4dd2-9e8a-15db85cb528e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.303530186s
Dec 31 15:11:49.386: INFO: Pod "downwardapi-volume-b3933cfd-98ba-4dd2-9e8a-15db85cb528e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.311592198s
Dec 31 15:11:51.394: INFO: Pod "downwardapi-volume-b3933cfd-98ba-4dd2-9e8a-15db85cb528e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.319697348s
STEP: Saw pod success
Dec 31 15:11:51.394: INFO: Pod "downwardapi-volume-b3933cfd-98ba-4dd2-9e8a-15db85cb528e" satisfied condition "success or failure"
Dec 31 15:11:51.400: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-b3933cfd-98ba-4dd2-9e8a-15db85cb528e container client-container: 
STEP: delete the pod
Dec 31 15:11:51.462: INFO: Waiting for pod downwardapi-volume-b3933cfd-98ba-4dd2-9e8a-15db85cb528e to disappear
Dec 31 15:11:51.529: INFO: Pod downwardapi-volume-b3933cfd-98ba-4dd2-9e8a-15db85cb528e no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 15:11:51.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6212" for this suite.
Dec 31 15:11:57.578: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 15:11:57.753: INFO: namespace projected-6212 deletion completed in 6.204958522s

• [SLOW TEST:14.788 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Dec 31 15:11:57.754: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: expected 0 pods, got 1 pods
STEP: Gathering metrics
W1231 15:12:01.576949       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 31 15:12:01.577: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Dec 31 15:12:01.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1505" for this suite.
Dec 31 15:12:07.640: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 31 15:12:07.927: INFO: namespace gc-1505 deletion completed in 6.343792073s

• [SLOW TEST:10.173 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
Dec 31 15:12:07.927: INFO: Running AfterSuite actions on all nodes
Dec 31 15:12:07.927: INFO: Running AfterSuite actions on node 1
Dec 31 15:12:07.927: INFO: Skipping dumping logs from cluster

Ran 215 of 4412 Specs in 8157.855 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4197 Skipped
PASS