I0205 12:56:14.171775 8 e2e.go:243] Starting e2e run "70ce5e23-5821-45a6-b3e0-73ae3990110b" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1580907373 - Will randomize all specs
Will run 215 of 4412 specs

Feb 5 12:56:14.416: INFO: >>> kubeConfig: /root/.kube/config
Feb 5 12:56:14.421: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Feb 5 12:56:14.457: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Feb 5 12:56:14.513: INFO: 10 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Feb 5 12:56:14.513: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Feb 5 12:56:14.513: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Feb 5 12:56:14.525: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Feb 5 12:56:14.525: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Feb 5 12:56:14.525: INFO: e2e test version: v1.15.7
Feb 5 12:56:14.527: INFO: kube-apiserver version: v1.15.1
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 5 12:56:14.528: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
Feb 5 12:56:14.665: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 pods, got 2 pods
STEP: expected 0 rs, got 1 rs
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0205 12:56:18.025148 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 5 12:56:18.025: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 5 12:56:18.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5759" for this suite.
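The cascade this test exercises works through ownerReferences: the ReplicaSet a Deployment creates carries a reference back to its owner, so deleting the Deployment without orphaning lets the garbage collector remove the ReplicaSet and its Pods. As an illustrative sketch only (the names and uid below are hypothetical placeholders, not values from this run), such a ReplicaSet's metadata might look like:

```yaml
# Illustrative only: a ReplicaSet owned by a hypothetical Deployment "example-deploy".
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: example-deploy-5d59d67564
  namespace: default
  ownerReferences:
  - apiVersion: apps/v1
    kind: Deployment
    name: example-deploy
    uid: 00000000-0000-0000-0000-000000000000   # placeholder uid
    controller: true
    blockOwnerDeletion: true
```

When the owning Deployment is deleted with foreground or background propagation (the "not orphaning" case above), dependents carrying such a reference are collected; with orphan propagation the reference is stripped instead and the ReplicaSet survives.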
Feb 5 12:56:24.053: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 5 12:56:24.132: INFO: namespace gc-5759 deletion completed in 6.102027778s

• [SLOW TEST:9.604 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 5 12:56:24.132: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Feb 5 12:56:24.258: INFO: Got : MODIFIED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-7753,SelfLink:/api/v1/namespaces/watch-7753/configmaps/e2e-watch-test-resource-version,UID:91ea0616-72e3-4340-a9c5-4a604ce7a33b,ResourceVersion:23188932,Generation:0,CreationTimestamp:2020-02-05 12:56:24 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 5 12:56:24.258: INFO: Got : DELETED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-7753,SelfLink:/api/v1/namespaces/watch-7753/configmaps/e2e-watch-test-resource-version,UID:91ea0616-72e3-4340-a9c5-4a604ce7a33b,ResourceVersion:23188933,Generation:0,CreationTimestamp:2020-02-05 12:56:24 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 5 12:56:24.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7753" for this suite.
Feb 5 12:56:30.277: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 5 12:56:30.378: INFO: namespace watch-7753 deletion completed in 6.114300112s

• [SLOW TEST:6.246 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 5 12:56:30.378: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-ccedcbcd-0f80-495c-b926-f8ec15fbda11
STEP: Creating a pod to test consume configMaps
Feb 5 12:56:30.531: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-380e801b-a8b2-4351-97b8-5aad4f6ca49c" in namespace "projected-4708" to be "success or failure"
Feb 5 12:56:30.547: INFO: Pod "pod-projected-configmaps-380e801b-a8b2-4351-97b8-5aad4f6ca49c": Phase="Pending", Reason="", readiness=false. Elapsed: 15.867012ms
Feb 5 12:56:32.566: INFO: Pod "pod-projected-configmaps-380e801b-a8b2-4351-97b8-5aad4f6ca49c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034923781s
Feb 5 12:56:34.584: INFO: Pod "pod-projected-configmaps-380e801b-a8b2-4351-97b8-5aad4f6ca49c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053401391s
Feb 5 12:56:36.595: INFO: Pod "pod-projected-configmaps-380e801b-a8b2-4351-97b8-5aad4f6ca49c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.06446596s
Feb 5 12:56:38.603: INFO: Pod "pod-projected-configmaps-380e801b-a8b2-4351-97b8-5aad4f6ca49c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.072513325s
Feb 5 12:56:40.625: INFO: Pod "pod-projected-configmaps-380e801b-a8b2-4351-97b8-5aad4f6ca49c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.093944021s
STEP: Saw pod success
Feb 5 12:56:40.625: INFO: Pod "pod-projected-configmaps-380e801b-a8b2-4351-97b8-5aad4f6ca49c" satisfied condition "success or failure"
Feb 5 12:56:40.632: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-380e801b-a8b2-4351-97b8-5aad4f6ca49c container projected-configmap-volume-test:
STEP: delete the pod
Feb 5 12:56:40.717: INFO: Waiting for pod pod-projected-configmaps-380e801b-a8b2-4351-97b8-5aad4f6ca49c to disappear
Feb 5 12:56:40.722: INFO: Pod pod-projected-configmaps-380e801b-a8b2-4351-97b8-5aad4f6ca49c no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 5 12:56:40.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4708" for this suite.
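The pod this spec generates mounts the same ConfigMap through two projected volumes and reads both copies back. A minimal sketch of that shape (all names, the image, the key `data`, and the mount paths are illustrative placeholders, not the exact spec the framework builds):

```yaml
# Illustrative only: one ConfigMap consumed via two projected volumes in a single pod.
apiVersion: v1
kind: Pod
metadata:
  name: configmap-two-volumes   # placeholder name
spec:
  restartPolicy: Never
  containers:
  - name: check
    image: busybox   # placeholder image
    # Assumes the ConfigMap has a key named "data"; keys become file names.
    command: ["sh", "-c", "cat /etc/cm-one/data /etc/cm-two/data"]
    volumeMounts:
    - name: cm-one
      mountPath: /etc/cm-one
    - name: cm-two
      mountPath: /etc/cm-two
  volumes:
  - name: cm-one
    projected:
      sources:
      - configMap:
          name: example-configmap   # placeholder ConfigMap name
  - name: cm-two
    projected:
      sources:
      - configMap:
          name: example-configmap
```

A pod built this way succeeds (Phase="Succeeded", as in the log above) when both mounts expose identical file contents.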
Feb 5 12:56:46.745: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 5 12:56:46.850: INFO: namespace projected-4708 deletion completed in 6.120787508s

• [SLOW TEST:16.472 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 5 12:56:46.850: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-projected-gf6t
STEP: Creating a pod to test atomic-volume-subpath
Feb 5 12:56:47.014: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-gf6t" in namespace "subpath-1594" to be "success or failure"
Feb 5 12:56:47.024: INFO: Pod "pod-subpath-test-projected-gf6t": Phase="Pending", Reason="", readiness=false. Elapsed: 10.336392ms
Feb 5 12:56:49.033: INFO: Pod "pod-subpath-test-projected-gf6t": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018937125s
Feb 5 12:56:51.040: INFO: Pod "pod-subpath-test-projected-gf6t": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026518662s
Feb 5 12:56:53.050: INFO: Pod "pod-subpath-test-projected-gf6t": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03610953s
Feb 5 12:56:55.055: INFO: Pod "pod-subpath-test-projected-gf6t": Phase="Pending", Reason="", readiness=false. Elapsed: 8.040813731s
Feb 5 12:56:57.063: INFO: Pod "pod-subpath-test-projected-gf6t": Phase="Running", Reason="", readiness=true. Elapsed: 10.048793712s
Feb 5 12:56:59.070: INFO: Pod "pod-subpath-test-projected-gf6t": Phase="Running", Reason="", readiness=true. Elapsed: 12.055952531s
Feb 5 12:57:01.076: INFO: Pod "pod-subpath-test-projected-gf6t": Phase="Running", Reason="", readiness=true. Elapsed: 14.062653295s
Feb 5 12:57:03.084: INFO: Pod "pod-subpath-test-projected-gf6t": Phase="Running", Reason="", readiness=true. Elapsed: 16.07029203s
Feb 5 12:57:05.092: INFO: Pod "pod-subpath-test-projected-gf6t": Phase="Running", Reason="", readiness=true. Elapsed: 18.07809002s
Feb 5 12:57:07.098: INFO: Pod "pod-subpath-test-projected-gf6t": Phase="Running", Reason="", readiness=true. Elapsed: 20.084619359s
Feb 5 12:57:09.961: INFO: Pod "pod-subpath-test-projected-gf6t": Phase="Running", Reason="", readiness=true. Elapsed: 22.947463134s
Feb 5 12:57:11.966: INFO: Pod "pod-subpath-test-projected-gf6t": Phase="Running", Reason="", readiness=true. Elapsed: 24.95235771s
Feb 5 12:57:13.976: INFO: Pod "pod-subpath-test-projected-gf6t": Phase="Running", Reason="", readiness=true. Elapsed: 26.962603086s
Feb 5 12:57:15.986: INFO: Pod "pod-subpath-test-projected-gf6t": Phase="Running", Reason="", readiness=true. Elapsed: 28.971973132s
Feb 5 12:57:18.000: INFO: Pod "pod-subpath-test-projected-gf6t": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.986452216s
STEP: Saw pod success
Feb 5 12:57:18.000: INFO: Pod "pod-subpath-test-projected-gf6t" satisfied condition "success or failure"
Feb 5 12:57:18.008: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-projected-gf6t container test-container-subpath-projected-gf6t:
STEP: delete the pod
Feb 5 12:57:18.196: INFO: Waiting for pod pod-subpath-test-projected-gf6t to disappear
Feb 5 12:57:18.250: INFO: Pod pod-subpath-test-projected-gf6t no longer exists
STEP: Deleting pod pod-subpath-test-projected-gf6t
Feb 5 12:57:18.251: INFO: Deleting pod "pod-subpath-test-projected-gf6t" in namespace "subpath-1594"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 5 12:57:18.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-1594" for this suite.
Feb 5 12:57:26.320: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 5 12:57:26.450: INFO: namespace subpath-1594 deletion completed in 8.184446511s

• [SLOW TEST:39.600 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 5 12:57:26.451: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should add annotations for pods in rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Feb 5 12:57:26.569: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9097'
Feb 5 12:57:28.971: INFO: stderr: ""
Feb 5 12:57:28.971: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Feb 5 12:57:29.978: INFO: Selector matched 1 pods for map[app:redis]
Feb 5 12:57:29.978: INFO: Found 0 / 1
Feb 5 12:57:30.992: INFO: Selector matched 1 pods for map[app:redis]
Feb 5 12:57:30.992: INFO: Found 0 / 1
Feb 5 12:57:31.980: INFO: Selector matched 1 pods for map[app:redis]
Feb 5 12:57:31.980: INFO: Found 0 / 1
Feb 5 12:57:32.979: INFO: Selector matched 1 pods for map[app:redis]
Feb 5 12:57:32.979: INFO: Found 0 / 1
Feb 5 12:57:33.983: INFO: Selector matched 1 pods for map[app:redis]
Feb 5 12:57:33.983: INFO: Found 0 / 1
Feb 5 12:57:35.024: INFO: Selector matched 1 pods for map[app:redis]
Feb 5 12:57:35.024: INFO: Found 0 / 1
Feb 5 12:57:35.981: INFO: Selector matched 1 pods for map[app:redis]
Feb 5 12:57:35.981: INFO: Found 0 / 1
Feb 5 12:57:36.980: INFO: Selector matched 1 pods for map[app:redis]
Feb 5 12:57:36.980: INFO: Found 1 / 1
Feb 5 12:57:36.980: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
STEP: patching all pods
Feb 5 12:57:36.983: INFO: Selector matched 1 pods for map[app:redis]
Feb 5 12:57:36.983: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Feb 5 12:57:36.984: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-cddvh --namespace=kubectl-9097 -p {"metadata":{"annotations":{"x":"y"}}}'
Feb 5 12:57:37.096: INFO: stderr: ""
Feb 5 12:57:37.096: INFO: stdout: "pod/redis-master-cddvh patched\n"
STEP: checking annotations
Feb 5 12:57:37.099: INFO: Selector matched 1 pods for map[app:redis]
Feb 5 12:57:37.099: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 5 12:57:37.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9097" for this suite.
Feb 5 12:57:59.139: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 5 12:57:59.258: INFO: namespace kubectl-9097 deletion completed in 22.156480859s

• [SLOW TEST:32.807 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should add annotations for pods in rc [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 5 12:57:59.259: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create and stop a working application [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating all guestbook components
Feb 5 12:57:59.356: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Feb 5 12:57:59.356: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8887'
Feb 5 12:57:59.817: INFO: stderr: ""
Feb 5 12:57:59.817: INFO: stdout: "service/redis-slave created\n"
Feb 5 12:57:59.817: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Feb 5 12:57:59.817: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8887'
Feb 5 12:58:00.379: INFO: stderr: ""
Feb 5 12:58:00.379: INFO: stdout: "service/redis-master created\n"
Feb 5 12:58:00.379: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Feb 5 12:58:00.380: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8887'
Feb 5 12:58:00.792: INFO: stderr: ""
Feb 5 12:58:00.792: INFO: stdout: "service/frontend created\n"
Feb 5 12:58:00.793: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Feb 5 12:58:00.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8887'
Feb 5 12:58:01.177: INFO: stderr: ""
Feb 5 12:58:01.177: INFO: stdout: "deployment.apps/frontend created\n"
Feb 5 12:58:01.177: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Feb 5 12:58:01.177: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8887'
Feb 5 12:58:01.602: INFO: stderr: ""
Feb 5 12:58:01.603: INFO: stdout: "deployment.apps/redis-master created\n"
Feb 5 12:58:01.604: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Feb 5 12:58:01.604: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8887'
Feb 5 12:58:02.887: INFO: stderr: ""
Feb 5 12:58:02.887: INFO: stdout: "deployment.apps/redis-slave created\n"
STEP: validating guestbook app
Feb 5 12:58:02.887: INFO: Waiting for all frontend pods to be Running.
Feb 5 12:58:27.939: INFO: Waiting for frontend to serve content.
Feb 5 12:58:28.037: INFO: Trying to add a new entry to the guestbook.
Feb 5 12:58:28.061: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Feb 5 12:58:28.080: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8887'
Feb 5 12:58:28.260: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 5 12:58:28.260: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Feb 5 12:58:28.261: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8887'
Feb 5 12:58:28.593: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 5 12:58:28.593: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Feb 5 12:58:28.594: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8887'
Feb 5 12:58:28.934: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 5 12:58:28.934: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb 5 12:58:28.934: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8887'
Feb 5 12:58:29.011: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 5 12:58:29.011: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb 5 12:58:29.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8887'
Feb 5 12:58:29.113: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 5 12:58:29.113: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Feb 5 12:58:29.114: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8887'
Feb 5 12:58:29.228: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 5 12:58:29.228: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 5 12:58:29.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8887" for this suite.
Feb 5 12:59:23.298: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 5 12:59:23.505: INFO: namespace kubectl-8887 deletion completed in 54.270494472s

• [SLOW TEST:84.246 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a working application [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 5 12:59:23.505: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 5 12:59:23.681: INFO: Waiting up to 5m0s for pod "downwardapi-volume-23309bf3-e733-4aea-ae4a-c23ce445f546" in namespace "projected-3379" to be "success or failure"
Feb 5 12:59:23.711: INFO: Pod "downwardapi-volume-23309bf3-e733-4aea-ae4a-c23ce445f546": Phase="Pending", Reason="", readiness=false. Elapsed: 30.275113ms
Feb 5 12:59:25.719: INFO: Pod "downwardapi-volume-23309bf3-e733-4aea-ae4a-c23ce445f546": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037865308s
Feb 5 12:59:27.731: INFO: Pod "downwardapi-volume-23309bf3-e733-4aea-ae4a-c23ce445f546": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050074653s
Feb 5 12:59:29.739: INFO: Pod "downwardapi-volume-23309bf3-e733-4aea-ae4a-c23ce445f546": Phase="Pending", Reason="", readiness=false. Elapsed: 6.058341686s
Feb 5 12:59:31.750: INFO: Pod "downwardapi-volume-23309bf3-e733-4aea-ae4a-c23ce445f546": Phase="Pending", Reason="", readiness=false. Elapsed: 8.06918208s
Feb 5 12:59:33.768: INFO: Pod "downwardapi-volume-23309bf3-e733-4aea-ae4a-c23ce445f546": Phase="Pending", Reason="", readiness=false. Elapsed: 10.08749097s
Feb 5 12:59:35.776: INFO: Pod "downwardapi-volume-23309bf3-e733-4aea-ae4a-c23ce445f546": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.095164617s
STEP: Saw pod success
Feb 5 12:59:35.776: INFO: Pod "downwardapi-volume-23309bf3-e733-4aea-ae4a-c23ce445f546" satisfied condition "success or failure"
Feb 5 12:59:35.780: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-23309bf3-e733-4aea-ae4a-c23ce445f546 container client-container:
STEP: delete the pod
Feb 5 12:59:35.858: INFO: Waiting for pod downwardapi-volume-23309bf3-e733-4aea-ae4a-c23ce445f546 to disappear
Feb 5 12:59:35.884: INFO: Pod downwardapi-volume-23309bf3-e733-4aea-ae4a-c23ce445f546 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 5 12:59:35.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3379" for this suite.
Feb 5 12:59:42.027: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 5 12:59:42.148: INFO: namespace projected-3379 deletion completed in 6.257726428s

• [SLOW TEST:18.643 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 5 12:59:42.148: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test use defaults
Feb 5 12:59:42.353: INFO: Waiting up to 5m0s for pod "client-containers-d696c6b4-69b6-4849-a400-0baa3f5908eb" in namespace "containers-9501" to be "success or failure"
Feb 5 12:59:42.375: INFO: Pod "client-containers-d696c6b4-69b6-4849-a400-0baa3f5908eb": Phase="Pending", Reason="", readiness=false. Elapsed: 22.436131ms
Feb 5 12:59:44.383: INFO: Pod "client-containers-d696c6b4-69b6-4849-a400-0baa3f5908eb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029860292s
Feb 5 12:59:46.451: INFO: Pod "client-containers-d696c6b4-69b6-4849-a400-0baa3f5908eb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.098517858s
Feb 5 12:59:48.463: INFO: Pod "client-containers-d696c6b4-69b6-4849-a400-0baa3f5908eb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.110076101s
Feb 5 12:59:50.472: INFO: Pod "client-containers-d696c6b4-69b6-4849-a400-0baa3f5908eb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.119554066s
Feb 5 12:59:52.488: INFO: Pod "client-containers-d696c6b4-69b6-4849-a400-0baa3f5908eb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.134789254s
STEP: Saw pod success
Feb 5 12:59:52.488: INFO: Pod "client-containers-d696c6b4-69b6-4849-a400-0baa3f5908eb" satisfied condition "success or failure"
Feb 5 12:59:52.492: INFO: Trying to get logs from node iruya-node pod client-containers-d696c6b4-69b6-4849-a400-0baa3f5908eb container test-container:
STEP: delete the pod
Feb 5 12:59:52.570: INFO: Waiting for pod client-containers-d696c6b4-69b6-4849-a400-0baa3f5908eb to disappear
Feb 5 12:59:52.575: INFO: Pod client-containers-d696c6b4-69b6-4849-a400-0baa3f5908eb no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 5 12:59:52.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-9501" for this suite.
Feb 5 12:59:58.652: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 5 12:59:58.811: INFO: namespace containers-9501 deletion completed in 6.232099706s

• [SLOW TEST:16.663 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 5 12:59:58.812: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename
projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-map-96e8eed8-059e-4175-91ac-4dda39fc01b1 STEP: Creating a pod to test consume secrets Feb 5 12:59:58.950: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ac3e0d86-0f5c-4295-aebb-ab2de8367e81" in namespace "projected-5978" to be "success or failure" Feb 5 12:59:58.961: INFO: Pod "pod-projected-secrets-ac3e0d86-0f5c-4295-aebb-ab2de8367e81": Phase="Pending", Reason="", readiness=false. Elapsed: 11.251089ms Feb 5 13:00:00.970: INFO: Pod "pod-projected-secrets-ac3e0d86-0f5c-4295-aebb-ab2de8367e81": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019658678s Feb 5 13:00:02.980: INFO: Pod "pod-projected-secrets-ac3e0d86-0f5c-4295-aebb-ab2de8367e81": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029477277s Feb 5 13:00:04.994: INFO: Pod "pod-projected-secrets-ac3e0d86-0f5c-4295-aebb-ab2de8367e81": Phase="Pending", Reason="", readiness=false. Elapsed: 6.0436741s Feb 5 13:00:07.007: INFO: Pod "pod-projected-secrets-ac3e0d86-0f5c-4295-aebb-ab2de8367e81": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.057245614s STEP: Saw pod success Feb 5 13:00:07.008: INFO: Pod "pod-projected-secrets-ac3e0d86-0f5c-4295-aebb-ab2de8367e81" satisfied condition "success or failure" Feb 5 13:00:07.019: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-ac3e0d86-0f5c-4295-aebb-ab2de8367e81 container projected-secret-volume-test: STEP: delete the pod Feb 5 13:00:07.117: INFO: Waiting for pod pod-projected-secrets-ac3e0d86-0f5c-4295-aebb-ab2de8367e81 to disappear Feb 5 13:00:07.165: INFO: Pod pod-projected-secrets-ac3e0d86-0f5c-4295-aebb-ab2de8367e81 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 5 13:00:07.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5978" for this suite. Feb 5 13:00:13.207: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 5 13:00:13.307: INFO: namespace projected-5978 deletion completed in 6.135745983s • [SLOW TEST:14.495 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 5 13:00:13.308: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Feb 5 13:00:13.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-9247' Feb 5 13:00:13.586: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Feb 5 13:00:13.586: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created [AfterEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426 Feb 5 13:00:13.618: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-9247' Feb 5 13:00:13.780: INFO: stderr: "" Feb 5 13:00:13.780: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 5 13:00:13.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9247" for this suite. 
Feb 5 13:00:19.815: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 5 13:00:19.918: INFO: namespace kubectl-9247 deletion completed in 6.121219719s • [SLOW TEST:6.610 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 5 13:00:19.919: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Feb 5 13:00:31.376: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 5 13:00:31.402: INFO: Waiting up to 3m0s for all 
(but 0) nodes to be ready STEP: Destroying namespace "replicaset-829" for this suite. Feb 5 13:00:51.558: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 5 13:00:51.715: INFO: namespace replicaset-829 deletion completed in 20.269781856s • [SLOW TEST:31.797 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 5 13:00:51.716: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Feb 5 13:00:51.878: INFO: Waiting up to 5m0s for pod "downwardapi-volume-07650c8e-dd34-4202-bc85-1e1dbf7fdf46" in namespace "downward-api-8959" to be "success or failure" Feb 5 13:00:51.888: INFO: Pod "downwardapi-volume-07650c8e-dd34-4202-bc85-1e1dbf7fdf46": 
Phase="Pending", Reason="", readiness=false. Elapsed: 9.859987ms Feb 5 13:00:53.897: INFO: Pod "downwardapi-volume-07650c8e-dd34-4202-bc85-1e1dbf7fdf46": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019031616s Feb 5 13:00:55.916: INFO: Pod "downwardapi-volume-07650c8e-dd34-4202-bc85-1e1dbf7fdf46": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037899699s Feb 5 13:00:57.931: INFO: Pod "downwardapi-volume-07650c8e-dd34-4202-bc85-1e1dbf7fdf46": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052900306s Feb 5 13:00:59.981: INFO: Pod "downwardapi-volume-07650c8e-dd34-4202-bc85-1e1dbf7fdf46": Phase="Pending", Reason="", readiness=false. Elapsed: 8.103132973s Feb 5 13:01:01.988: INFO: Pod "downwardapi-volume-07650c8e-dd34-4202-bc85-1e1dbf7fdf46": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.110260977s STEP: Saw pod success Feb 5 13:01:01.988: INFO: Pod "downwardapi-volume-07650c8e-dd34-4202-bc85-1e1dbf7fdf46" satisfied condition "success or failure" Feb 5 13:01:02.007: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-07650c8e-dd34-4202-bc85-1e1dbf7fdf46 container client-container: STEP: delete the pod Feb 5 13:01:02.105: INFO: Waiting for pod downwardapi-volume-07650c8e-dd34-4202-bc85-1e1dbf7fdf46 to disappear Feb 5 13:01:02.129: INFO: Pod downwardapi-volume-07650c8e-dd34-4202-bc85-1e1dbf7fdf46 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 5 13:01:02.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8959" for this suite. 
Feb 5 13:01:08.224: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 5 13:01:08.319: INFO: namespace downward-api-8959 deletion completed in 6.151170225s • [SLOW TEST:16.603 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 5 13:01:08.320: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0205 13:01:19.235890 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Feb 5 13:01:19.235: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 5 13:01:19.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4123" for this suite. 
Feb 5 13:01:25.284: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 5 13:01:25.384: INFO: namespace gc-4123 deletion completed in 6.142438105s • [SLOW TEST:17.065 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 5 13:01:25.385: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Feb 5 13:01:25.441: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Feb 5 13:01:25.554: INFO: stderr: "" Feb 5 13:01:25.554: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T16:55:20Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.1\", 
GitCommit:\"4485c6f18cee9a5d3c3b4e523bd27972b1b53892\", GitTreeState:\"clean\", BuildDate:\"2019-07-18T09:09:21Z\", GoVersion:\"go1.12.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 5 13:01:25.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4062" for this suite. Feb 5 13:01:31.610: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 5 13:01:31.843: INFO: namespace kubectl-4062 deletion completed in 6.279775745s • [SLOW TEST:6.458 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 5 13:01:31.843: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Feb 5 13:01:44.820: INFO: Successfully updated pod "annotationupdatec893332b-34d5-4386-abc7-6626d04c6e69" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 5 13:01:46.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9206" for this suite. Feb 5 13:02:08.964: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 5 13:02:09.101: INFO: namespace downward-api-9206 deletion completed in 22.178366879s • [SLOW TEST:37.257 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 5 13:02:09.101: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Feb 5 13:02:09.198: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8e65081b-efe0-4df7-8558-3cb7f93ad38f" in namespace "projected-4943" to be "success or failure" Feb 5 13:02:09.210: INFO: Pod "downwardapi-volume-8e65081b-efe0-4df7-8558-3cb7f93ad38f": Phase="Pending", Reason="", readiness=false. Elapsed: 11.181384ms Feb 5 13:02:11.227: INFO: Pod "downwardapi-volume-8e65081b-efe0-4df7-8558-3cb7f93ad38f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028577583s Feb 5 13:02:13.238: INFO: Pod "downwardapi-volume-8e65081b-efe0-4df7-8558-3cb7f93ad38f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039880935s Feb 5 13:02:15.248: INFO: Pod "downwardapi-volume-8e65081b-efe0-4df7-8558-3cb7f93ad38f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049522904s Feb 5 13:02:17.255: INFO: Pod "downwardapi-volume-8e65081b-efe0-4df7-8558-3cb7f93ad38f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.056852664s Feb 5 13:02:19.263: INFO: Pod "downwardapi-volume-8e65081b-efe0-4df7-8558-3cb7f93ad38f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.064591201s STEP: Saw pod success Feb 5 13:02:19.263: INFO: Pod "downwardapi-volume-8e65081b-efe0-4df7-8558-3cb7f93ad38f" satisfied condition "success or failure" Feb 5 13:02:19.267: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-8e65081b-efe0-4df7-8558-3cb7f93ad38f container client-container: STEP: delete the pod Feb 5 13:02:19.348: INFO: Waiting for pod downwardapi-volume-8e65081b-efe0-4df7-8558-3cb7f93ad38f to disappear Feb 5 13:02:19.396: INFO: Pod downwardapi-volume-8e65081b-efe0-4df7-8558-3cb7f93ad38f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 5 13:02:19.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4943" for this suite. Feb 5 13:02:25.423: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 5 13:02:25.539: INFO: namespace projected-4943 deletion completed in 6.134723417s • [SLOW TEST:16.438 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 5 13:02:25.539: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: 
Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-785 STEP: creating a selector STEP: Creating the service pods in kubernetes Feb 5 13:02:25.675: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Feb 5 13:03:06.626: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:pod-network-test-785 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 5 13:03:06.626: INFO: >>> kubeConfig: /root/.kube/config I0205 13:03:06.696406 8 log.go:172] (0xc000269d90) (0xc0022d2f00) Create stream I0205 13:03:06.696465 8 log.go:172] (0xc000269d90) (0xc0022d2f00) Stream added, broadcasting: 1 I0205 13:03:06.704483 8 log.go:172] (0xc000269d90) Reply frame received for 1 I0205 13:03:06.704523 8 log.go:172] (0xc000269d90) (0xc000e98140) Create stream I0205 13:03:06.704528 8 log.go:172] (0xc000269d90) (0xc000e98140) Stream added, broadcasting: 3 I0205 13:03:06.705693 8 log.go:172] (0xc000269d90) Reply frame received for 3 I0205 13:03:06.705711 8 log.go:172] (0xc000269d90) (0xc000e981e0) Create stream I0205 13:03:06.705718 8 log.go:172] (0xc000269d90) (0xc000e981e0) Stream added, broadcasting: 5 I0205 13:03:06.707134 8 log.go:172] (0xc000269d90) Reply frame received for 5 I0205 13:03:07.846941 8 log.go:172] (0xc000269d90) Data frame received for 3 I0205 13:03:07.846983 8 log.go:172] (0xc000e98140) (3) Data frame handling I0205 13:03:07.847002 8 log.go:172] (0xc000e98140) (3) Data frame sent I0205 13:03:08.037285 8 log.go:172] (0xc000269d90) (0xc000e98140) Stream removed, broadcasting: 3 I0205 13:03:08.037830 8 log.go:172] (0xc000269d90) 
Data frame received for 1 I0205 13:03:08.038206 8 log.go:172] (0xc000269d90) (0xc000e981e0) Stream removed, broadcasting: 5 I0205 13:03:08.038395 8 log.go:172] (0xc0022d2f00) (1) Data frame handling I0205 13:03:08.038436 8 log.go:172] (0xc0022d2f00) (1) Data frame sent I0205 13:03:08.038451 8 log.go:172] (0xc000269d90) (0xc0022d2f00) Stream removed, broadcasting: 1 I0205 13:03:08.038467 8 log.go:172] (0xc000269d90) Go away received I0205 13:03:08.039416 8 log.go:172] (0xc000269d90) (0xc0022d2f00) Stream removed, broadcasting: 1 I0205 13:03:08.039484 8 log.go:172] (0xc000269d90) (0xc000e98140) Stream removed, broadcasting: 3 I0205 13:03:08.039516 8 log.go:172] (0xc000269d90) (0xc000e981e0) Stream removed, broadcasting: 5 Feb 5 13:03:08.039: INFO: Found all expected endpoints: [netserver-0] Feb 5 13:03:08.046: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.44.0.1 8081 | grep -v '^\s*$'] Namespace:pod-network-test-785 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 5 13:03:08.046: INFO: >>> kubeConfig: /root/.kube/config I0205 13:03:08.099250 8 log.go:172] (0xc00151c9a0) (0xc0022d3360) Create stream I0205 13:03:08.099297 8 log.go:172] (0xc00151c9a0) (0xc0022d3360) Stream added, broadcasting: 1 I0205 13:03:08.108921 8 log.go:172] (0xc00151c9a0) Reply frame received for 1 I0205 13:03:08.108946 8 log.go:172] (0xc00151c9a0) (0xc000e98500) Create stream I0205 13:03:08.108951 8 log.go:172] (0xc00151c9a0) (0xc000e98500) Stream added, broadcasting: 3 I0205 13:03:08.110263 8 log.go:172] (0xc00151c9a0) Reply frame received for 3 I0205 13:03:08.110293 8 log.go:172] (0xc00151c9a0) (0xc001fb4000) Create stream I0205 13:03:08.110305 8 log.go:172] (0xc00151c9a0) (0xc001fb4000) Stream added, broadcasting: 5 I0205 13:03:08.111377 8 log.go:172] (0xc00151c9a0) Reply frame received for 5 I0205 13:03:09.213593 8 log.go:172] (0xc00151c9a0) Data frame received for 3 I0205 
13:03:09.213638 8 log.go:172] (0xc000e98500) (3) Data frame handling I0205 13:03:09.213659 8 log.go:172] (0xc000e98500) (3) Data frame sent I0205 13:03:09.416985 8 log.go:172] (0xc00151c9a0) (0xc000e98500) Stream removed, broadcasting: 3 I0205 13:03:09.417082 8 log.go:172] (0xc00151c9a0) Data frame received for 1 I0205 13:03:09.417101 8 log.go:172] (0xc0022d3360) (1) Data frame handling I0205 13:03:09.417121 8 log.go:172] (0xc0022d3360) (1) Data frame sent I0205 13:03:09.417192 8 log.go:172] (0xc00151c9a0) (0xc0022d3360) Stream removed, broadcasting: 1 I0205 13:03:09.417338 8 log.go:172] (0xc00151c9a0) (0xc001fb4000) Stream removed, broadcasting: 5 I0205 13:03:09.417396 8 log.go:172] (0xc00151c9a0) (0xc0022d3360) Stream removed, broadcasting: 1 I0205 13:03:09.417416 8 log.go:172] (0xc00151c9a0) (0xc000e98500) Stream removed, broadcasting: 3 I0205 13:03:09.417429 8 log.go:172] (0xc00151c9a0) (0xc001fb4000) Stream removed, broadcasting: 5 I0205 13:03:09.417507 8 log.go:172] (0xc00151c9a0) Go away received Feb 5 13:03:09.418: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 5 13:03:09.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-785" for this suite. 
Feb 5 13:03:35.459: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 5 13:03:35.593: INFO: namespace pod-network-test-785 deletion completed in 26.164682015s • [SLOW TEST:70.054 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 5 13:03:35.593: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Feb 5 13:03:35.702: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6693' Feb 5 13:03:36.025: INFO: stderr: "" Feb 5 13:03:36.025: INFO: stdout: "replicationcontroller/redis-master created\n" Feb 5 13:03:36.025: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6693' Feb 5 13:03:36.413: INFO: stderr: "" Feb 5 13:03:36.413: INFO: stdout: "service/redis-master created\n" STEP: Waiting for Redis master to start. Feb 5 13:03:37.422: INFO: Selector matched 1 pods for map[app:redis] Feb 5 13:03:37.422: INFO: Found 0 / 1 Feb 5 13:03:38.426: INFO: Selector matched 1 pods for map[app:redis] Feb 5 13:03:38.426: INFO: Found 0 / 1 Feb 5 13:03:39.425: INFO: Selector matched 1 pods for map[app:redis] Feb 5 13:03:39.425: INFO: Found 0 / 1 Feb 5 13:03:40.426: INFO: Selector matched 1 pods for map[app:redis] Feb 5 13:03:40.426: INFO: Found 0 / 1 Feb 5 13:03:41.425: INFO: Selector matched 1 pods for map[app:redis] Feb 5 13:03:41.425: INFO: Found 0 / 1 Feb 5 13:03:42.448: INFO: Selector matched 1 pods for map[app:redis] Feb 5 13:03:42.448: INFO: Found 0 / 1 Feb 5 13:03:43.438: INFO: Selector matched 1 pods for map[app:redis] Feb 5 13:03:43.438: INFO: Found 0 / 1 Feb 5 13:03:44.424: INFO: Selector matched 1 pods for map[app:redis] Feb 5 13:03:44.424: INFO: Found 1 / 1 Feb 5 13:03:44.424: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Feb 5 13:03:44.428: INFO: Selector matched 1 pods for map[app:redis] Feb 5 13:03:44.428: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Feb 5 13:03:44.428: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-wjsp2 --namespace=kubectl-6693' Feb 5 13:03:44.577: INFO: stderr: "" Feb 5 13:03:44.577: INFO: stdout: "Name: redis-master-wjsp2\nNamespace: kubectl-6693\nPriority: 0\nNode: iruya-node/10.96.3.65\nStart Time: Wed, 05 Feb 2020 13:03:36 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.44.0.1\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: docker://c2f2340b4cf47a68c7ec4066c1a9da43f21a501bddc90a04290423763618dfd6\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Wed, 05 Feb 2020 13:03:42 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-hr9zj (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-hr9zj:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-hr9zj\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 8s default-scheduler Successfully assigned kubectl-6693/redis-master-wjsp2 to iruya-node\n Normal Pulled 4s kubelet, iruya-node Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 3s kubelet, iruya-node Created container redis-master\n Normal Started 2s kubelet, iruya-node Started container redis-master\n" Feb 5 13:03:44.578: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master 
--namespace=kubectl-6693' Feb 5 13:03:44.674: INFO: stderr: "" Feb 5 13:03:44.674: INFO: stdout: "Name: redis-master\nNamespace: kubectl-6693\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 8s replication-controller Created pod: redis-master-wjsp2\n" Feb 5 13:03:44.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-6693' Feb 5 13:03:44.795: INFO: stderr: "" Feb 5 13:03:44.795: INFO: stdout: "Name: redis-master\nNamespace: kubectl-6693\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.105.71.80\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.44.0.1:6379\nSession Affinity: None\nEvents: \n" Feb 5 13:03:44.799: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-node' Feb 5 13:03:44.915: INFO: stderr: "" Feb 5 13:03:44.915: INFO: stdout: "Name: iruya-node\nRoles: \nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=iruya-node\n kubernetes.io/os=linux\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 04 Aug 2019 09:01:39 +0000\nTaints: \nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n NetworkUnavailable False Sat, 12 Oct 2019 11:56:49 
+0000 Sat, 12 Oct 2019 11:56:49 +0000 WeaveIsUp Weave pod has set this\n MemoryPressure False Wed, 05 Feb 2020 13:03:33 +0000 Sun, 04 Aug 2019 09:01:39 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Wed, 05 Feb 2020 13:03:33 +0000 Sun, 04 Aug 2019 09:01:39 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Wed, 05 Feb 2020 13:03:33 +0000 Sun, 04 Aug 2019 09:01:39 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Wed, 05 Feb 2020 13:03:33 +0000 Sun, 04 Aug 2019 09:02:19 +0000 KubeletReady kubelet is posting ready status. AppArmor enabled\nAddresses:\n InternalIP: 10.96.3.65\n Hostname: iruya-node\nCapacity:\n cpu: 4\n ephemeral-storage: 20145724Ki\n hugepages-2Mi: 0\n memory: 4039076Ki\n pods: 110\nAllocatable:\n cpu: 4\n ephemeral-storage: 18566299208\n hugepages-2Mi: 0\n memory: 3936676Ki\n pods: 110\nSystem Info:\n Machine ID: f573dcf04d6f4a87856a35d266a2fa7a\n System UUID: F573DCF0-4D6F-4A87-856A-35D266A2FA7A\n Boot ID: 8baf4beb-8391-43e6-b17b-b1e184b5370a\n Kernel Version: 4.15.0-52-generic\n OS Image: Ubuntu 18.04.2 LTS\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: docker://18.9.7\n Kubelet Version: v1.15.1\n Kube-Proxy Version: v1.15.1\nPodCIDR: 10.96.1.0/24\nNon-terminated Pods: (3 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system kube-proxy-976zl 0 (0%) 0 (0%) 0 (0%) 0 (0%) 185d\n kube-system weave-net-rlp57 20m (0%) 0 (0%) 0 (0%) 0 (0%) 116d\n kubectl-6693 redis-master-wjsp2 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8s\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 20m (0%) 0 (0%)\n memory 0 (0%) 0 (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" Feb 5 13:03:44.916: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config describe namespace kubectl-6693' Feb 5 13:03:45.004: INFO: stderr: "" Feb 5 13:03:45.004: INFO: stdout: "Name: kubectl-6693\nLabels: e2e-framework=kubectl\n e2e-run=70ce5e23-5821-45a6-b3e0-73ae3990110b\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 5 13:03:45.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6693" for this suite. Feb 5 13:04:07.027: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 5 13:04:07.112: INFO: namespace kubectl-6693 deletion completed in 22.105752576s • [SLOW TEST:31.519 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 5 13:04:07.113: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-405f9047-c830-40cb-9b3b-77816ab08ad8
STEP: Creating a pod to test consume configMaps
Feb 5 13:04:07.232: INFO: Waiting up to 5m0s for pod "pod-configmaps-3662885b-92e6-4237-b3d7-38267f1eb492" in namespace "configmap-5394" to be "success or failure"
Feb 5 13:04:07.250: INFO: Pod "pod-configmaps-3662885b-92e6-4237-b3d7-38267f1eb492": Phase="Pending", Reason="", readiness=false. Elapsed: 17.079511ms
Feb 5 13:04:09.258: INFO: Pod "pod-configmaps-3662885b-92e6-4237-b3d7-38267f1eb492": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025669213s
Feb 5 13:04:11.267: INFO: Pod "pod-configmaps-3662885b-92e6-4237-b3d7-38267f1eb492": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034746077s
Feb 5 13:04:13.282: INFO: Pod "pod-configmaps-3662885b-92e6-4237-b3d7-38267f1eb492": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049828815s
Feb 5 13:04:15.290: INFO: Pod "pod-configmaps-3662885b-92e6-4237-b3d7-38267f1eb492": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.057005456s
STEP: Saw pod success
Feb 5 13:04:15.290: INFO: Pod "pod-configmaps-3662885b-92e6-4237-b3d7-38267f1eb492" satisfied condition "success or failure"
Feb 5 13:04:15.294: INFO: Trying to get logs from node iruya-node pod pod-configmaps-3662885b-92e6-4237-b3d7-38267f1eb492 container configmap-volume-test:
STEP: delete the pod
Feb 5 13:04:15.540: INFO: Waiting for pod pod-configmaps-3662885b-92e6-4237-b3d7-38267f1eb492 to disappear
Feb 5 13:04:15.554: INFO: Pod pod-configmaps-3662885b-92e6-4237-b3d7-38267f1eb492 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 5 13:04:15.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5394" for this suite.
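Aside: the "consumable in volume with mappings" case above creates roughly the pair of objects sketched below. The object names, namespace, and container name are taken from the log entries; the data key, the item mapping, the image, and the args are assumptions about how this conformance variant typically exercises the feature, not values recorded in this run.

```yaml
# Hypothetical reconstruction -- names come from the log; data keys,
# the key->path mapping, the image, and args are assumed.
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume-map-405f9047-c830-40cb-9b3b-77816ab08ad8
  namespace: configmap-5394
data:
  data-1: value-1                  # assumed key/value
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-3662885b-92e6-4237-b3d7-38267f1eb492
  namespace: configmap-5394
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # assumed image
    args: ["--file_content=/etc/configmap-volume/path/to/data-1"]  # assumed
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map-405f9047-c830-40cb-9b3b-77816ab08ad8
      items:                       # the "mappings": a key is surfaced at a new path
      - key: data-1
        path: path/to/data-1
```

The pod runs to completion ("Succeeded") and the test then reads its logs to verify the mapped file's content, which is why the log polls the pod phase until "success or failure" is satisfied.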
Feb 5 13:04:21.648: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 5 13:04:21.849: INFO: namespace configmap-5394 deletion completed in 6.284945223s
• [SLOW TEST:14.736 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
validates that NodeSelector is respected if not matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 5 13:04:21.850: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Feb 5 13:04:22.036: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 5 13:04:22.050: INFO: Waiting for terminating namespaces to be deleted...
Feb 5 13:04:22.058: INFO: Logging pods the kubelet thinks is on node iruya-node before test
Feb 5 13:04:22.138: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Feb 5 13:04:22.139: INFO: Container weave ready: true, restart count 0
Feb 5 13:04:22.139: INFO: Container weave-npc ready: true, restart count 0
Feb 5 13:04:22.139: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded)
Feb 5 13:04:22.139: INFO: Container kube-proxy ready: true, restart count 0
Feb 5 13:04:22.139: INFO: Logging pods the kubelet thinks is on node iruya-server-sfge57q7djm7 before test
Feb 5 13:04:22.153: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded)
Feb 5 13:04:22.153: INFO: Container etcd ready: true, restart count 0
Feb 5 13:04:22.153: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Feb 5 13:04:22.153: INFO: Container weave ready: true, restart count 0
Feb 5 13:04:22.153: INFO: Container weave-npc ready: true, restart count 0
Feb 5 13:04:22.153: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Feb 5 13:04:22.153: INFO: Container coredns ready: true, restart count 0
Feb 5 13:04:22.153: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded)
Feb 5 13:04:22.153: INFO: Container kube-controller-manager ready: true, restart count 20
Feb 5 13:04:22.153: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded)
Feb 5 13:04:22.153: INFO: Container kube-proxy ready: true, restart count 0
Feb 5 13:04:22.153: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded)
Feb 5 13:04:22.153: INFO: Container kube-apiserver ready: true, restart count 0
Feb 5 13:04:22.153: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded)
Feb 5 13:04:22.153: INFO: Container kube-scheduler ready: true, restart count 13
Feb 5 13:04:22.153: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Feb 5 13:04:22.153: INFO: Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: Type = [Warning], Name = [restricted-pod.15f083531cc1de55], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 5 13:04:23.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-8564" for this suite.
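Aside: the FailedScheduling event recorded for restricted-pod is produced by a pod whose nodeSelector matches no node's labels, so the scheduler reports "0/2 nodes are available". A minimal sketch of such a pod, using the pod name and namespace from the log (the label key/value and image are assumptions):

```yaml
# Hypothetical reconstruction -- only the pod name and namespace are from the log.
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod
  namespace: sched-pred-8564
spec:
  nodeSelector:
    nonexistent-label: nonexistent-value   # assumed key/value; matches neither node
  containers:
  - name: restricted-pod
    image: k8s.gcr.io/pause:3.1            # assumed image
```

The test passes when it observes the scheduler's Warning event rather than a successful binding.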
Feb 5 13:04:29.235: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 5 13:04:29.344: INFO: namespace sched-pred-8564 deletion completed in 6.132834235s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:7.494 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 5 13:04:29.344: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium Feb 5 13:04:29.471: INFO: Waiting up to 5m0s for pod "pod-a884340b-f13d-446a-9e98-813925075757" in namespace "emptydir-4119" to be "success or failure" Feb 5 13:04:29.548: INFO: Pod "pod-a884340b-f13d-446a-9e98-813925075757": Phase="Pending", Reason="", readiness=false. 
Elapsed: 76.448051ms Feb 5 13:04:31.559: INFO: Pod "pod-a884340b-f13d-446a-9e98-813925075757": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087624941s Feb 5 13:04:33.570: INFO: Pod "pod-a884340b-f13d-446a-9e98-813925075757": Phase="Pending", Reason="", readiness=false. Elapsed: 4.098907257s Feb 5 13:04:35.582: INFO: Pod "pod-a884340b-f13d-446a-9e98-813925075757": Phase="Pending", Reason="", readiness=false. Elapsed: 6.110321031s Feb 5 13:04:37.594: INFO: Pod "pod-a884340b-f13d-446a-9e98-813925075757": Phase="Pending", Reason="", readiness=false. Elapsed: 8.122355243s Feb 5 13:04:39.603: INFO: Pod "pod-a884340b-f13d-446a-9e98-813925075757": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.131611092s STEP: Saw pod success Feb 5 13:04:39.603: INFO: Pod "pod-a884340b-f13d-446a-9e98-813925075757" satisfied condition "success or failure" Feb 5 13:04:39.613: INFO: Trying to get logs from node iruya-node pod pod-a884340b-f13d-446a-9e98-813925075757 container test-container: STEP: delete the pod Feb 5 13:04:39.681: INFO: Waiting for pod pod-a884340b-f13d-446a-9e98-813925075757 to disappear Feb 5 13:04:39.716: INFO: Pod pod-a884340b-f13d-446a-9e98-813925075757 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 5 13:04:39.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4119" for this suite. 
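Aside: the "(non-root,0644,default)" emptydir case above encodes three parameters in its name: the container runs as a non-root user, the file is created with mode 0644, and the volume uses the default (node disk) medium. A sketch under those assumptions, with pod/container names and namespace from the log; the image, args, and UID are assumed:

```yaml
# Hypothetical reconstruction -- image, args, and runAsUser are assumed.
apiVersion: v1
kind: Pod
metadata:
  name: pod-a884340b-f13d-446a-9e98-813925075757
  namespace: emptydir-4119
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001              # "non-root" part of the test name (assumed UID)
  containers:
  - name: test-container
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # assumed
    args:                        # assumed flags: create a 0644 file, then report its perms
    - --new_file_0644=/test-volume/test-file
    - --file_perm=/test-volume/test-file
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                 # "default" medium: no medium field set
```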
Feb 5 13:04:45.750: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 5 13:04:46.003: INFO: namespace emptydir-4119 deletion completed in 6.279743538s • [SLOW TEST:16.659 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 5 13:04:46.003: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's args Feb 5 13:04:46.094: INFO: Waiting up to 5m0s for pod "var-expansion-840ac0d8-b39a-4157-92d8-8e2974e30e7a" in namespace "var-expansion-2085" to be "success or failure" Feb 5 13:04:46.156: INFO: Pod "var-expansion-840ac0d8-b39a-4157-92d8-8e2974e30e7a": Phase="Pending", Reason="", readiness=false. Elapsed: 62.155031ms Feb 5 13:04:48.162: INFO: Pod "var-expansion-840ac0d8-b39a-4157-92d8-8e2974e30e7a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.068664943s Feb 5 13:04:50.171: INFO: Pod "var-expansion-840ac0d8-b39a-4157-92d8-8e2974e30e7a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07751658s Feb 5 13:04:52.186: INFO: Pod "var-expansion-840ac0d8-b39a-4157-92d8-8e2974e30e7a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.092272314s Feb 5 13:04:54.198: INFO: Pod "var-expansion-840ac0d8-b39a-4157-92d8-8e2974e30e7a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.104012482s STEP: Saw pod success Feb 5 13:04:54.198: INFO: Pod "var-expansion-840ac0d8-b39a-4157-92d8-8e2974e30e7a" satisfied condition "success or failure" Feb 5 13:04:54.201: INFO: Trying to get logs from node iruya-node pod var-expansion-840ac0d8-b39a-4157-92d8-8e2974e30e7a container dapi-container: STEP: delete the pod Feb 5 13:04:54.296: INFO: Waiting for pod var-expansion-840ac0d8-b39a-4157-92d8-8e2974e30e7a to disappear Feb 5 13:04:54.318: INFO: Pod var-expansion-840ac0d8-b39a-4157-92d8-8e2974e30e7a no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 5 13:04:54.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-2085" for this suite. 
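Aside: the variable-expansion case above exercises the kubelet's $(VAR) substitution in a container's args. The pod and container names below come from the log; the env var name/value, image, and command are assumptions:

```yaml
# Hypothetical reconstruction -- env var, image, and command are assumed.
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-840ac0d8-b39a-4157-92d8-8e2974e30e7a
  namespace: var-expansion-2085
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox               # assumed image
    env:
    - name: TEST_VAR
      value: test-value          # assumed value
    command: ["sh", "-c"]
    args: ["echo $(TEST_VAR)"]   # $(TEST_VAR) is expanded by the kubelet, not the shell
```

The test then reads the container's logs to confirm the expanded value appeared in the output.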
Feb 5 13:05:00.430: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 5 13:05:00.627: INFO: namespace var-expansion-2085 deletion completed in 6.263660824s • [SLOW TEST:14.623 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 5 13:05:00.627: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-19fa7734-b903-4881-8e50-9c167701cbf4 STEP: Creating a pod to test consume configMaps Feb 5 13:05:00.947: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2245a35b-682e-4234-bf89-d0158823d62f" in namespace "projected-3583" to be "success or failure" Feb 5 13:05:00.966: INFO: Pod "pod-projected-configmaps-2245a35b-682e-4234-bf89-d0158823d62f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 18.982232ms Feb 5 13:05:03.000: INFO: Pod "pod-projected-configmaps-2245a35b-682e-4234-bf89-d0158823d62f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052761333s Feb 5 13:05:05.006: INFO: Pod "pod-projected-configmaps-2245a35b-682e-4234-bf89-d0158823d62f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058400134s Feb 5 13:05:07.011: INFO: Pod "pod-projected-configmaps-2245a35b-682e-4234-bf89-d0158823d62f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063934312s Feb 5 13:05:09.023: INFO: Pod "pod-projected-configmaps-2245a35b-682e-4234-bf89-d0158823d62f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.075652666s Feb 5 13:05:11.038: INFO: Pod "pod-projected-configmaps-2245a35b-682e-4234-bf89-d0158823d62f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.09037139s STEP: Saw pod success Feb 5 13:05:11.038: INFO: Pod "pod-projected-configmaps-2245a35b-682e-4234-bf89-d0158823d62f" satisfied condition "success or failure" Feb 5 13:05:11.139: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-2245a35b-682e-4234-bf89-d0158823d62f container projected-configmap-volume-test: STEP: delete the pod Feb 5 13:05:11.192: INFO: Waiting for pod pod-projected-configmaps-2245a35b-682e-4234-bf89-d0158823d62f to disappear Feb 5 13:05:11.196: INFO: Pod pod-projected-configmaps-2245a35b-682e-4234-bf89-d0158823d62f no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 5 13:05:11.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3583" for this suite. 
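Aside: the projected-configMap case above checks that a projected volume honors defaultMode on the files it writes. The configMap, pod, container, and namespace names below are from the log; the mode value, image, and args are assumptions:

```yaml
# Hypothetical reconstruction -- defaultMode value, image, and args are assumed.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-2245a35b-682e-4234-bf89-d0158823d62f
  namespace: projected-3583
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # assumed
    args: ["--file_mode=/etc/projected-configmap-volume/data-1"]  # assumed
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      defaultMode: 0400          # assumed value; the test only asserts defaultMode is honored
      sources:
      - configMap:
          name: projected-configmap-test-volume-19fa7734-b903-4881-8e50-9c167701cbf4
```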
Feb 5 13:05:17.230: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 5 13:05:17.325: INFO: namespace projected-3583 deletion completed in 6.123964867s • [SLOW TEST:16.697 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 5 13:05:17.325: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Feb 5 13:05:25.627: INFO: Waiting up to 5m0s for pod "client-envvars-b3ccf5e3-cc64-46cd-a48d-ed3a6ddb2dd8" in namespace "pods-1606" to be "success or failure" Feb 5 13:05:25.657: INFO: Pod "client-envvars-b3ccf5e3-cc64-46cd-a48d-ed3a6ddb2dd8": Phase="Pending", Reason="", readiness=false. Elapsed: 29.564307ms Feb 5 13:05:27.665: INFO: Pod "client-envvars-b3ccf5e3-cc64-46cd-a48d-ed3a6ddb2dd8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.037339677s Feb 5 13:05:29.672: INFO: Pod "client-envvars-b3ccf5e3-cc64-46cd-a48d-ed3a6ddb2dd8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044477456s Feb 5 13:05:31.679: INFO: Pod "client-envvars-b3ccf5e3-cc64-46cd-a48d-ed3a6ddb2dd8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051668222s Feb 5 13:05:33.703: INFO: Pod "client-envvars-b3ccf5e3-cc64-46cd-a48d-ed3a6ddb2dd8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.075760646s STEP: Saw pod success Feb 5 13:05:33.704: INFO: Pod "client-envvars-b3ccf5e3-cc64-46cd-a48d-ed3a6ddb2dd8" satisfied condition "success or failure" Feb 5 13:05:33.711: INFO: Trying to get logs from node iruya-node pod client-envvars-b3ccf5e3-cc64-46cd-a48d-ed3a6ddb2dd8 container env3cont: STEP: delete the pod Feb 5 13:05:34.035: INFO: Waiting for pod client-envvars-b3ccf5e3-cc64-46cd-a48d-ed3a6ddb2dd8 to disappear Feb 5 13:05:34.050: INFO: Pod client-envvars-b3ccf5e3-cc64-46cd-a48d-ed3a6ddb2dd8 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 5 13:05:34.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1606" for this suite. 
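Aside: the "environment variables for services" case above relies on the kubelet injecting <SERVICE>_SERVICE_HOST / <SERVICE>_SERVICE_PORT variables into pods created after a Service exists (which is why a server pod and Service are created first, before the client pod polled in the log). A sketch of the two objects; only the client pod and container names come from the log, and the Service name, selector, ports, image, and command are assumptions:

```yaml
# Hypothetical reconstruction -- service name/selector/ports, image, and command are assumed.
apiVersion: v1
kind: Service
metadata:
  name: fooservice               # assumed name; it determines the env var prefix
  namespace: pods-1606
spec:
  selector:
    name: foo                    # assumed; must match the server pod's labels
  ports:
  - port: 8765
    targetPort: 8080
---
apiVersion: v1
kind: Pod
metadata:
  name: client-envvars-b3ccf5e3-cc64-46cd-a48d-ed3a6ddb2dd8
  namespace: pods-1606
spec:
  restartPolicy: Never
  containers:
  - name: env3cont
    image: busybox               # assumed
    command: ["sh", "-c", "env"] # output should include FOOSERVICE_SERVICE_HOST/PORT
```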
Feb 5 13:06:20.152: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 5 13:06:20.315: INFO: namespace pods-1606 deletion completed in 46.24738037s • [SLOW TEST:62.989 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 5 13:06:20.315: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Feb 5 13:06:20.429: INFO: Waiting up to 5m0s for pod "downwardapi-volume-db82351c-1061-4e05-abe8-7dde6afc7f2c" in namespace "downward-api-1523" to be "success or failure" Feb 5 13:06:20.457: INFO: Pod "downwardapi-volume-db82351c-1061-4e05-abe8-7dde6afc7f2c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 28.13461ms Feb 5 13:06:22.468: INFO: Pod "downwardapi-volume-db82351c-1061-4e05-abe8-7dde6afc7f2c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038767793s Feb 5 13:06:24.478: INFO: Pod "downwardapi-volume-db82351c-1061-4e05-abe8-7dde6afc7f2c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049222206s Feb 5 13:06:26.499: INFO: Pod "downwardapi-volume-db82351c-1061-4e05-abe8-7dde6afc7f2c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.070223157s Feb 5 13:06:28.506: INFO: Pod "downwardapi-volume-db82351c-1061-4e05-abe8-7dde6afc7f2c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.077225122s Feb 5 13:06:30.518: INFO: Pod "downwardapi-volume-db82351c-1061-4e05-abe8-7dde6afc7f2c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.088710426s STEP: Saw pod success Feb 5 13:06:30.518: INFO: Pod "downwardapi-volume-db82351c-1061-4e05-abe8-7dde6afc7f2c" satisfied condition "success or failure" Feb 5 13:06:30.527: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-db82351c-1061-4e05-abe8-7dde6afc7f2c container client-container: STEP: delete the pod Feb 5 13:06:30.861: INFO: Waiting for pod downwardapi-volume-db82351c-1061-4e05-abe8-7dde6afc7f2c to disappear Feb 5 13:06:30.878: INFO: Pod downwardapi-volume-db82351c-1061-4e05-abe8-7dde6afc7f2c no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 5 13:06:30.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1523" for this suite. 
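Aside: the downward-API case above ("set mode on item file") checks that a per-item mode overrides the volume's defaultMode for that file. Pod, container, and namespace names below come from the log; the field, path, mode value, image, and args are assumptions:

```yaml
# Hypothetical reconstruction -- item path/mode, fieldRef, image, and args are assumed.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-db82351c-1061-4e05-abe8-7dde6afc7f2c
  namespace: downward-api-1523
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # assumed
    args: ["--file_mode=/etc/podinfo/podname"]               # assumed
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        mode: 0400               # the per-item mode under test (assumed value)
        fieldRef:
          fieldPath: metadata.name
```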
Feb 5 13:06:36.917: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 5 13:06:37.028: INFO: namespace downward-api-1523 deletion completed in 6.138770912s • [SLOW TEST:16.714 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 5 13:06:37.029: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-xwt2 STEP: Creating a pod to test atomic-volume-subpath Feb 5 13:06:37.181: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-xwt2" in namespace "subpath-5072" to be "success or failure" Feb 5 13:06:37.186: INFO: Pod "pod-subpath-test-configmap-xwt2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.483466ms Feb 5 13:06:39.198: INFO: Pod "pod-subpath-test-configmap-xwt2": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.016763388s Feb 5 13:06:41.207: INFO: Pod "pod-subpath-test-configmap-xwt2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026403043s Feb 5 13:06:43.222: INFO: Pod "pod-subpath-test-configmap-xwt2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040567074s Feb 5 13:06:45.229: INFO: Pod "pod-subpath-test-configmap-xwt2": Phase="Running", Reason="", readiness=true. Elapsed: 8.047837327s Feb 5 13:06:47.237: INFO: Pod "pod-subpath-test-configmap-xwt2": Phase="Running", Reason="", readiness=true. Elapsed: 10.055749425s Feb 5 13:06:49.245: INFO: Pod "pod-subpath-test-configmap-xwt2": Phase="Running", Reason="", readiness=true. Elapsed: 12.063554988s Feb 5 13:06:51.255: INFO: Pod "pod-subpath-test-configmap-xwt2": Phase="Running", Reason="", readiness=true. Elapsed: 14.074354876s Feb 5 13:06:53.264: INFO: Pod "pod-subpath-test-configmap-xwt2": Phase="Running", Reason="", readiness=true. Elapsed: 16.082480765s Feb 5 13:06:55.269: INFO: Pod "pod-subpath-test-configmap-xwt2": Phase="Running", Reason="", readiness=true. Elapsed: 18.088404967s Feb 5 13:06:57.278: INFO: Pod "pod-subpath-test-configmap-xwt2": Phase="Running", Reason="", readiness=true. Elapsed: 20.097433099s Feb 5 13:06:59.286: INFO: Pod "pod-subpath-test-configmap-xwt2": Phase="Running", Reason="", readiness=true. Elapsed: 22.104757157s Feb 5 13:07:01.295: INFO: Pod "pod-subpath-test-configmap-xwt2": Phase="Running", Reason="", readiness=true. Elapsed: 24.113588391s Feb 5 13:07:03.310: INFO: Pod "pod-subpath-test-configmap-xwt2": Phase="Running", Reason="", readiness=true. Elapsed: 26.128866268s Feb 5 13:07:05.319: INFO: Pod "pod-subpath-test-configmap-xwt2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 28.137701081s STEP: Saw pod success Feb 5 13:07:05.319: INFO: Pod "pod-subpath-test-configmap-xwt2" satisfied condition "success or failure" Feb 5 13:07:05.323: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-xwt2 container test-container-subpath-configmap-xwt2: STEP: delete the pod Feb 5 13:07:05.375: INFO: Waiting for pod pod-subpath-test-configmap-xwt2 to disappear Feb 5 13:07:05.390: INFO: Pod pod-subpath-test-configmap-xwt2 no longer exists STEP: Deleting pod pod-subpath-test-configmap-xwt2 Feb 5 13:07:05.390: INFO: Deleting pod "pod-subpath-test-configmap-xwt2" in namespace "subpath-5072" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 5 13:07:05.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5072" for this suite. Feb 5 13:07:11.519: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 5 13:07:11.646: INFO: namespace subpath-5072 deletion completed in 6.250908831s • [SLOW TEST:34.617 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 5 13:07:11.647: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 5 13:07:11.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5225" for this suite. 
Feb 5 13:07:17.909: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 5 13:07:18.018: INFO: namespace kubelet-test-5225 deletion completed in 6.188306156s • [SLOW TEST:6.371 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 5 13:07:18.018: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Feb 5 13:07:26.642: INFO: Successfully updated pod "pod-update-7ba7dc0b-2d8e-440d-b53d-ace1130f2431" STEP: verifying the updated pod is in kubernetes Feb 5 13:07:26.779: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 5 13:07:26.779: INFO: Waiting up to 3m0s for all 
(but 0) nodes to be ready STEP: Destroying namespace "pods-1021" for this suite. Feb 5 13:07:48.845: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 5 13:07:48.986: INFO: namespace pods-1021 deletion completed in 22.20187823s • [SLOW TEST:30.968 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 5 13:07:48.986: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name cm-test-opt-del-48718036-6126-430c-a822-84a9acc199ad STEP: Creating configMap with name cm-test-opt-upd-badd57a9-8b4a-4029-9c32-91827f0a43c1 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-48718036-6126-430c-a822-84a9acc199ad STEP: Updating configmap cm-test-opt-upd-badd57a9-8b4a-4029-9c32-91827f0a43c1 STEP: Creating configMap with name cm-test-opt-create-eb78a80e-e9a4-4c07-916e-c6e10f80ad56 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 5 13:09:11.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7704" for this suite. Feb 5 13:09:35.265: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 5 13:09:35.365: INFO: namespace projected-7704 deletion completed in 24.13953563s • [SLOW TEST:106.379 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 5 13:09:35.365: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Feb 5 13:09:35.562: INFO: Create a RollingUpdate DaemonSet Feb 5 13:09:35.568: INFO: Check that daemon pods launch on every node of the cluster Feb 5 13:09:35.590: INFO: Number of nodes with available pods: 0 Feb 5 13:09:35.590: INFO: Node iruya-node is running more than one 
daemon pod Feb 5 13:09:36.609: INFO: Number of nodes with available pods: 0 Feb 5 13:09:36.609: INFO: Node iruya-node is running more than one daemon pod Feb 5 13:09:37.604: INFO: Number of nodes with available pods: 0 Feb 5 13:09:37.604: INFO: Node iruya-node is running more than one daemon pod Feb 5 13:09:38.648: INFO: Number of nodes with available pods: 0 Feb 5 13:09:38.648: INFO: Node iruya-node is running more than one daemon pod Feb 5 13:09:39.618: INFO: Number of nodes with available pods: 0 Feb 5 13:09:39.618: INFO: Node iruya-node is running more than one daemon pod Feb 5 13:09:40.611: INFO: Number of nodes with available pods: 0 Feb 5 13:09:40.611: INFO: Node iruya-node is running more than one daemon pod Feb 5 13:09:41.601: INFO: Number of nodes with available pods: 0 Feb 5 13:09:41.601: INFO: Node iruya-node is running more than one daemon pod Feb 5 13:09:42.647: INFO: Number of nodes with available pods: 0 Feb 5 13:09:42.647: INFO: Node iruya-node is running more than one daemon pod Feb 5 13:09:43.603: INFO: Number of nodes with available pods: 0 Feb 5 13:09:43.603: INFO: Node iruya-node is running more than one daemon pod Feb 5 13:09:45.958: INFO: Number of nodes with available pods: 0 Feb 5 13:09:45.958: INFO: Node iruya-node is running more than one daemon pod Feb 5 13:09:47.658: INFO: Number of nodes with available pods: 0 Feb 5 13:09:47.658: INFO: Node iruya-node is running more than one daemon pod Feb 5 13:09:48.616: INFO: Number of nodes with available pods: 1 Feb 5 13:09:48.616: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Feb 5 13:09:49.613: INFO: Number of nodes with available pods: 2 Feb 5 13:09:49.613: INFO: Number of running nodes: 2, number of available pods: 2 Feb 5 13:09:49.613: INFO: Update the DaemonSet to trigger a rollout Feb 5 13:09:49.623: INFO: Updating DaemonSet daemon-set Feb 5 13:09:56.688: INFO: Roll back the DaemonSet before rollout is complete Feb 5 13:09:56.699: INFO: Updating DaemonSet 
daemon-set Feb 5 13:09:56.699: INFO: Make sure DaemonSet rollback is complete Feb 5 13:09:56.730: INFO: Wrong image for pod: daemon-set-fdn6w. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Feb 5 13:09:56.730: INFO: Pod daemon-set-fdn6w is not available Feb 5 13:09:57.807: INFO: Wrong image for pod: daemon-set-fdn6w. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Feb 5 13:09:57.807: INFO: Pod daemon-set-fdn6w is not available Feb 5 13:09:58.808: INFO: Wrong image for pod: daemon-set-fdn6w. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Feb 5 13:09:58.809: INFO: Pod daemon-set-fdn6w is not available Feb 5 13:09:59.811: INFO: Wrong image for pod: daemon-set-fdn6w. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Feb 5 13:09:59.811: INFO: Pod daemon-set-fdn6w is not available Feb 5 13:10:00.807: INFO: Wrong image for pod: daemon-set-fdn6w. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Feb 5 13:10:00.807: INFO: Pod daemon-set-fdn6w is not available Feb 5 13:10:01.809: INFO: Wrong image for pod: daemon-set-fdn6w. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Feb 5 13:10:01.809: INFO: Pod daemon-set-fdn6w is not available Feb 5 13:10:02.810: INFO: Wrong image for pod: daemon-set-fdn6w. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Feb 5 13:10:02.810: INFO: Pod daemon-set-fdn6w is not available Feb 5 13:10:03.813: INFO: Wrong image for pod: daemon-set-fdn6w. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Feb 5 13:10:03.813: INFO: Pod daemon-set-fdn6w is not available Feb 5 13:10:04.815: INFO: Wrong image for pod: daemon-set-fdn6w. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Feb 5 13:10:04.815: INFO: Pod daemon-set-fdn6w is not available Feb 5 13:10:05.815: INFO: Wrong image for pod: daemon-set-fdn6w. 
Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Feb 5 13:10:05.815: INFO: Pod daemon-set-fdn6w is not available Feb 5 13:10:06.809: INFO: Pod daemon-set-8chrs is not available [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9981, will wait for the garbage collector to delete the pods Feb 5 13:10:06.920: INFO: Deleting DaemonSet.extensions daemon-set took: 33.550419ms Feb 5 13:10:07.220: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.528412ms Feb 5 13:10:17.943: INFO: Number of nodes with available pods: 0 Feb 5 13:10:17.943: INFO: Number of running nodes: 0, number of available pods: 0 Feb 5 13:10:17.952: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9981/daemonsets","resourceVersion":"23191038"},"items":null} Feb 5 13:10:17.958: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9981/pods","resourceVersion":"23191038"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 5 13:10:17.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-9981" for this suite. 
Feb 5 13:10:25.995: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 5 13:10:26.096: INFO: namespace daemonsets-9981 deletion completed in 8.121353879s • [SLOW TEST:50.731 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 5 13:10:26.097: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name secret-emptykey-test-9948dd32-f550-4d0f-88a2-2fcf5cdf3950 [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 5 13:10:26.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5064" for this suite. 
Feb 5 13:10:32.391: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 5 13:10:32.476: INFO: namespace secrets-5064 deletion completed in 6.116778127s • [SLOW TEST:6.379 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 5 13:10:32.477: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 5 13:10:38.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-8119" for this suite. Feb 5 13:10:44.983: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 5 13:10:45.106: INFO: namespace namespaces-8119 deletion completed in 6.144769258s STEP: Destroying namespace "nsdeletetest-4947" for this suite. Feb 5 13:10:45.109: INFO: Namespace nsdeletetest-4947 was already deleted STEP: Destroying namespace "nsdeletetest-5745" for this suite. Feb 5 13:10:51.203: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 5 13:10:51.376: INFO: namespace nsdeletetest-5745 deletion completed in 6.266443361s • [SLOW TEST:18.898 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 5 13:10:51.376: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service 
account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Feb 5 13:10:51.575: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f1bf6102-b95b-4d9c-bb09-93463b4d8b51" in namespace "projected-6126" to be "success or failure" Feb 5 13:10:51.591: INFO: Pod "downwardapi-volume-f1bf6102-b95b-4d9c-bb09-93463b4d8b51": Phase="Pending", Reason="", readiness=false. Elapsed: 15.981518ms Feb 5 13:10:53.600: INFO: Pod "downwardapi-volume-f1bf6102-b95b-4d9c-bb09-93463b4d8b51": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024721819s Feb 5 13:10:55.611: INFO: Pod "downwardapi-volume-f1bf6102-b95b-4d9c-bb09-93463b4d8b51": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035620935s Feb 5 13:10:57.620: INFO: Pod "downwardapi-volume-f1bf6102-b95b-4d9c-bb09-93463b4d8b51": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045087218s Feb 5 13:10:59.787: INFO: Pod "downwardapi-volume-f1bf6102-b95b-4d9c-bb09-93463b4d8b51": Phase="Pending", Reason="", readiness=false. Elapsed: 8.211627177s Feb 5 13:11:01.803: INFO: Pod "downwardapi-volume-f1bf6102-b95b-4d9c-bb09-93463b4d8b51": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.227170604s STEP: Saw pod success Feb 5 13:11:01.803: INFO: Pod "downwardapi-volume-f1bf6102-b95b-4d9c-bb09-93463b4d8b51" satisfied condition "success or failure" Feb 5 13:11:01.815: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-f1bf6102-b95b-4d9c-bb09-93463b4d8b51 container client-container: STEP: delete the pod Feb 5 13:11:01.967: INFO: Waiting for pod downwardapi-volume-f1bf6102-b95b-4d9c-bb09-93463b4d8b51 to disappear Feb 5 13:11:02.004: INFO: Pod downwardapi-volume-f1bf6102-b95b-4d9c-bb09-93463b4d8b51 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 5 13:11:02.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6126" for this suite. Feb 5 13:11:08.084: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 5 13:11:08.197: INFO: namespace projected-6126 deletion completed in 6.178047827s • [SLOW TEST:16.822 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 5 13:11:08.198: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace 
api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-af2e7288-61ea-41f1-bf33-4f00a08328a3 STEP: Creating a pod to test consume configMaps Feb 5 13:11:08.439: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d684cbcb-bec3-4014-a14a-b870ad52abcd" in namespace "projected-3149" to be "success or failure" Feb 5 13:11:08.522: INFO: Pod "pod-projected-configmaps-d684cbcb-bec3-4014-a14a-b870ad52abcd": Phase="Pending", Reason="", readiness=false. Elapsed: 83.441799ms Feb 5 13:11:10.538: INFO: Pod "pod-projected-configmaps-d684cbcb-bec3-4014-a14a-b870ad52abcd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.09875883s Feb 5 13:11:12.559: INFO: Pod "pod-projected-configmaps-d684cbcb-bec3-4014-a14a-b870ad52abcd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.119963012s Feb 5 13:11:14.573: INFO: Pod "pod-projected-configmaps-d684cbcb-bec3-4014-a14a-b870ad52abcd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.133538535s Feb 5 13:11:16.584: INFO: Pod "pod-projected-configmaps-d684cbcb-bec3-4014-a14a-b870ad52abcd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.144750672s STEP: Saw pod success Feb 5 13:11:16.584: INFO: Pod "pod-projected-configmaps-d684cbcb-bec3-4014-a14a-b870ad52abcd" satisfied condition "success or failure" Feb 5 13:11:16.590: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-d684cbcb-bec3-4014-a14a-b870ad52abcd container projected-configmap-volume-test: STEP: delete the pod Feb 5 13:11:16.644: INFO: Waiting for pod pod-projected-configmaps-d684cbcb-bec3-4014-a14a-b870ad52abcd to disappear Feb 5 13:11:16.652: INFO: Pod pod-projected-configmaps-d684cbcb-bec3-4014-a14a-b870ad52abcd no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 5 13:11:16.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3149" for this suite. Feb 5 13:11:22.686: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 5 13:11:22.778: INFO: namespace projected-3149 deletion completed in 6.121572063s • [SLOW TEST:14.581 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 5 13:11:22.779: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api 
object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-1c7c42ff-9f01-44c1-9395-9570357622f9 STEP: Creating a pod to test consume configMaps Feb 5 13:11:22.859: INFO: Waiting up to 5m0s for pod "pod-configmaps-9ecd29d5-466a-4f05-9ac1-2be2f1820c10" in namespace "configmap-8102" to be "success or failure" Feb 5 13:11:22.913: INFO: Pod "pod-configmaps-9ecd29d5-466a-4f05-9ac1-2be2f1820c10": Phase="Pending", Reason="", readiness=false. Elapsed: 53.024454ms Feb 5 13:11:24.923: INFO: Pod "pod-configmaps-9ecd29d5-466a-4f05-9ac1-2be2f1820c10": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063251536s Feb 5 13:11:26.930: INFO: Pod "pod-configmaps-9ecd29d5-466a-4f05-9ac1-2be2f1820c10": Phase="Pending", Reason="", readiness=false. Elapsed: 4.070573149s Feb 5 13:11:28.944: INFO: Pod "pod-configmaps-9ecd29d5-466a-4f05-9ac1-2be2f1820c10": Phase="Pending", Reason="", readiness=false. Elapsed: 6.084174031s Feb 5 13:11:30.950: INFO: Pod "pod-configmaps-9ecd29d5-466a-4f05-9ac1-2be2f1820c10": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.090905815s STEP: Saw pod success Feb 5 13:11:30.951: INFO: Pod "pod-configmaps-9ecd29d5-466a-4f05-9ac1-2be2f1820c10" satisfied condition "success or failure" Feb 5 13:11:30.953: INFO: Trying to get logs from node iruya-node pod pod-configmaps-9ecd29d5-466a-4f05-9ac1-2be2f1820c10 container configmap-volume-test: STEP: delete the pod Feb 5 13:11:31.054: INFO: Waiting for pod pod-configmaps-9ecd29d5-466a-4f05-9ac1-2be2f1820c10 to disappear Feb 5 13:11:31.075: INFO: Pod pod-configmaps-9ecd29d5-466a-4f05-9ac1-2be2f1820c10 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 5 13:11:31.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8102" for this suite. Feb 5 13:11:37.109: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 5 13:11:37.195: INFO: namespace configmap-8102 deletion completed in 6.113951133s • [SLOW TEST:14.416 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 5 13:11:37.195: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] 
Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-downwardapi-q44j STEP: Creating a pod to test atomic-volume-subpath Feb 5 13:11:37.398: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-q44j" in namespace "subpath-742" to be "success or failure" Feb 5 13:11:37.410: INFO: Pod "pod-subpath-test-downwardapi-q44j": Phase="Pending", Reason="", readiness=false. Elapsed: 11.89451ms Feb 5 13:11:39.421: INFO: Pod "pod-subpath-test-downwardapi-q44j": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023482052s Feb 5 13:11:41.441: INFO: Pod "pod-subpath-test-downwardapi-q44j": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04304523s Feb 5 13:11:43.449: INFO: Pod "pod-subpath-test-downwardapi-q44j": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051094006s Feb 5 13:11:45.459: INFO: Pod "pod-subpath-test-downwardapi-q44j": Phase="Pending", Reason="", readiness=false. Elapsed: 8.061166349s Feb 5 13:11:47.473: INFO: Pod "pod-subpath-test-downwardapi-q44j": Phase="Running", Reason="", readiness=true. Elapsed: 10.075263658s Feb 5 13:11:49.490: INFO: Pod "pod-subpath-test-downwardapi-q44j": Phase="Running", Reason="", readiness=true. Elapsed: 12.0919004s Feb 5 13:11:51.504: INFO: Pod "pod-subpath-test-downwardapi-q44j": Phase="Running", Reason="", readiness=true. Elapsed: 14.10591769s Feb 5 13:11:53.511: INFO: Pod "pod-subpath-test-downwardapi-q44j": Phase="Running", Reason="", readiness=true. Elapsed: 16.113033759s Feb 5 13:11:55.517: INFO: Pod "pod-subpath-test-downwardapi-q44j": Phase="Running", Reason="", readiness=true. Elapsed: 18.11915572s Feb 5 13:11:57.526: INFO: Pod "pod-subpath-test-downwardapi-q44j": Phase="Running", Reason="", readiness=true. 
Elapsed: 20.127964444s Feb 5 13:11:59.535: INFO: Pod "pod-subpath-test-downwardapi-q44j": Phase="Running", Reason="", readiness=true. Elapsed: 22.137036784s Feb 5 13:12:01.544: INFO: Pod "pod-subpath-test-downwardapi-q44j": Phase="Running", Reason="", readiness=true. Elapsed: 24.146504734s Feb 5 13:12:03.554: INFO: Pod "pod-subpath-test-downwardapi-q44j": Phase="Running", Reason="", readiness=true. Elapsed: 26.156044753s Feb 5 13:12:05.559: INFO: Pod "pod-subpath-test-downwardapi-q44j": Phase="Running", Reason="", readiness=true. Elapsed: 28.161252197s Feb 5 13:12:07.566: INFO: Pod "pod-subpath-test-downwardapi-q44j": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.168458544s STEP: Saw pod success Feb 5 13:12:07.566: INFO: Pod "pod-subpath-test-downwardapi-q44j" satisfied condition "success or failure" Feb 5 13:12:07.568: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-downwardapi-q44j container test-container-subpath-downwardapi-q44j: STEP: delete the pod Feb 5 13:12:07.651: INFO: Waiting for pod pod-subpath-test-downwardapi-q44j to disappear Feb 5 13:12:07.675: INFO: Pod pod-subpath-test-downwardapi-q44j no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-q44j Feb 5 13:12:07.675: INFO: Deleting pod "pod-subpath-test-downwardapi-q44j" in namespace "subpath-742" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 5 13:12:07.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-742" for this suite. 
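For reference, the pod this subpath test exercises mounts a downwardAPI volume through a `subPath`. A minimal sketch of such a pod follows; the name, image, command, and paths here are illustrative only, not taken from the test source:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-downwardapi   # name pattern matches the log; suffix omitted
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                     # illustrative image
    command: ["cat", "/test-volume/podname"]
    volumeMounts:
    - name: downward
      mountPath: /test-volume
      subPath: downward                # mount only this subdirectory of the volume
  volumes:
  - name: downward
    downwardAPI:
      items:
      - path: downward/podname         # file exposed via the downward API
        fieldRef:
          fieldPath: metadata.name
```

The point of the test is that atomic-writer volumes (configMap, secret, downwardAPI, projected) remain readable through a `subPath` mount across the symlink swaps the kubelet performs on update.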
Feb 5 13:12:13.761: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 5 13:12:13.847: INFO: namespace subpath-742 deletion completed in 6.155864983s • [SLOW TEST:36.652 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 5 13:12:13.848: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-73b1545f-b026-47fc-abc8-f6800c40f420 in namespace container-probe-3967 Feb 5 13:12:22.048: INFO: Started pod liveness-73b1545f-b026-47fc-abc8-f6800c40f420 in namespace container-probe-3967 STEP: checking the pod's current state and verifying that restartCount is present Feb 5 13:12:22.060: INFO: Initial restart count of pod 
liveness-73b1545f-b026-47fc-abc8-f6800c40f420 is 0 Feb 5 13:12:42.457: INFO: Restart count of pod container-probe-3967/liveness-73b1545f-b026-47fc-abc8-f6800c40f420 is now 1 (20.396386251s elapsed) Feb 5 13:13:02.635: INFO: Restart count of pod container-probe-3967/liveness-73b1545f-b026-47fc-abc8-f6800c40f420 is now 2 (40.575079056s elapsed) Feb 5 13:13:22.975: INFO: Restart count of pod container-probe-3967/liveness-73b1545f-b026-47fc-abc8-f6800c40f420 is now 3 (1m0.914437224s elapsed) Feb 5 13:13:43.208: INFO: Restart count of pod container-probe-3967/liveness-73b1545f-b026-47fc-abc8-f6800c40f420 is now 4 (1m21.14734624s elapsed) Feb 5 13:14:03.300: INFO: Restart count of pod container-probe-3967/liveness-73b1545f-b026-47fc-abc8-f6800c40f420 is now 5 (1m41.239739639s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 5 13:14:03.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3967" for this suite. 
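The steadily climbing restart count above (one restart roughly every 20s) is produced by a pod whose liveness probe is made to fail repeatedly. A minimal sketch of such a pod, assuming an HTTP probe against an endpoint that starts failing after a delay (the name, image, and timings are illustrative, not taken from the test source):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-example        # the real test uses a generated liveness-<uuid> name
spec:
  restartPolicy: Always
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness  # illustrative: serves /healthz, then starts returning 500
    args: ["/server"]
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 5
      failureThreshold: 1       # restart on the first failed probe
```

Each probe failure triggers a container restart, so `status.containerStatuses[0].restartCount` must only ever increase, which is what the test asserts.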
Feb 5 13:14:09.372: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 5 13:14:09.579: INFO: namespace container-probe-3967 deletion completed in 6.237928908s • [SLOW TEST:115.731 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 5 13:14:09.580: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Feb 5 13:14:09.650: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 5 13:14:26.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-1266" for this suite. 
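The init-container test above creates a pod with `spec.initContainers` and `restartPolicy: Always`, then waits for the init containers to run to completion before the app container starts. A minimal sketch (names, image, and commands are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: init-demo               # illustrative name
spec:
  restartPolicy: Always
  initContainers:               # run sequentially, each must exit 0 before the next
  - name: init1
    image: busybox              # illustrative image
    command: ["/bin/true"]
  - name: init2
    image: busybox
    command: ["/bin/true"]
  containers:
  - name: run1
    image: busybox
    command: ["sleep", "3600"]  # long-running app container
```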
Feb 5 13:14:48.231: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 5 13:14:48.376: INFO: namespace init-container-1266 deletion completed in 22.198878767s • [SLOW TEST:38.797 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 5 13:14:48.377: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Feb 5 13:14:48.501: INFO: Pod name pod-release: Found 0 pods out of 1 Feb 5 13:14:53.596: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 5 13:14:54.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-5693" for this suite. 
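The "release no longer matching pods" test above relies on ReplicationController label-selector semantics: when a pod's labels are changed so they no longer match the RC's selector, the RC releases (orphans) the pod and creates a replacement. A minimal sketch of the controller involved; the container details are illustrative, though `pod-release` is the name shown in the log:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-release             # name from the log
spec:
  replicas: 1
  selector:
    name: pod-release           # pods matching this label are owned by the RC
  template:
    metadata:
      labels:
        name: pod-release
    spec:
      containers:
      - name: c
        image: busybox          # illustrative image
        command: ["sleep", "3600"]
```

Relabeling the pod out of the selector, e.g. `kubectl label pod <pod-name> name=released --overwrite`, is enough to make the RC drop its ownerReference on that pod.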
Feb 5 13:15:00.816: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 5 13:15:01.127: INFO: namespace replication-controller-5693 deletion completed in 6.451956667s • [SLOW TEST:12.750 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 5 13:15:01.127: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-c021eefd-0cfc-40f1-a942-2a6497969bca in namespace container-probe-5022 Feb 5 13:15:13.367: INFO: Started pod liveness-c021eefd-0cfc-40f1-a942-2a6497969bca in namespace container-probe-5022 STEP: checking the pod's current state and verifying that restartCount is present Feb 5 13:15:13.399: INFO: Initial restart count of pod liveness-c021eefd-0cfc-40f1-a942-2a6497969bca is 0 Feb 5 13:15:31.503: INFO: Restart count of pod 
container-probe-5022/liveness-c021eefd-0cfc-40f1-a942-2a6497969bca is now 1 (18.104158852s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 5 13:15:31.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5022" for this suite. Feb 5 13:15:37.731: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 5 13:15:37.850: INFO: namespace container-probe-5022 deletion completed in 6.156874855s • [SLOW TEST:36.722 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 5 13:15:37.850: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Starting the proxy Feb 5 13:15:37.977: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config 
proxy --unix-socket=/tmp/kubectl-proxy-unix288946980/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 5 13:15:38.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7772" for this suite. Feb 5 13:15:44.146: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 5 13:15:44.230: INFO: namespace kubectl-7772 deletion completed in 6.157001703s • [SLOW TEST:6.380 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 5 13:15:44.231: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on tmpfs Feb 5 13:15:44.373: INFO: Waiting up to 5m0s for pod 
"pod-2d8495e6-c48c-4173-95f4-d4a4538e9634" in namespace "emptydir-1171" to be "success or failure" Feb 5 13:15:44.393: INFO: Pod "pod-2d8495e6-c48c-4173-95f4-d4a4538e9634": Phase="Pending", Reason="", readiness=false. Elapsed: 19.917828ms Feb 5 13:15:46.403: INFO: Pod "pod-2d8495e6-c48c-4173-95f4-d4a4538e9634": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029904817s Feb 5 13:15:49.162: INFO: Pod "pod-2d8495e6-c48c-4173-95f4-d4a4538e9634": Phase="Pending", Reason="", readiness=false. Elapsed: 4.788189848s Feb 5 13:15:51.174: INFO: Pod "pod-2d8495e6-c48c-4173-95f4-d4a4538e9634": Phase="Pending", Reason="", readiness=false. Elapsed: 6.800495098s Feb 5 13:15:53.196: INFO: Pod "pod-2d8495e6-c48c-4173-95f4-d4a4538e9634": Phase="Pending", Reason="", readiness=false. Elapsed: 8.822617844s Feb 5 13:15:55.204: INFO: Pod "pod-2d8495e6-c48c-4173-95f4-d4a4538e9634": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.830498038s STEP: Saw pod success Feb 5 13:15:55.204: INFO: Pod "pod-2d8495e6-c48c-4173-95f4-d4a4538e9634" satisfied condition "success or failure" Feb 5 13:15:55.211: INFO: Trying to get logs from node iruya-node pod pod-2d8495e6-c48c-4173-95f4-d4a4538e9634 container test-container: STEP: delete the pod Feb 5 13:15:55.313: INFO: Waiting for pod pod-2d8495e6-c48c-4173-95f4-d4a4538e9634 to disappear Feb 5 13:15:55.359: INFO: Pod pod-2d8495e6-c48c-4173-95f4-d4a4538e9634 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 5 13:15:55.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1171" for this suite. 
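The emptyDir variant tested above combines a tmpfs-backed volume, a 0644 file mode, and a non-root user. A minimal sketch of a pod in that shape; the UID, image, and command are illustrative, not taken from the test source:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644-tmpfs     # illustrative name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001             # non-root; illustrative UID
  containers:
  - name: test-container        # container name from the log
    image: busybox              # illustrative image
    command: ["sh", "-c", "ls -l /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory            # tmpfs-backed emptyDir
```

The real conformance test additionally writes a file with mode 0644 inside the volume and verifies the permissions and content from the container's output.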
Feb 5 13:16:01.398: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 5 13:16:01.527: INFO: namespace emptydir-1171 deletion completed in 6.158944548s • [SLOW TEST:17.296 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 5 13:16:01.527: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 5 13:16:07.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6781" for this suite. 
Feb 5 13:16:13.255: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 5 13:16:13.472: INFO: namespace watch-6781 deletion completed in 6.329308648s • [SLOW TEST:11.945 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 5 13:16:13.473: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller Feb 5 13:16:14.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4273' Feb 5 13:16:16.839: INFO: stderr: "" Feb 5 13:16:16.839: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Feb 5 13:16:16.840: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4273' Feb 5 13:16:17.084: INFO: stderr: "" Feb 5 13:16:17.084: INFO: stdout: "update-demo-nautilus-57z8q update-demo-nautilus-xmw6h " Feb 5 13:16:17.085: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-57z8q -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4273' Feb 5 13:16:17.167: INFO: stderr: "" Feb 5 13:16:17.167: INFO: stdout: "" Feb 5 13:16:17.167: INFO: update-demo-nautilus-57z8q is created but not running Feb 5 13:16:22.167: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4273' Feb 5 13:16:22.647: INFO: stderr: "" Feb 5 13:16:22.647: INFO: stdout: "update-demo-nautilus-57z8q update-demo-nautilus-xmw6h " Feb 5 13:16:22.648: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-57z8q -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4273' Feb 5 13:16:23.380: INFO: stderr: "" Feb 5 13:16:23.380: INFO: stdout: "" Feb 5 13:16:23.380: INFO: update-demo-nautilus-57z8q is created but not running Feb 5 13:16:28.381: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4273' Feb 5 13:16:28.542: INFO: stderr: "" Feb 5 13:16:28.543: INFO: stdout: "update-demo-nautilus-57z8q update-demo-nautilus-xmw6h " Feb 5 13:16:28.543: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-57z8q -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4273' Feb 5 13:16:28.641: INFO: stderr: "" Feb 5 13:16:28.641: INFO: stdout: "true" Feb 5 13:16:28.641: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-57z8q -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4273' Feb 5 13:16:28.743: INFO: stderr: "" Feb 5 13:16:28.743: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 5 13:16:28.743: INFO: validating pod update-demo-nautilus-57z8q Feb 5 13:16:28.763: INFO: got data: { "image": "nautilus.jpg" } Feb 5 13:16:28.763: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 5 13:16:28.763: INFO: update-demo-nautilus-57z8q is verified up and running Feb 5 13:16:28.764: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xmw6h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4273' Feb 5 13:16:28.888: INFO: stderr: "" Feb 5 13:16:28.889: INFO: stdout: "true" Feb 5 13:16:28.889: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xmw6h -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4273' Feb 5 13:16:28.974: INFO: stderr: "" Feb 5 13:16:28.974: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 5 13:16:28.974: INFO: validating pod update-demo-nautilus-xmw6h Feb 5 13:16:28.987: INFO: got data: { "image": "nautilus.jpg" } Feb 5 13:16:28.987: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 5 13:16:28.987: INFO: update-demo-nautilus-xmw6h is verified up and running STEP: using delete to clean up resources Feb 5 13:16:28.987: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4273' Feb 5 13:16:29.094: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Feb 5 13:16:29.094: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Feb 5 13:16:29.094: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4273' Feb 5 13:16:29.224: INFO: stderr: "No resources found.\n" Feb 5 13:16:29.224: INFO: stdout: "" Feb 5 13:16:29.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4273 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Feb 5 13:16:29.385: INFO: stderr: "" Feb 5 13:16:29.385: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 5 13:16:29.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4273" for this suite. 
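The replication controller piped to `kubectl create -f -` above can be reconstructed from what the log verifies: label `name=update-demo`, container name `update-demo`, image `gcr.io/kubernetes-e2e-test-images/nautilus:1.0`, and two replicas. A sketch consistent with those facts (the port is illustrative):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: update-demo-nautilus
spec:
  replicas: 2                   # the log shows two update-demo-nautilus-* pods
  selector:
    name: update-demo
  template:
    metadata:
      labels:
        name: update-demo
    spec:
      containers:
      - name: update-demo       # container name checked by the go-templates in the log
        image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0
        ports:
        - containerPort: 80     # illustrative
```

The validation step then fetches `data.json` from each pod and checks that it reports `"image": "nautilus.jpg"`, as seen in the log.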
Feb 5 13:16:53.430: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 5 13:16:53.571: INFO: namespace kubectl-4273 deletion completed in 24.175520774s • [SLOW TEST:40.098 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 5 13:16:53.571: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Feb 5 13:16:53.668: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ef8fdf80-780b-4409-8424-19eed7247373" in namespace "projected-635" to be "success or failure" Feb 5 13:16:53.678: INFO: Pod "downwardapi-volume-ef8fdf80-780b-4409-8424-19eed7247373": Phase="Pending", Reason="", readiness=false. 
Elapsed: 9.700325ms Feb 5 13:16:55.686: INFO: Pod "downwardapi-volume-ef8fdf80-780b-4409-8424-19eed7247373": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018199665s Feb 5 13:16:57.695: INFO: Pod "downwardapi-volume-ef8fdf80-780b-4409-8424-19eed7247373": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027142631s Feb 5 13:16:59.703: INFO: Pod "downwardapi-volume-ef8fdf80-780b-4409-8424-19eed7247373": Phase="Pending", Reason="", readiness=false. Elapsed: 6.034796757s Feb 5 13:17:01.711: INFO: Pod "downwardapi-volume-ef8fdf80-780b-4409-8424-19eed7247373": Phase="Pending", Reason="", readiness=false. Elapsed: 8.04256344s Feb 5 13:17:03.722: INFO: Pod "downwardapi-volume-ef8fdf80-780b-4409-8424-19eed7247373": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.053968481s STEP: Saw pod success Feb 5 13:17:03.722: INFO: Pod "downwardapi-volume-ef8fdf80-780b-4409-8424-19eed7247373" satisfied condition "success or failure" Feb 5 13:17:03.728: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-ef8fdf80-780b-4409-8424-19eed7247373 container client-container: STEP: delete the pod Feb 5 13:17:03.840: INFO: Waiting for pod downwardapi-volume-ef8fdf80-780b-4409-8424-19eed7247373 to disappear Feb 5 13:17:03.868: INFO: Pod downwardapi-volume-ef8fdf80-780b-4409-8424-19eed7247373 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 5 13:17:03.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-635" for this suite. 
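The "podname only" projected downwardAPI test above mounts a projected volume exposing just `metadata.name` as a file. A minimal sketch; the pod name, image, command, and mount path are illustrative, though `client-container` is the container name from the log:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # the real test uses a generated name
spec:
  restartPolicy: Never
  containers:
  - name: client-container           # container name from the log
    image: busybox                   # illustrative image
    command: ["cat", "/etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname            # only the pod name is projected
            fieldRef:
              fieldPath: metadata.name
```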
Feb 5 13:17:09.906: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 5 13:17:10.020: INFO: namespace projected-635 deletion completed in 6.141074637s • [SLOW TEST:16.448 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 5 13:17:10.020: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-ea771e1c-0900-4295-8ccc-f62f9c656344 STEP: Creating a pod to test consume secrets Feb 5 13:17:10.145: INFO: Waiting up to 5m0s for pod "pod-secrets-28fc41db-de82-4233-93e7-69c36fbc2833" in namespace "secrets-3269" to be "success or failure" Feb 5 13:17:10.153: INFO: Pod "pod-secrets-28fc41db-de82-4233-93e7-69c36fbc2833": Phase="Pending", Reason="", readiness=false. Elapsed: 7.568668ms Feb 5 13:17:12.164: INFO: Pod "pod-secrets-28fc41db-de82-4233-93e7-69c36fbc2833": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.018697805s Feb 5 13:17:14.173: INFO: Pod "pod-secrets-28fc41db-de82-4233-93e7-69c36fbc2833": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027280334s Feb 5 13:17:16.180: INFO: Pod "pod-secrets-28fc41db-de82-4233-93e7-69c36fbc2833": Phase="Pending", Reason="", readiness=false. Elapsed: 6.034490146s Feb 5 13:17:18.189: INFO: Pod "pod-secrets-28fc41db-de82-4233-93e7-69c36fbc2833": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.043263001s STEP: Saw pod success Feb 5 13:17:18.189: INFO: Pod "pod-secrets-28fc41db-de82-4233-93e7-69c36fbc2833" satisfied condition "success or failure" Feb 5 13:17:18.193: INFO: Trying to get logs from node iruya-node pod pod-secrets-28fc41db-de82-4233-93e7-69c36fbc2833 container secret-volume-test: STEP: delete the pod Feb 5 13:17:18.356: INFO: Waiting for pod pod-secrets-28fc41db-de82-4233-93e7-69c36fbc2833 to disappear Feb 5 13:17:18.370: INFO: Pod pod-secrets-28fc41db-de82-4233-93e7-69c36fbc2833 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 5 13:17:18.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3269" for this suite. 
Feb 5 13:17:24.408: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 5 13:17:24.537: INFO: namespace secrets-3269 deletion completed in 6.157708106s • [SLOW TEST:14.517 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 5 13:17:24.538: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-2036 STEP: creating a selector STEP: Creating the service pods in kubernetes Feb 5 13:17:24.674: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Feb 5 13:18:00.779: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:pod-network-test-2036 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true 
PreserveWhitespace:false}
Feb 5 13:18:00.779: INFO: >>> kubeConfig: /root/.kube/config
I0205 13:18:00.873132 8 log.go:172] (0xc001b70b00) (0xc002420c80) Create stream
I0205 13:18:00.873198 8 log.go:172] (0xc001b70b00) (0xc002420c80) Stream added, broadcasting: 1
I0205 13:18:00.883253 8 log.go:172] (0xc001b70b00) Reply frame received for 1
I0205 13:18:00.883325 8 log.go:172] (0xc001b70b00) (0xc001ea1f40) Create stream
I0205 13:18:00.883340 8 log.go:172] (0xc001b70b00) (0xc001ea1f40) Stream added, broadcasting: 3
I0205 13:18:00.885809 8 log.go:172] (0xc001b70b00) Reply frame received for 3
I0205 13:18:00.885858 8 log.go:172] (0xc001b70b00) (0xc002420d20) Create stream
I0205 13:18:00.885878 8 log.go:172] (0xc001b70b00) (0xc002420d20) Stream added, broadcasting: 5
I0205 13:18:00.889333 8 log.go:172] (0xc001b70b00) Reply frame received for 5
I0205 13:18:01.073638 8 log.go:172] (0xc001b70b00) Data frame received for 3
I0205 13:18:01.073726 8 log.go:172] (0xc001ea1f40) (3) Data frame handling
I0205 13:18:01.073753 8 log.go:172] (0xc001ea1f40) (3) Data frame sent
I0205 13:18:01.200138 8 log.go:172] (0xc001b70b00) (0xc001ea1f40) Stream removed, broadcasting: 3
I0205 13:18:01.200341 8 log.go:172] (0xc001b70b00) (0xc002420d20) Stream removed, broadcasting: 5
I0205 13:18:01.200492 8 log.go:172] (0xc001b70b00) Data frame received for 1
I0205 13:18:01.200546 8 log.go:172] (0xc002420c80) (1) Data frame handling
I0205 13:18:01.200578 8 log.go:172] (0xc002420c80) (1) Data frame sent
I0205 13:18:01.200603 8 log.go:172] (0xc001b70b00) (0xc002420c80) Stream removed, broadcasting: 1
I0205 13:18:01.200670 8 log.go:172] (0xc001b70b00) Go away received
I0205 13:18:01.200948 8 log.go:172] (0xc001b70b00) (0xc002420c80) Stream removed, broadcasting: 1
I0205 13:18:01.200963 8 log.go:172] (0xc001b70b00) (0xc001ea1f40) Stream removed, broadcasting: 3
I0205 13:18:01.200970 8 log.go:172] (0xc001b70b00) (0xc002420d20) Stream removed, broadcasting: 5
Feb 5 13:18:01.201: INFO: Waiting for endpoints: map[]
Feb 5 13:18:01.208: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.44.0.1&port=8081&tries=1'] Namespace:pod-network-test-2036 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 5 13:18:01.208: INFO: >>> kubeConfig: /root/.kube/config
I0205 13:18:01.264656 8 log.go:172] (0xc001e984d0) (0xc001ae0fa0) Create stream
I0205 13:18:01.264705 8 log.go:172] (0xc001e984d0) (0xc001ae0fa0) Stream added, broadcasting: 1
I0205 13:18:01.271688 8 log.go:172] (0xc001e984d0) Reply frame received for 1
I0205 13:18:01.271717 8 log.go:172] (0xc001e984d0) (0xc0020c8140) Create stream
I0205 13:18:01.271726 8 log.go:172] (0xc001e984d0) (0xc0020c8140) Stream added, broadcasting: 3
I0205 13:18:01.273215 8 log.go:172] (0xc001e984d0) Reply frame received for 3
I0205 13:18:01.273235 8 log.go:172] (0xc001e984d0) (0xc0020c8280) Create stream
I0205 13:18:01.273243 8 log.go:172] (0xc001e984d0) (0xc0020c8280) Stream added, broadcasting: 5
I0205 13:18:01.274745 8 log.go:172] (0xc001e984d0) Reply frame received for 5
I0205 13:18:01.422480 8 log.go:172] (0xc001e984d0) Data frame received for 3
I0205 13:18:01.422604 8 log.go:172] (0xc0020c8140) (3) Data frame handling
I0205 13:18:01.422637 8 log.go:172] (0xc0020c8140) (3) Data frame sent
I0205 13:18:01.576483 8 log.go:172] (0xc001e984d0) Data frame received for 1
I0205 13:18:01.576544 8 log.go:172] (0xc001e984d0) (0xc0020c8140) Stream removed, broadcasting: 3
I0205 13:18:01.576594 8 log.go:172] (0xc001ae0fa0) (1) Data frame handling
I0205 13:18:01.576615 8 log.go:172] (0xc001ae0fa0) (1) Data frame sent
I0205 13:18:01.576627 8 log.go:172] (0xc001e984d0) (0xc001ae0fa0) Stream removed, broadcasting: 1
I0205 13:18:01.576659 8 log.go:172] (0xc001e984d0) (0xc0020c8280) Stream removed, broadcasting: 5
I0205 13:18:01.576718 8 log.go:172] (0xc001e984d0) Go away received
I0205 13:18:01.576769 8 log.go:172] (0xc001e984d0) (0xc001ae0fa0) Stream removed, broadcasting: 1
I0205 13:18:01.576789 8 log.go:172] (0xc001e984d0) (0xc0020c8140) Stream removed, broadcasting: 3
I0205 13:18:01.576806 8 log.go:172] (0xc001e984d0) (0xc0020c8280) Stream removed, broadcasting: 5
Feb 5 13:18:01.576: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 5 13:18:01.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-2036" for this suite.
Feb 5 13:18:23.615: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 5 13:18:23.749: INFO: namespace pod-network-test-2036 deletion completed in 22.164401164s
• [SLOW TEST:59.211 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
Granular Checks: Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS should provide DNS for services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 5 13:18:23.749: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7221.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-7221.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7221.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-7221.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7221.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-7221.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7221.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-7221.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7221.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-7221.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7221.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-7221.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7221.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 162.131.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.131.162_udp@PTR;check="$$(dig +tcp +noall +answer +search 162.131.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.131.162_tcp@PTR;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7221.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-7221.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7221.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-7221.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7221.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-7221.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7221.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-7221.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7221.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-7221.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7221.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-7221.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7221.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 162.131.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.131.162_udp@PTR;check="$$(dig +tcp +noall +answer +search 162.131.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.131.162_tcp@PTR;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 5 13:18:36.089: INFO: Unable to read wheezy_udp@dns-test-service.dns-7221.svc.cluster.local from pod dns-7221/dns-test-839528e9-8093-4933-83dd-268e1652e729: the server could not find the requested resource (get pods dns-test-839528e9-8093-4933-83dd-268e1652e729)
Feb 5 13:18:36.099: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7221.svc.cluster.local from pod dns-7221/dns-test-839528e9-8093-4933-83dd-268e1652e729: the server could not find the requested resource (get pods dns-test-839528e9-8093-4933-83dd-268e1652e729)
Feb 5 13:18:36.107: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7221.svc.cluster.local from pod dns-7221/dns-test-839528e9-8093-4933-83dd-268e1652e729: the server could not find the requested resource (get pods dns-test-839528e9-8093-4933-83dd-268e1652e729)
Feb 5 13:18:36.116: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7221.svc.cluster.local from pod dns-7221/dns-test-839528e9-8093-4933-83dd-268e1652e729: the server could not find the requested resource (get pods dns-test-839528e9-8093-4933-83dd-268e1652e729)
Feb 5 13:18:36.133: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.dns-7221.svc.cluster.local from pod dns-7221/dns-test-839528e9-8093-4933-83dd-268e1652e729: the server could not find the requested resource (get pods dns-test-839528e9-8093-4933-83dd-268e1652e729)
Feb 5 13:18:36.140: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.dns-7221.svc.cluster.local from pod dns-7221/dns-test-839528e9-8093-4933-83dd-268e1652e729: the server could not find the requested resource (get pods dns-test-839528e9-8093-4933-83dd-268e1652e729)
Feb 5 13:18:36.148: INFO: Unable to read wheezy_udp@PodARecord from pod dns-7221/dns-test-839528e9-8093-4933-83dd-268e1652e729: the server could not find the requested resource (get pods dns-test-839528e9-8093-4933-83dd-268e1652e729)
Feb 5 13:18:36.155: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-7221/dns-test-839528e9-8093-4933-83dd-268e1652e729: the server could not find the requested resource (get pods dns-test-839528e9-8093-4933-83dd-268e1652e729)
Feb 5 13:18:36.162: INFO: Unable to read 10.106.131.162_udp@PTR from pod dns-7221/dns-test-839528e9-8093-4933-83dd-268e1652e729: the server could not find the requested resource (get pods dns-test-839528e9-8093-4933-83dd-268e1652e729)
Feb 5 13:18:36.167: INFO: Unable to read 10.106.131.162_tcp@PTR from pod dns-7221/dns-test-839528e9-8093-4933-83dd-268e1652e729: the server could not find the requested resource (get pods dns-test-839528e9-8093-4933-83dd-268e1652e729)
Feb 5 13:18:36.173: INFO: Unable to read jessie_udp@dns-test-service.dns-7221.svc.cluster.local from pod dns-7221/dns-test-839528e9-8093-4933-83dd-268e1652e729: the server could not find the requested resource (get pods dns-test-839528e9-8093-4933-83dd-268e1652e729)
Feb 5 13:18:36.179: INFO: Unable to read jessie_tcp@dns-test-service.dns-7221.svc.cluster.local from pod dns-7221/dns-test-839528e9-8093-4933-83dd-268e1652e729: the server could not find the requested resource (get pods dns-test-839528e9-8093-4933-83dd-268e1652e729)
Feb 5 13:18:36.185: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7221.svc.cluster.local from pod dns-7221/dns-test-839528e9-8093-4933-83dd-268e1652e729: the server could not find the requested resource (get pods dns-test-839528e9-8093-4933-83dd-268e1652e729)
Feb 5 13:18:36.190: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7221.svc.cluster.local from pod dns-7221/dns-test-839528e9-8093-4933-83dd-268e1652e729: the server could not find the requested resource (get pods dns-test-839528e9-8093-4933-83dd-268e1652e729)
Feb 5 13:18:36.196: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.dns-7221.svc.cluster.local from pod dns-7221/dns-test-839528e9-8093-4933-83dd-268e1652e729: the server could not find the requested resource (get pods dns-test-839528e9-8093-4933-83dd-268e1652e729)
Feb 5 13:18:36.201: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.dns-7221.svc.cluster.local from pod dns-7221/dns-test-839528e9-8093-4933-83dd-268e1652e729: the server could not find the requested resource (get pods dns-test-839528e9-8093-4933-83dd-268e1652e729)
Feb 5 13:18:36.207: INFO: Unable to read jessie_udp@PodARecord from pod dns-7221/dns-test-839528e9-8093-4933-83dd-268e1652e729: the server could not find the requested resource (get pods dns-test-839528e9-8093-4933-83dd-268e1652e729)
Feb 5 13:18:36.212: INFO: Unable to read jessie_tcp@PodARecord from pod dns-7221/dns-test-839528e9-8093-4933-83dd-268e1652e729: the server could not find the requested resource (get pods dns-test-839528e9-8093-4933-83dd-268e1652e729)
Feb 5 13:18:36.220: INFO: Unable to read 10.106.131.162_udp@PTR from pod dns-7221/dns-test-839528e9-8093-4933-83dd-268e1652e729: the server could not find the requested resource (get pods dns-test-839528e9-8093-4933-83dd-268e1652e729)
Feb 5 13:18:36.228: INFO: Unable to read 10.106.131.162_tcp@PTR from pod dns-7221/dns-test-839528e9-8093-4933-83dd-268e1652e729: the server could not find the requested resource (get pods dns-test-839528e9-8093-4933-83dd-268e1652e729)
Feb 5 13:18:36.228: INFO: Lookups using dns-7221/dns-test-839528e9-8093-4933-83dd-268e1652e729 failed for: [wheezy_udp@dns-test-service.dns-7221.svc.cluster.local wheezy_tcp@dns-test-service.dns-7221.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7221.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7221.svc.cluster.local wheezy_udp@_http._tcp.test-service-2.dns-7221.svc.cluster.local wheezy_tcp@_http._tcp.test-service-2.dns-7221.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.106.131.162_udp@PTR 10.106.131.162_tcp@PTR jessie_udp@dns-test-service.dns-7221.svc.cluster.local jessie_tcp@dns-test-service.dns-7221.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7221.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7221.svc.cluster.local jessie_udp@_http._tcp.test-service-2.dns-7221.svc.cluster.local jessie_tcp@_http._tcp.test-service-2.dns-7221.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord 10.106.131.162_udp@PTR 10.106.131.162_tcp@PTR]
Feb 5 13:18:41.448: INFO: DNS probes using dns-7221/dns-test-839528e9-8093-4933-83dd-268e1652e729 succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 5 13:18:41.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7221" for this suite.
Feb 5 13:18:47.859: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 5 13:18:47.979: INFO: namespace dns-7221 deletion completed in 6.152864112s
• [SLOW TEST:24.230 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide DNS for services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 5 13:18:47.981: INFO: >>>
kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 5 13:18:48.114: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Feb 5 13:18:50.946: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 5 13:18:51.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-2500" for this suite.
Feb 5 13:19:01.452: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 5 13:19:01.559: INFO: namespace replication-controller-2500 deletion completed in 10.155170832s
• [SLOW TEST:13.578 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should surface a failure condition on a common issue like exceeded quota [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 5 13:19:01.559: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-b6be1080-8bb4-441d-8f0f-9f4ae7a1309f
STEP: Creating a pod to test consume secrets
Feb 5 13:19:01.759: INFO: Waiting up to 5m0s for pod "pod-secrets-bc82e38a-0652-464a-a29c-182d4acea520" in namespace "secrets-8740" to be "success or failure"
Feb 5 13:19:01.764: INFO: Pod "pod-secrets-bc82e38a-0652-464a-a29c-182d4acea520": Phase="Pending", Reason="", readiness=false. Elapsed: 5.073809ms
Feb 5 13:19:03.788: INFO: Pod "pod-secrets-bc82e38a-0652-464a-a29c-182d4acea520": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029562347s
Feb 5 13:19:05.804: INFO: Pod "pod-secrets-bc82e38a-0652-464a-a29c-182d4acea520": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044814358s
Feb 5 13:19:07.812: INFO: Pod "pod-secrets-bc82e38a-0652-464a-a29c-182d4acea520": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053724937s
Feb 5 13:19:09.830: INFO: Pod "pod-secrets-bc82e38a-0652-464a-a29c-182d4acea520": Phase="Pending", Reason="", readiness=false. Elapsed: 8.070938294s
Feb 5 13:19:11.877: INFO: Pod "pod-secrets-bc82e38a-0652-464a-a29c-182d4acea520": Phase="Pending", Reason="", readiness=false. Elapsed: 10.118097027s
Feb 5 13:19:13.888: INFO: Pod "pod-secrets-bc82e38a-0652-464a-a29c-182d4acea520": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.129010693s
STEP: Saw pod success
Feb 5 13:19:13.888: INFO: Pod "pod-secrets-bc82e38a-0652-464a-a29c-182d4acea520" satisfied condition "success or failure"
Feb 5 13:19:13.893: INFO: Trying to get logs from node iruya-node pod pod-secrets-bc82e38a-0652-464a-a29c-182d4acea520 container secret-env-test:
STEP: delete the pod
Feb 5 13:19:14.065: INFO: Waiting for pod pod-secrets-bc82e38a-0652-464a-a29c-182d4acea520 to disappear
Feb 5 13:19:14.069: INFO: Pod pod-secrets-bc82e38a-0652-464a-a29c-182d4acea520 no longer exists
[AfterEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 5 13:19:14.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8740" for this suite.
Feb 5 13:19:20.102: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 5 13:19:20.267: INFO: namespace secrets-8740 deletion completed in 6.191883259s
• [SLOW TEST:18.708 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
should be consumable from pods in env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 5 13:19:20.268: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override all
Feb 5 13:19:20.395: INFO: Waiting up to 5m0s for pod "client-containers-b02d54dd-2b29-46ac-b007-6bbd395d9b7d" in namespace "containers-310" to be "success or failure"
Feb 5 13:19:20.407: INFO: Pod "client-containers-b02d54dd-2b29-46ac-b007-6bbd395d9b7d": Phase="Pending", Reason="", readiness=false. Elapsed: 11.729135ms
Feb 5 13:19:22.413: INFO: Pod "client-containers-b02d54dd-2b29-46ac-b007-6bbd395d9b7d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018470946s
Feb 5 13:19:24.422: INFO: Pod "client-containers-b02d54dd-2b29-46ac-b007-6bbd395d9b7d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026724673s
Feb 5 13:19:26.471: INFO: Pod "client-containers-b02d54dd-2b29-46ac-b007-6bbd395d9b7d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.076224513s
Feb 5 13:19:28.487: INFO: Pod "client-containers-b02d54dd-2b29-46ac-b007-6bbd395d9b7d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.091668678s
STEP: Saw pod success
Feb 5 13:19:28.487: INFO: Pod "client-containers-b02d54dd-2b29-46ac-b007-6bbd395d9b7d" satisfied condition "success or failure"
Feb 5 13:19:28.491: INFO: Trying to get logs from node iruya-node pod client-containers-b02d54dd-2b29-46ac-b007-6bbd395d9b7d container test-container:
STEP: delete the pod
Feb 5 13:19:28.590: INFO: Waiting for pod client-containers-b02d54dd-2b29-46ac-b007-6bbd395d9b7d to disappear
Feb 5 13:19:28.598: INFO: Pod client-containers-b02d54dd-2b29-46ac-b007-6bbd395d9b7d no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 5 13:19:28.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-310" for this suite.
Feb 5 13:19:34.621: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 5 13:19:34.798: INFO: namespace containers-310 deletion completed in 6.195118139s
• [SLOW TEST:14.530 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be able to override the image's default command and arguments [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 5 13:19:34.798: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Feb 5 13:19:55.035: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9533 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 5 13:19:55.035: INFO: >>> kubeConfig: /root/.kube/config
I0205 13:19:55.126048 8 log.go:172] (0xc002e28a50) (0xc0024f9ae0) Create stream
I0205
13:19:55.126087 8 log.go:172] (0xc002e28a50) (0xc0024f9ae0) Stream added, broadcasting: 1
I0205 13:19:55.139576 8 log.go:172] (0xc002e28a50) Reply frame received for 1
I0205 13:19:55.139634 8 log.go:172] (0xc002e28a50) (0xc0024f9b80) Create stream
I0205 13:19:55.139652 8 log.go:172] (0xc002e28a50) (0xc0024f9b80) Stream added, broadcasting: 3
I0205 13:19:55.142005 8 log.go:172] (0xc002e28a50) Reply frame received for 3
I0205 13:19:55.142048 8 log.go:172] (0xc002e28a50) (0xc001df8b40) Create stream
I0205 13:19:55.142062 8 log.go:172] (0xc002e28a50) (0xc001df8b40) Stream added, broadcasting: 5
I0205 13:19:55.143992 8 log.go:172] (0xc002e28a50) Reply frame received for 5
I0205 13:19:55.291959 8 log.go:172] (0xc002e28a50) Data frame received for 3
I0205 13:19:55.292233 8 log.go:172] (0xc0024f9b80) (3) Data frame handling
I0205 13:19:55.292515 8 log.go:172] (0xc0024f9b80) (3) Data frame sent
I0205 13:19:55.465157 8 log.go:172] (0xc002e28a50) Data frame received for 1
I0205 13:19:55.465261 8 log.go:172] (0xc002e28a50) (0xc0024f9b80) Stream removed, broadcasting: 3
I0205 13:19:55.465330 8 log.go:172] (0xc0024f9ae0) (1) Data frame handling
I0205 13:19:55.465355 8 log.go:172] (0xc0024f9ae0) (1) Data frame sent
I0205 13:19:55.465365 8 log.go:172] (0xc002e28a50) (0xc001df8b40) Stream removed, broadcasting: 5
I0205 13:19:55.465377 8 log.go:172] (0xc002e28a50) (0xc0024f9ae0) Stream removed, broadcasting: 1
I0205 13:19:55.465388 8 log.go:172] (0xc002e28a50) Go away received
I0205 13:19:55.465549 8 log.go:172] (0xc002e28a50) (0xc0024f9ae0) Stream removed, broadcasting: 1
I0205 13:19:55.465561 8 log.go:172] (0xc002e28a50) (0xc0024f9b80) Stream removed, broadcasting: 3
I0205 13:19:55.465568 8 log.go:172] (0xc002e28a50) (0xc001df8b40) Stream removed, broadcasting: 5
Feb 5 13:19:55.465: INFO: Exec stderr: ""
Feb 5 13:19:55.465: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9533 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 5 13:19:55.465: INFO: >>> kubeConfig: /root/.kube/config
I0205 13:19:55.531501 8 log.go:172] (0xc001777970) (0xc0024e7c20) Create stream
I0205 13:19:55.531587 8 log.go:172] (0xc001777970) (0xc0024e7c20) Stream added, broadcasting: 1
I0205 13:19:55.543610 8 log.go:172] (0xc001777970) Reply frame received for 1
I0205 13:19:55.543643 8 log.go:172] (0xc001777970) (0xc00223a960) Create stream
I0205 13:19:55.543649 8 log.go:172] (0xc001777970) (0xc00223a960) Stream added, broadcasting: 3
I0205 13:19:55.545256 8 log.go:172] (0xc001777970) Reply frame received for 3
I0205 13:19:55.545279 8 log.go:172] (0xc001777970) (0xc001df8be0) Create stream
I0205 13:19:55.545290 8 log.go:172] (0xc001777970) (0xc001df8be0) Stream added, broadcasting: 5
I0205 13:19:55.546847 8 log.go:172] (0xc001777970) Reply frame received for 5
I0205 13:19:55.710809 8 log.go:172] (0xc001777970) Data frame received for 3
I0205 13:19:55.710911 8 log.go:172] (0xc00223a960) (3) Data frame handling
I0205 13:19:55.710941 8 log.go:172] (0xc00223a960) (3) Data frame sent
I0205 13:19:55.931773 8 log.go:172] (0xc001777970) (0xc00223a960) Stream removed, broadcasting: 3
I0205 13:19:55.931942 8 log.go:172] (0xc001777970) Data frame received for 1
I0205 13:19:55.931981 8 log.go:172] (0xc0024e7c20) (1) Data frame handling
I0205 13:19:55.931996 8 log.go:172] (0xc001777970) (0xc001df8be0) Stream removed, broadcasting: 5
I0205 13:19:55.932037 8 log.go:172] (0xc0024e7c20) (1) Data frame sent
I0205 13:19:55.932069 8 log.go:172] (0xc001777970) (0xc0024e7c20) Stream removed, broadcasting: 1
I0205 13:19:55.932089 8 log.go:172] (0xc001777970) Go away received
I0205 13:19:55.932254 8 log.go:172] (0xc001777970) (0xc0024e7c20) Stream removed, broadcasting: 1
I0205 13:19:55.932274 8 log.go:172] (0xc001777970) (0xc00223a960) Stream removed, broadcasting: 3
I0205 13:19:55.932282 8 log.go:172] (0xc001777970) (0xc001df8be0) Stream removed, broadcasting: 5
Feb 5 13:19:55.932: INFO: Exec stderr: ""
Feb 5 13:19:55.932: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9533 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 5 13:19:55.932: INFO: >>> kubeConfig: /root/.kube/config
I0205 13:19:56.027840 8 log.go:172] (0xc00270c420) (0xc0024e7f40) Create stream
I0205 13:19:56.027901 8 log.go:172] (0xc00270c420) (0xc0024e7f40) Stream added, broadcasting: 1
I0205 13:19:56.035152 8 log.go:172] (0xc00270c420) Reply frame received for 1
I0205 13:19:56.035179 8 log.go:172] (0xc00270c420) (0xc00223aaa0) Create stream
I0205 13:19:56.035187 8 log.go:172] (0xc00270c420) (0xc00223aaa0) Stream added, broadcasting: 3
I0205 13:19:56.037395 8 log.go:172] (0xc00270c420) Reply frame received for 3
I0205 13:19:56.037434 8 log.go:172] (0xc00270c420) (0xc001bc2960) Create stream
I0205 13:19:56.037446 8 log.go:172] (0xc00270c420) (0xc001bc2960) Stream added, broadcasting: 5
I0205 13:19:56.039027 8 log.go:172] (0xc00270c420) Reply frame received for 5
I0205 13:19:56.184235 8 log.go:172] (0xc00270c420) Data frame received for 3
I0205 13:19:56.184293 8 log.go:172] (0xc00223aaa0) (3) Data frame handling
I0205 13:19:56.184326 8 log.go:172] (0xc00223aaa0) (3) Data frame sent
I0205 13:19:56.291426 8 log.go:172] (0xc00270c420) Data frame received for 1
I0205 13:19:56.291498 8 log.go:172] (0xc0024e7f40) (1) Data frame handling
I0205 13:19:56.291512 8 log.go:172] (0xc0024e7f40) (1) Data frame sent
I0205 13:19:56.291530 8 log.go:172] (0xc00270c420) (0xc0024e7f40) Stream removed, broadcasting: 1
I0205 13:19:56.291584 8 log.go:172] (0xc00270c420) (0xc00223aaa0) Stream removed, broadcasting: 3
I0205 13:19:56.291908 8 log.go:172] (0xc00270c420) (0xc001bc2960) Stream removed, broadcasting: 5
I0205 13:19:56.291941 8 log.go:172] (0xc00270c420) (0xc0024e7f40) Stream removed, broadcasting: 1
I0205 13:19:56.291966 8 log.go:172] (0xc00270c420) (0xc00223aaa0) Stream removed,
broadcasting: 3 I0205 13:19:56.291974 8 log.go:172] (0xc00270c420) (0xc001bc2960) Stream removed, broadcasting: 5 Feb 5 13:19:56.292: INFO: Exec stderr: "" I0205 13:19:56.292116 8 log.go:172] (0xc00270c420) Go away received Feb 5 13:19:56.292: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9533 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 5 13:19:56.292: INFO: >>> kubeConfig: /root/.kube/config I0205 13:19:56.375869 8 log.go:172] (0xc001be14a0) (0xc00223ad20) Create stream I0205 13:19:56.375959 8 log.go:172] (0xc001be14a0) (0xc00223ad20) Stream added, broadcasting: 1 I0205 13:19:56.384516 8 log.go:172] (0xc001be14a0) Reply frame received for 1 I0205 13:19:56.384579 8 log.go:172] (0xc001be14a0) (0xc00223adc0) Create stream I0205 13:19:56.384589 8 log.go:172] (0xc001be14a0) (0xc00223adc0) Stream added, broadcasting: 3 I0205 13:19:56.386985 8 log.go:172] (0xc001be14a0) Reply frame received for 3 I0205 13:19:56.387039 8 log.go:172] (0xc001be14a0) (0xc0024f9c20) Create stream I0205 13:19:56.387051 8 log.go:172] (0xc001be14a0) (0xc0024f9c20) Stream added, broadcasting: 5 I0205 13:19:56.389027 8 log.go:172] (0xc001be14a0) Reply frame received for 5 I0205 13:19:56.553737 8 log.go:172] (0xc001be14a0) Data frame received for 3 I0205 13:19:56.553911 8 log.go:172] (0xc00223adc0) (3) Data frame handling I0205 13:19:56.553944 8 log.go:172] (0xc00223adc0) (3) Data frame sent I0205 13:19:56.921239 8 log.go:172] (0xc001be14a0) (0xc00223adc0) Stream removed, broadcasting: 3 I0205 13:19:56.921322 8 log.go:172] (0xc001be14a0) Data frame received for 1 I0205 13:19:56.921340 8 log.go:172] (0xc00223ad20) (1) Data frame handling I0205 13:19:56.921351 8 log.go:172] (0xc00223ad20) (1) Data frame sent I0205 13:19:56.921361 8 log.go:172] (0xc001be14a0) (0xc00223ad20) Stream removed, broadcasting: 1 I0205 13:19:56.921389 8 log.go:172] (0xc001be14a0) (0xc0024f9c20) Stream removed, 
broadcasting: 5 I0205 13:19:56.921448 8 log.go:172] (0xc001be14a0) Go away received I0205 13:19:56.921476 8 log.go:172] (0xc001be14a0) (0xc00223ad20) Stream removed, broadcasting: 1 I0205 13:19:56.921487 8 log.go:172] (0xc001be14a0) (0xc00223adc0) Stream removed, broadcasting: 3 I0205 13:19:56.921500 8 log.go:172] (0xc001be14a0) (0xc0024f9c20) Stream removed, broadcasting: 5 Feb 5 13:19:56.921: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Feb 5 13:19:56.921: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9533 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 5 13:19:56.921: INFO: >>> kubeConfig: /root/.kube/config I0205 13:19:56.982947 8 log.go:172] (0xc001693970) (0xc001bc2dc0) Create stream I0205 13:19:56.983008 8 log.go:172] (0xc001693970) (0xc001bc2dc0) Stream added, broadcasting: 1 I0205 13:19:56.991908 8 log.go:172] (0xc001693970) Reply frame received for 1 I0205 13:19:56.991934 8 log.go:172] (0xc001693970) (0xc001bc2e60) Create stream I0205 13:19:56.991939 8 log.go:172] (0xc001693970) (0xc001bc2e60) Stream added, broadcasting: 3 I0205 13:19:56.996107 8 log.go:172] (0xc001693970) Reply frame received for 3 I0205 13:19:56.996135 8 log.go:172] (0xc001693970) (0xc001bc2fa0) Create stream I0205 13:19:56.996140 8 log.go:172] (0xc001693970) (0xc001bc2fa0) Stream added, broadcasting: 5 I0205 13:19:56.997789 8 log.go:172] (0xc001693970) Reply frame received for 5 I0205 13:19:57.088358 8 log.go:172] (0xc001693970) Data frame received for 3 I0205 13:19:57.088413 8 log.go:172] (0xc001bc2e60) (3) Data frame handling I0205 13:19:57.088456 8 log.go:172] (0xc001bc2e60) (3) Data frame sent I0205 13:19:57.190998 8 log.go:172] (0xc001693970) Data frame received for 1 I0205 13:19:57.191063 8 log.go:172] (0xc001693970) (0xc001bc2e60) Stream removed, broadcasting: 3 I0205 13:19:57.191094 8 log.go:172] 
(0xc001bc2dc0) (1) Data frame handling I0205 13:19:57.191101 8 log.go:172] (0xc001bc2dc0) (1) Data frame sent I0205 13:19:57.191106 8 log.go:172] (0xc001693970) (0xc001bc2dc0) Stream removed, broadcasting: 1 I0205 13:19:57.191132 8 log.go:172] (0xc001693970) (0xc001bc2fa0) Stream removed, broadcasting: 5 I0205 13:19:57.191190 8 log.go:172] (0xc001693970) Go away received I0205 13:19:57.191209 8 log.go:172] (0xc001693970) (0xc001bc2dc0) Stream removed, broadcasting: 1 I0205 13:19:57.191226 8 log.go:172] (0xc001693970) (0xc001bc2e60) Stream removed, broadcasting: 3 I0205 13:19:57.191234 8 log.go:172] (0xc001693970) (0xc001bc2fa0) Stream removed, broadcasting: 5 Feb 5 13:19:57.191: INFO: Exec stderr: "" Feb 5 13:19:57.191: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9533 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 5 13:19:57.191: INFO: >>> kubeConfig: /root/.kube/config I0205 13:19:57.238763 8 log.go:172] (0xc002f0e790) (0xc00223b0e0) Create stream I0205 13:19:57.238808 8 log.go:172] (0xc002f0e790) (0xc00223b0e0) Stream added, broadcasting: 1 I0205 13:19:57.244190 8 log.go:172] (0xc002f0e790) Reply frame received for 1 I0205 13:19:57.244230 8 log.go:172] (0xc002f0e790) (0xc001df8c80) Create stream I0205 13:19:57.244243 8 log.go:172] (0xc002f0e790) (0xc001df8c80) Stream added, broadcasting: 3 I0205 13:19:57.245982 8 log.go:172] (0xc002f0e790) Reply frame received for 3 I0205 13:19:57.246004 8 log.go:172] (0xc002f0e790) (0xc001bc30e0) Create stream I0205 13:19:57.246015 8 log.go:172] (0xc002f0e790) (0xc001bc30e0) Stream added, broadcasting: 5 I0205 13:19:57.248295 8 log.go:172] (0xc002f0e790) Reply frame received for 5 I0205 13:19:57.322105 8 log.go:172] (0xc002f0e790) Data frame received for 3 I0205 13:19:57.322339 8 log.go:172] (0xc001df8c80) (3) Data frame handling I0205 13:19:57.322358 8 log.go:172] (0xc001df8c80) (3) Data frame sent I0205 
13:19:57.420583 8 log.go:172] (0xc002f0e790) (0xc001df8c80) Stream removed, broadcasting: 3 I0205 13:19:57.420841 8 log.go:172] (0xc002f0e790) Data frame received for 1 I0205 13:19:57.420861 8 log.go:172] (0xc00223b0e0) (1) Data frame handling I0205 13:19:57.420879 8 log.go:172] (0xc00223b0e0) (1) Data frame sent I0205 13:19:57.420923 8 log.go:172] (0xc002f0e790) (0xc00223b0e0) Stream removed, broadcasting: 1 I0205 13:19:57.421032 8 log.go:172] (0xc002f0e790) (0xc001bc30e0) Stream removed, broadcasting: 5 I0205 13:19:57.421065 8 log.go:172] (0xc002f0e790) (0xc00223b0e0) Stream removed, broadcasting: 1 I0205 13:19:57.421080 8 log.go:172] (0xc002f0e790) (0xc001df8c80) Stream removed, broadcasting: 3 I0205 13:19:57.421096 8 log.go:172] (0xc002f0e790) (0xc001bc30e0) Stream removed, broadcasting: 5 I0205 13:19:57.421176 8 log.go:172] (0xc002f0e790) Go away received Feb 5 13:19:57.421: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Feb 5 13:19:57.421: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9533 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 5 13:19:57.421: INFO: >>> kubeConfig: /root/.kube/config I0205 13:19:57.472393 8 log.go:172] (0xc0031d2fd0) (0xc001bc3540) Create stream I0205 13:19:57.472501 8 log.go:172] (0xc0031d2fd0) (0xc001bc3540) Stream added, broadcasting: 1 I0205 13:19:57.477208 8 log.go:172] (0xc0031d2fd0) Reply frame received for 1 I0205 13:19:57.477299 8 log.go:172] (0xc0031d2fd0) (0xc001bc3680) Create stream I0205 13:19:57.477311 8 log.go:172] (0xc0031d2fd0) (0xc001bc3680) Stream added, broadcasting: 3 I0205 13:19:57.478842 8 log.go:172] (0xc0031d2fd0) Reply frame received for 3 I0205 13:19:57.478871 8 log.go:172] (0xc0031d2fd0) (0xc002394140) Create stream I0205 13:19:57.478895 8 log.go:172] (0xc0031d2fd0) (0xc002394140) Stream added, broadcasting: 5 I0205 
13:19:57.482311 8 log.go:172] (0xc0031d2fd0) Reply frame received for 5 I0205 13:19:57.576120 8 log.go:172] (0xc0031d2fd0) Data frame received for 3 I0205 13:19:57.576170 8 log.go:172] (0xc001bc3680) (3) Data frame handling I0205 13:19:57.576189 8 log.go:172] (0xc001bc3680) (3) Data frame sent I0205 13:19:57.681814 8 log.go:172] (0xc0031d2fd0) Data frame received for 1 I0205 13:19:57.681872 8 log.go:172] (0xc001bc3540) (1) Data frame handling I0205 13:19:57.681887 8 log.go:172] (0xc001bc3540) (1) Data frame sent I0205 13:19:57.681905 8 log.go:172] (0xc0031d2fd0) (0xc001bc3540) Stream removed, broadcasting: 1 I0205 13:19:57.681981 8 log.go:172] (0xc0031d2fd0) (0xc001bc3680) Stream removed, broadcasting: 3 I0205 13:19:57.682039 8 log.go:172] (0xc0031d2fd0) (0xc002394140) Stream removed, broadcasting: 5 I0205 13:19:57.682097 8 log.go:172] (0xc0031d2fd0) Go away received I0205 13:19:57.682175 8 log.go:172] (0xc0031d2fd0) (0xc001bc3540) Stream removed, broadcasting: 1 I0205 13:19:57.682235 8 log.go:172] (0xc0031d2fd0) (0xc001bc3680) Stream removed, broadcasting: 3 I0205 13:19:57.682253 8 log.go:172] (0xc0031d2fd0) (0xc002394140) Stream removed, broadcasting: 5 Feb 5 13:19:57.682: INFO: Exec stderr: "" Feb 5 13:19:57.682: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9533 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 5 13:19:57.682: INFO: >>> kubeConfig: /root/.kube/config I0205 13:19:57.746410 8 log.go:172] (0xc002e29970) (0xc0024f9e00) Create stream I0205 13:19:57.746468 8 log.go:172] (0xc002e29970) (0xc0024f9e00) Stream added, broadcasting: 1 I0205 13:19:57.762103 8 log.go:172] (0xc002e29970) Reply frame received for 1 I0205 13:19:57.762186 8 log.go:172] (0xc002e29970) (0xc00232e0a0) Create stream I0205 13:19:57.762204 8 log.go:172] (0xc002e29970) (0xc00232e0a0) Stream added, broadcasting: 3 I0205 13:19:57.765602 8 log.go:172] (0xc002e29970) 
Reply frame received for 3 I0205 13:19:57.765693 8 log.go:172] (0xc002e29970) (0xc0000febe0) Create stream I0205 13:19:57.765706 8 log.go:172] (0xc002e29970) (0xc0000febe0) Stream added, broadcasting: 5 I0205 13:19:57.767540 8 log.go:172] (0xc002e29970) Reply frame received for 5 I0205 13:19:57.860949 8 log.go:172] (0xc002e29970) Data frame received for 3 I0205 13:19:57.861021 8 log.go:172] (0xc00232e0a0) (3) Data frame handling I0205 13:19:57.861045 8 log.go:172] (0xc00232e0a0) (3) Data frame sent I0205 13:19:57.957303 8 log.go:172] (0xc002e29970) (0xc00232e0a0) Stream removed, broadcasting: 3 I0205 13:19:57.957416 8 log.go:172] (0xc002e29970) Data frame received for 1 I0205 13:19:57.957430 8 log.go:172] (0xc002e29970) (0xc0000febe0) Stream removed, broadcasting: 5 I0205 13:19:57.957472 8 log.go:172] (0xc0024f9e00) (1) Data frame handling I0205 13:19:57.957492 8 log.go:172] (0xc0024f9e00) (1) Data frame sent I0205 13:19:57.957501 8 log.go:172] (0xc002e29970) (0xc0024f9e00) Stream removed, broadcasting: 1 I0205 13:19:57.957520 8 log.go:172] (0xc002e29970) Go away received I0205 13:19:57.957731 8 log.go:172] (0xc002e29970) (0xc0024f9e00) Stream removed, broadcasting: 1 I0205 13:19:57.957764 8 log.go:172] (0xc002e29970) (0xc00232e0a0) Stream removed, broadcasting: 3 I0205 13:19:57.957775 8 log.go:172] (0xc002e29970) (0xc0000febe0) Stream removed, broadcasting: 5 Feb 5 13:19:57.957: INFO: Exec stderr: "" Feb 5 13:19:57.957: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9533 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 5 13:19:57.957: INFO: >>> kubeConfig: /root/.kube/config I0205 13:19:58.008302 8 log.go:172] (0xc0005b1810) (0xc001ae0280) Create stream I0205 13:19:58.008347 8 log.go:172] (0xc0005b1810) (0xc001ae0280) Stream added, broadcasting: 1 I0205 13:19:58.016108 8 log.go:172] (0xc0005b1810) Reply frame received for 1 I0205 13:19:58.016139 8 
log.go:172] (0xc0005b1810) (0xc0001c80a0) Create stream I0205 13:19:58.016148 8 log.go:172] (0xc0005b1810) (0xc0001c80a0) Stream added, broadcasting: 3 I0205 13:19:58.017550 8 log.go:172] (0xc0005b1810) Reply frame received for 3 I0205 13:19:58.017610 8 log.go:172] (0xc0005b1810) (0xc0000fed20) Create stream I0205 13:19:58.017622 8 log.go:172] (0xc0005b1810) (0xc0000fed20) Stream added, broadcasting: 5 I0205 13:19:58.018667 8 log.go:172] (0xc0005b1810) Reply frame received for 5 I0205 13:19:58.115321 8 log.go:172] (0xc0005b1810) Data frame received for 3 I0205 13:19:58.115358 8 log.go:172] (0xc0001c80a0) (3) Data frame handling I0205 13:19:58.115384 8 log.go:172] (0xc0001c80a0) (3) Data frame sent I0205 13:19:58.226725 8 log.go:172] (0xc0005b1810) Data frame received for 1 I0205 13:19:58.226798 8 log.go:172] (0xc0005b1810) (0xc0001c80a0) Stream removed, broadcasting: 3 I0205 13:19:58.226904 8 log.go:172] (0xc001ae0280) (1) Data frame handling I0205 13:19:58.226925 8 log.go:172] (0xc001ae0280) (1) Data frame sent I0205 13:19:58.226980 8 log.go:172] (0xc0005b1810) (0xc0000fed20) Stream removed, broadcasting: 5 I0205 13:19:58.227010 8 log.go:172] (0xc0005b1810) (0xc001ae0280) Stream removed, broadcasting: 1 I0205 13:19:58.227023 8 log.go:172] (0xc0005b1810) Go away received I0205 13:19:58.227126 8 log.go:172] (0xc0005b1810) (0xc001ae0280) Stream removed, broadcasting: 1 I0205 13:19:58.227143 8 log.go:172] (0xc0005b1810) (0xc0001c80a0) Stream removed, broadcasting: 3 I0205 13:19:58.227155 8 log.go:172] (0xc0005b1810) (0xc0000fed20) Stream removed, broadcasting: 5 Feb 5 13:19:58.227: INFO: Exec stderr: "" Feb 5 13:19:58.227: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9533 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 5 13:19:58.227: INFO: >>> kubeConfig: /root/.kube/config I0205 13:19:58.277573 8 log.go:172] (0xc002f0e420) (0xc001ae0780) 
Create stream I0205 13:19:58.277610 8 log.go:172] (0xc002f0e420) (0xc001ae0780) Stream added, broadcasting: 1 I0205 13:19:58.282965 8 log.go:172] (0xc002f0e420) Reply frame received for 1 I0205 13:19:58.282982 8 log.go:172] (0xc002f0e420) (0xc0001c8280) Create stream I0205 13:19:58.282989 8 log.go:172] (0xc002f0e420) (0xc0001c8280) Stream added, broadcasting: 3 I0205 13:19:58.283787 8 log.go:172] (0xc002f0e420) Reply frame received for 3 I0205 13:19:58.283845 8 log.go:172] (0xc002f0e420) (0xc0000fef00) Create stream I0205 13:19:58.283854 8 log.go:172] (0xc002f0e420) (0xc0000fef00) Stream added, broadcasting: 5 I0205 13:19:58.287199 8 log.go:172] (0xc002f0e420) Reply frame received for 5 I0205 13:19:58.387765 8 log.go:172] (0xc002f0e420) Data frame received for 3 I0205 13:19:58.387876 8 log.go:172] (0xc0001c8280) (3) Data frame handling I0205 13:19:58.387906 8 log.go:172] (0xc0001c8280) (3) Data frame sent I0205 13:19:58.564324 8 log.go:172] (0xc002f0e420) (0xc0001c8280) Stream removed, broadcasting: 3 I0205 13:19:58.564516 8 log.go:172] (0xc002f0e420) Data frame received for 1 I0205 13:19:58.564543 8 log.go:172] (0xc002f0e420) (0xc0000fef00) Stream removed, broadcasting: 5 I0205 13:19:58.564588 8 log.go:172] (0xc001ae0780) (1) Data frame handling I0205 13:19:58.564601 8 log.go:172] (0xc001ae0780) (1) Data frame sent I0205 13:19:58.564612 8 log.go:172] (0xc002f0e420) (0xc001ae0780) Stream removed, broadcasting: 1 I0205 13:19:58.564627 8 log.go:172] (0xc002f0e420) Go away received I0205 13:19:58.565110 8 log.go:172] (0xc002f0e420) (0xc001ae0780) Stream removed, broadcasting: 1 I0205 13:19:58.565127 8 log.go:172] (0xc002f0e420) (0xc0001c8280) Stream removed, broadcasting: 3 I0205 13:19:58.565212 8 log.go:172] (0xc002f0e420) (0xc0000fef00) Stream removed, broadcasting: 5 Feb 5 13:19:58.565: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 5 
13:19:58.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-9533" for this suite. Feb 5 13:20:42.630: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 5 13:20:42.773: INFO: namespace e2e-kubelet-etc-hosts-9533 deletion completed in 44.18967181s • [SLOW TEST:67.974 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 5 13:20:42.773: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller Feb 5 13:20:42.831: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-758' Feb 5 13:20:43.195: INFO: stderr: "" Feb 5 13:20:43.195: INFO: stdout: "replicationcontroller/update-demo-nautilus 
created\n" STEP: waiting for all containers in name=update-demo pods to come up. Feb 5 13:20:43.195: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-758' Feb 5 13:20:43.487: INFO: stderr: "" Feb 5 13:20:43.487: INFO: stdout: "update-demo-nautilus-2wwxj update-demo-nautilus-ztm6f " Feb 5 13:20:43.488: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2wwxj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-758' Feb 5 13:20:43.606: INFO: stderr: "" Feb 5 13:20:43.607: INFO: stdout: "" Feb 5 13:20:43.607: INFO: update-demo-nautilus-2wwxj is created but not running Feb 5 13:20:48.607: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-758' Feb 5 13:20:49.385: INFO: stderr: "" Feb 5 13:20:49.385: INFO: stdout: "update-demo-nautilus-2wwxj update-demo-nautilus-ztm6f " Feb 5 13:20:49.386: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2wwxj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-758' Feb 5 13:20:49.734: INFO: stderr: "" Feb 5 13:20:49.734: INFO: stdout: "" Feb 5 13:20:49.734: INFO: update-demo-nautilus-2wwxj is created but not running Feb 5 13:20:54.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-758' Feb 5 13:20:54.882: INFO: stderr: "" Feb 5 13:20:54.882: INFO: stdout: "update-demo-nautilus-2wwxj update-demo-nautilus-ztm6f " Feb 5 13:20:54.883: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2wwxj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-758' Feb 5 13:20:55.012: INFO: stderr: "" Feb 5 13:20:55.012: INFO: stdout: "true" Feb 5 13:20:55.013: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2wwxj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-758' Feb 5 13:20:55.148: INFO: stderr: "" Feb 5 13:20:55.148: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 5 13:20:55.148: INFO: validating pod update-demo-nautilus-2wwxj Feb 5 13:20:55.164: INFO: got data: { "image": "nautilus.jpg" } Feb 5 13:20:55.164: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 5 13:20:55.164: INFO: update-demo-nautilus-2wwxj is verified up and running Feb 5 13:20:55.164: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ztm6f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-758' Feb 5 13:20:55.247: INFO: stderr: "" Feb 5 13:20:55.247: INFO: stdout: "true" Feb 5 13:20:55.247: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ztm6f -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-758' Feb 5 13:20:55.322: INFO: stderr: "" Feb 5 13:20:55.322: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 5 13:20:55.322: INFO: validating pod update-demo-nautilus-ztm6f Feb 5 13:20:55.333: INFO: got data: { "image": "nautilus.jpg" } Feb 5 13:20:55.333: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 5 13:20:55.333: INFO: update-demo-nautilus-ztm6f is verified up and running STEP: scaling down the replication controller Feb 5 13:20:55.334: INFO: scanned /root for discovery docs: Feb 5 13:20:55.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-758' Feb 5 13:20:56.481: INFO: stderr: "" Feb 5 13:20:56.481: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Feb 5 13:20:56.482: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-758' Feb 5 13:20:56.604: INFO: stderr: "" Feb 5 13:20:56.604: INFO: stdout: "update-demo-nautilus-2wwxj update-demo-nautilus-ztm6f " STEP: Replicas for name=update-demo: expected=1 actual=2 Feb 5 13:21:01.604: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-758' Feb 5 13:21:01.752: INFO: stderr: "" Feb 5 13:21:01.752: INFO: stdout: "update-demo-nautilus-2wwxj update-demo-nautilus-ztm6f " STEP: Replicas for name=update-demo: expected=1 actual=2 Feb 5 13:21:06.752: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-758' Feb 5 13:21:06.846: INFO: stderr: "" Feb 5 13:21:06.846: INFO: stdout: "update-demo-nautilus-ztm6f " Feb 5 13:21:06.846: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ztm6f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-758' Feb 5 13:21:06.951: INFO: stderr: "" Feb 5 13:21:06.951: INFO: stdout: "true" Feb 5 13:21:06.951: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ztm6f -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-758' Feb 5 13:21:07.060: INFO: stderr: "" Feb 5 13:21:07.060: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 5 13:21:07.060: INFO: validating pod update-demo-nautilus-ztm6f Feb 5 13:21:07.069: INFO: got data: { "image": "nautilus.jpg" } Feb 5 13:21:07.069: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 5 13:21:07.069: INFO: update-demo-nautilus-ztm6f is verified up and running STEP: scaling up the replication controller Feb 5 13:21:07.072: INFO: scanned /root for discovery docs: Feb 5 13:21:07.072: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-758' Feb 5 13:21:08.173: INFO: stderr: "" Feb 5 13:21:08.173: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Feb 5 13:21:08.173: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-758' Feb 5 13:21:08.282: INFO: stderr: "" Feb 5 13:21:08.282: INFO: stdout: "update-demo-nautilus-jxwrr update-demo-nautilus-ztm6f " Feb 5 13:21:08.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jxwrr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-758' Feb 5 13:21:08.398: INFO: stderr: "" Feb 5 13:21:08.398: INFO: stdout: "" Feb 5 13:21:08.398: INFO: update-demo-nautilus-jxwrr is created but not running Feb 5 13:21:13.398: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-758' Feb 5 13:21:13.580: INFO: stderr: "" Feb 5 13:21:13.580: INFO: stdout: "update-demo-nautilus-jxwrr update-demo-nautilus-ztm6f " Feb 5 13:21:13.581: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jxwrr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-758' Feb 5 13:21:13.741: INFO: stderr: "" Feb 5 13:21:13.741: INFO: stdout: "" Feb 5 13:21:13.741: INFO: update-demo-nautilus-jxwrr is created but not running Feb 5 13:21:18.741: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-758' Feb 5 13:21:18.905: INFO: stderr: "" Feb 5 13:21:18.905: INFO: stdout: "update-demo-nautilus-jxwrr update-demo-nautilus-ztm6f " Feb 5 13:21:18.905: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jxwrr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-758' Feb 5 13:21:19.075: INFO: stderr: "" Feb 5 13:21:19.075: INFO: stdout: "true" Feb 5 13:21:19.075: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jxwrr -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-758' Feb 5 13:21:19.147: INFO: stderr: "" Feb 5 13:21:19.147: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 5 13:21:19.147: INFO: validating pod update-demo-nautilus-jxwrr Feb 5 13:21:19.156: INFO: got data: { "image": "nautilus.jpg" } Feb 5 13:21:19.156: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 5 13:21:19.156: INFO: update-demo-nautilus-jxwrr is verified up and running Feb 5 13:21:19.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ztm6f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-758' Feb 5 13:21:19.251: INFO: stderr: "" Feb 5 13:21:19.251: INFO: stdout: "true" Feb 5 13:21:19.251: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ztm6f -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-758' Feb 5 13:21:19.321: INFO: stderr: "" Feb 5 13:21:19.321: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 5 13:21:19.321: INFO: validating pod update-demo-nautilus-ztm6f Feb 5 13:21:19.324: INFO: got data: { "image": "nautilus.jpg" } Feb 5 13:21:19.324: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Feb 5 13:21:19.324: INFO: update-demo-nautilus-ztm6f is verified up and running STEP: using delete to clean up resources Feb 5 13:21:19.324: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-758' Feb 5 13:21:19.434: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 5 13:21:19.434: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Feb 5 13:21:19.434: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-758' Feb 5 13:21:19.568: INFO: stderr: "No resources found.\n" Feb 5 13:21:19.568: INFO: stdout: "" Feb 5 13:21:19.568: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-758 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Feb 5 13:21:19.719: INFO: stderr: "" Feb 5 13:21:19.719: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 5 13:21:19.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-758" for this suite. 
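The repeated go-template probes above encode a simple readiness predicate: a pod counts as running once `status.containerStatuses` contains an entry named `update-demo` whose `state` map carries a `running` key (an empty template result means "created but not running"). A minimal Python sketch of that predicate, operating on pod dicts shaped like the fields the template reads (the helper name and sample pods are illustrative, not part of the test framework):

```python
def is_container_running(pod: dict, container_name: str = "update-demo") -> bool:
    """Mirror the go-template check: True iff a container with the given
    name reports a 'running' state in status.containerStatuses."""
    for status in pod.get("status", {}).get("containerStatuses", []):
        if status.get("name") == container_name and "running" in status.get("state", {}):
            return True
    return False

# A pod that is created but not yet running (empty template output in the log):
pending = {"status": {"containerStatuses": [
    {"name": "update-demo", "state": {"waiting": {"reason": "ContainerCreating"}}}]}}
# The same pod once the container has started ("true" in the log):
running = {"status": {"containerStatuses": [
    {"name": "update-demo", "state": {"running": {"startedAt": "2020-02-05T13:21:18Z"}}}]}}

print(is_container_running(pending))  # False
print(is_container_running(running))  # True
```

The test polls this predicate on a fixed interval (the five-second gaps between probes above) until it returns true or the timeout expires.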
Feb 5 13:21:41.784: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 5 13:21:41.888: INFO: namespace kubectl-758 deletion completed in 22.138511873s • [SLOW TEST:59.115 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 5 13:21:41.889: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test env composition Feb 5 13:21:42.028: INFO: Waiting up to 5m0s for pod "var-expansion-0c1779cd-0062-4d0a-8920-bef26d03c40a" in namespace "var-expansion-9361" to be "success or failure" Feb 5 13:21:42.041: INFO: Pod "var-expansion-0c1779cd-0062-4d0a-8920-bef26d03c40a": Phase="Pending", Reason="", readiness=false. Elapsed: 12.613424ms Feb 5 13:21:44.055: INFO: Pod "var-expansion-0c1779cd-0062-4d0a-8920-bef26d03c40a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.026713368s Feb 5 13:21:46.064: INFO: Pod "var-expansion-0c1779cd-0062-4d0a-8920-bef26d03c40a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035550918s Feb 5 13:21:48.072: INFO: Pod "var-expansion-0c1779cd-0062-4d0a-8920-bef26d03c40a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04385424s Feb 5 13:21:50.086: INFO: Pod "var-expansion-0c1779cd-0062-4d0a-8920-bef26d03c40a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.057683595s Feb 5 13:21:52.098: INFO: Pod "var-expansion-0c1779cd-0062-4d0a-8920-bef26d03c40a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.069759877s STEP: Saw pod success Feb 5 13:21:52.098: INFO: Pod "var-expansion-0c1779cd-0062-4d0a-8920-bef26d03c40a" satisfied condition "success or failure" Feb 5 13:21:52.101: INFO: Trying to get logs from node iruya-node pod var-expansion-0c1779cd-0062-4d0a-8920-bef26d03c40a container dapi-container: STEP: delete the pod Feb 5 13:21:52.248: INFO: Waiting for pod var-expansion-0c1779cd-0062-4d0a-8920-bef26d03c40a to disappear Feb 5 13:21:52.264: INFO: Pod var-expansion-0c1779cd-0062-4d0a-8920-bef26d03c40a no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 5 13:21:52.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-9361" for this suite. 
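The Variable Expansion test above verifies that Kubernetes composes env vars: a `$(VAR)` reference in an env value is replaced with the value of a previously defined variable, unresolvable references are left verbatim, and `$$(VAR)` escapes to a literal `$(VAR)`. A rough sketch of that expansion rule (the function and sample values are illustrative; the real implementation lives in Kubernetes' expansion package):

```python
import re

def expand(value: str, env: dict) -> str:
    """Sketch of Kubernetes $(VAR) expansion for env values: known
    references are substituted, unknown ones stay literal, and a
    doubled dollar sign escapes the reference."""
    def sub(m):
        if m.group(1) is not None:
            return "$(" + m.group(1) + ")"   # $$(X) -> literal $(X)
        return env.get(m.group(2), m.group(0))  # unknown refs stay as-is
    return re.sub(r"\$\$\((\w+)\)|\$\((\w+)\)", sub, value)

env = {"FOO": "foo", "BAR": "bar"}
print(expand("$(FOO);;$(BAR)", env))   # foo;;bar
print(expand("$(MISSING)", env))       # $(MISSING)
print(expand("$$(FOO)", env))          # $(FOO)
```

Order matters in the real feature: a reference can only resolve to a variable defined earlier in the same container's `env` list.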
Feb 5 13:21:58.346: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 5 13:21:58.503: INFO: namespace var-expansion-9361 deletion completed in 6.199528877s • [SLOW TEST:16.614 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 5 13:21:58.503: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Feb 5 13:21:58.663: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3a821b43-7c93-4d0f-9c2a-cfb0789b66ef" in namespace "projected-4959" to be "success or failure" Feb 5 13:21:58.670: INFO: Pod "downwardapi-volume-3a821b43-7c93-4d0f-9c2a-cfb0789b66ef": Phase="Pending", Reason="", readiness=false. 
Elapsed: 7.057038ms Feb 5 13:22:00.683: INFO: Pod "downwardapi-volume-3a821b43-7c93-4d0f-9c2a-cfb0789b66ef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020206858s Feb 5 13:22:02.696: INFO: Pod "downwardapi-volume-3a821b43-7c93-4d0f-9c2a-cfb0789b66ef": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03292045s Feb 5 13:22:04.701: INFO: Pod "downwardapi-volume-3a821b43-7c93-4d0f-9c2a-cfb0789b66ef": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037771841s Feb 5 13:22:06.713: INFO: Pod "downwardapi-volume-3a821b43-7c93-4d0f-9c2a-cfb0789b66ef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.049958844s STEP: Saw pod success Feb 5 13:22:06.713: INFO: Pod "downwardapi-volume-3a821b43-7c93-4d0f-9c2a-cfb0789b66ef" satisfied condition "success or failure" Feb 5 13:22:06.717: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-3a821b43-7c93-4d0f-9c2a-cfb0789b66ef container client-container: STEP: delete the pod Feb 5 13:22:06.837: INFO: Waiting for pod downwardapi-volume-3a821b43-7c93-4d0f-9c2a-cfb0789b66ef to disappear Feb 5 13:22:06.848: INFO: Pod downwardapi-volume-3a821b43-7c93-4d0f-9c2a-cfb0789b66ef no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 5 13:22:06.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4959" for this suite. 
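The downward API test above projects the container's `limits.memory` into a volume file, and the struct dumps later in this run show such limits as plain byte counts (e.g. 52428800, i.e. 50Mi). Kubernetes quantities use SI and binary suffixes; a minimal sketch of decoding the subset seen in this log (the real parser is apimachinery's `resource.Quantity`, which covers more forms):

```python
# Suffix multipliers for the quantity notation appearing in this run;
# binary suffixes (Ki/Mi/Gi) are powers of 1024, decimal (k/M/G) powers
# of 1000, and "m" is milli (used for CPU, e.g. 100m = 0.1 core).
SUFFIXES = {
    "Ki": 1024, "Mi": 1024**2, "Gi": 1024**3,
    "k": 1000, "M": 1000**2, "G": 1000**3,
    "m": 0.001,
}

def parse_quantity(s: str) -> float:
    """Decode a quantity string; longest suffix wins, no suffix = plain number."""
    for suffix in sorted(SUFFIXES, key=len, reverse=True):
        if s.endswith(suffix):
            return float(s[: -len(suffix)]) * SUFFIXES[suffix]
    return float(s)

print(parse_quantity("50Mi"))      # 52428800.0 -- matches the byte count in the dump
print(parse_quantity("100m"))      # 0.1
print(parse_quantity("52428800"))  # 52428800.0
```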
Feb 5 13:22:12.893: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 5 13:22:13.018: INFO: namespace projected-4959 deletion completed in 6.159350135s • [SLOW TEST:14.515 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 5 13:22:13.018: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Feb 5 13:22:13.092: INFO: Creating deployment "test-recreate-deployment" Feb 5 13:22:13.099: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Feb 5 13:22:13.105: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created Feb 5 13:22:15.116: INFO: Waiting deployment "test-recreate-deployment" to complete Feb 5 13:22:15.119: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, 
AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716505733, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716505733, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716505733, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716505733, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 5 13:22:17.134: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716505733, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716505733, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716505733, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716505733, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 5 13:22:19.127: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716505733, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716505733, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716505733, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716505733, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 5 13:22:21.126: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716505733, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716505733, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716505733, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716505733, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 5 13:22:23.128: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Feb 5 13:22:23.134: INFO: Updating deployment test-recreate-deployment Feb 5 13:22:23.134: INFO: Watching deployment 
"test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Feb 5 13:22:23.479: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-584,SelfLink:/apis/apps/v1/namespaces/deployment-584/deployments/test-recreate-deployment,UID:8724fd7c-7744-41fa-96ff-32df14c0bf98,ResourceVersion:23193019,Generation:2,CreationTimestamp:2020-02-05 13:22:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-02-05 13:22:23 +0000 UTC 2020-02-05 13:22:23 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-02-05 13:22:23 +0000 UTC 2020-02-05 13:22:13 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} Feb 5 13:22:23.483: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-584,SelfLink:/apis/apps/v1/namespaces/deployment-584/replicasets/test-recreate-deployment-5c8c9cc69d,UID:00a66c36-9160-4ac7-90fe-168ee85a06ef,ResourceVersion:23193018,Generation:1,CreationTimestamp:2020-02-05 13:22:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 
5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 8724fd7c-7744-41fa-96ff-32df14c0bf98 0xc002abe747 0xc002abe748}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 5 13:22:23.484: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Feb 5 13:22:23.484: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-584,SelfLink:/apis/apps/v1/namespaces/deployment-584/replicasets/test-recreate-deployment-6df85df6b9,UID:082e5d68-7749-4803-9c16-3d92379f7d9f,ResourceVersion:23193008,Generation:2,CreationTimestamp:2020-02-05 13:22:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 8724fd7c-7744-41fa-96ff-32df14c0bf98 0xc002abe817 0xc002abe818}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: 
sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 5 13:22:23.489: INFO: Pod "test-recreate-deployment-5c8c9cc69d-qhbzt" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-qhbzt,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-584,SelfLink:/api/v1/namespaces/deployment-584/pods/test-recreate-deployment-5c8c9cc69d-qhbzt,UID:782c60cd-b03f-4f20-87c9-1f7ec0a33087,ResourceVersion:23193020,Generation:0,CreationTimestamp:2020-02-05 13:22:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d 00a66c36-9160-4ac7-90fe-168ee85a06ef 0xc002f24937 0xc002f24938}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-kc75t {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-kc75t,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-kc75t true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002f249b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002f249d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:22:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:22:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:22:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:22:23 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-05 13:22:23 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 5 13:22:23.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-584" for this suite. 
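The Recreate test above checks the ordering guarantee of `strategy.type: Recreate`: the old ReplicaSet (redis) is scaled to zero, and only then does the new ReplicaSet (nginx) create pods, so the two template versions never run side by side (unlike RollingUpdate, which overlaps them). A toy sketch of that ordering, with hypothetical pod names echoing the ones in the dump:

```python
def recreate_rollout_steps(old_pods, new_pods):
    """Sketch of the Recreate guarantee: every old pod is deleted
    before any new pod is created."""
    steps = [("delete", p) for p in old_pods]
    steps += [("create", p) for p in new_pods]
    return steps

steps = recreate_rollout_steps(
    ["test-recreate-deployment-6df85df6b9-old"],
    ["test-recreate-deployment-5c8c9cc69d-qhbzt"],
)
for action, pod in steps:
    print(action, pod)
```

This is why the log shows the deployment briefly unavailable ("Deployment does not have minimum availability.") during the rollout: Recreate accepts downtime in exchange for never mixing versions.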
Feb 5 13:22:29.576: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 5 13:22:29.684: INFO: namespace deployment-584 deletion completed in 6.187793146s • [SLOW TEST:16.666 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 5 13:22:29.685: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Feb 5 13:22:29.879: INFO: PodSpec: initContainers in spec.initContainers Feb 5 13:23:30.690: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-e16736af-70fd-41ee-bbd1-f9c16c7e4cda", GenerateName:"", Namespace:"init-container-8070", 
SelfLink:"/api/v1/namespaces/init-container-8070/pods/pod-init-e16736af-70fd-41ee-bbd1-f9c16c7e4cda", UID:"e44e4772-73cb-409e-a7dd-a94b456b20fc", ResourceVersion:"23193163", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63716505749, loc:(*time.Location)(0x7ea48a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"879242302"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-c6s4f", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002304a00), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, 
InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-c6s4f", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-c6s4f", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), 
Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-c6s4f", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0024c17f8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-node", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002e06f60), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", 
TolerationSeconds:(*int64)(0xc0024c1880)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0024c18a0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0024c18a8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0024c18ac), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716505751, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716505751, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716505751, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716505749, loc:(*time.Location)(0x7ea48a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.3.65", PodIP:"10.44.0.1", StartTime:(*v1.Time)(0xc002a1de60), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", 
State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00206f340)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00206f3b0)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://68e92c046c3d4f2a108353618bef6fc980a5d79a46307f01e52e4b1a5cbc857e"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002a1dea0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002a1de80), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 5 13:23:30.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8070" for this suite.
Feb 5 13:23:52.730: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 5 13:23:52.869: INFO: namespace init-container-8070 deletion completed in 22.168430494s
• [SLOW TEST:83.184 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should not start app containers if init containers fail on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 5 13:23:52.869: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should do a rolling update of a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the initial replication controller
Feb 5 13:23:52.996: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3613'
Feb 5 13:23:53.309: INFO: stderr: ""
Feb 5 13:23:53.309: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 5 13:23:53.309: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3613'
Feb 5 13:23:53.556: INFO: stderr: ""
Feb 5 13:23:53.556: INFO: stdout: "update-demo-nautilus-62q8h update-demo-nautilus-v9mqz "
Feb 5 13:23:53.556: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-62q8h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3613'
Feb 5 13:23:53.680: INFO: stderr: ""
Feb 5 13:23:53.680: INFO: stdout: ""
Feb 5 13:23:53.680: INFO: update-demo-nautilus-62q8h is created but not running
Feb 5 13:23:58.681: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3613'
Feb 5 13:23:59.024: INFO: stderr: ""
Feb 5 13:23:59.024: INFO: stdout: "update-demo-nautilus-62q8h update-demo-nautilus-v9mqz "
Feb 5 13:23:59.024: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-62q8h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3613'
Feb 5 13:23:59.977: INFO: stderr: ""
Feb 5 13:23:59.977: INFO: stdout: ""
Feb 5 13:23:59.977: INFO: update-demo-nautilus-62q8h is created but not running
Feb 5 13:24:04.977: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3613'
Feb 5 13:24:05.090: INFO: stderr: ""
Feb 5 13:24:05.090: INFO: stdout: "update-demo-nautilus-62q8h update-demo-nautilus-v9mqz "
Feb 5 13:24:05.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-62q8h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3613'
Feb 5 13:24:05.178: INFO: stderr: ""
Feb 5 13:24:05.178: INFO: stdout: "true"
Feb 5 13:24:05.178: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-62q8h -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3613'
Feb 5 13:24:05.262: INFO: stderr: ""
Feb 5 13:24:05.262: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 5 13:24:05.262: INFO: validating pod update-demo-nautilus-62q8h
Feb 5 13:24:05.270: INFO: got data: {
  "image": "nautilus.jpg"
}
Feb 5 13:24:05.270: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 5 13:24:05.270: INFO: update-demo-nautilus-62q8h is verified up and running
Feb 5 13:24:05.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-v9mqz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3613'
Feb 5 13:24:05.358: INFO: stderr: ""
Feb 5 13:24:05.358: INFO: stdout: "true"
Feb 5 13:24:05.359: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-v9mqz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3613'
Feb 5 13:24:05.455: INFO: stderr: ""
Feb 5 13:24:05.455: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 5 13:24:05.455: INFO: validating pod update-demo-nautilus-v9mqz
Feb 5 13:24:05.477: INFO: got data: {
  "image": "nautilus.jpg"
}
Feb 5 13:24:05.477: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 5 13:24:05.477: INFO: update-demo-nautilus-v9mqz is verified up and running
STEP: rolling-update to new replication controller
Feb 5 13:24:05.480: INFO: scanned /root for discovery docs:
Feb 5 13:24:05.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-3613'
Feb 5 13:24:35.480: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Feb 5 13:24:35.481: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 5 13:24:35.481: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3613'
Feb 5 13:24:35.646: INFO: stderr: ""
Feb 5 13:24:35.646: INFO: stdout: "update-demo-kitten-n4d7r update-demo-kitten-tvvmx update-demo-nautilus-62q8h "
STEP: Replicas for name=update-demo: expected=2 actual=3
Feb 5 13:24:40.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3613'
Feb 5 13:24:40.793: INFO: stderr: ""
Feb 5 13:24:40.793: INFO: stdout: "update-demo-kitten-n4d7r update-demo-kitten-tvvmx "
Feb 5 13:24:40.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-n4d7r -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3613'
Feb 5 13:24:40.890: INFO: stderr: ""
Feb 5 13:24:40.890: INFO: stdout: "true"
Feb 5 13:24:40.890: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-n4d7r -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3613'
Feb 5 13:24:40.984: INFO: stderr: ""
Feb 5 13:24:40.984: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Feb 5 13:24:40.984: INFO: validating pod update-demo-kitten-n4d7r
Feb 5 13:24:41.025: INFO: got data: {
  "image": "kitten.jpg"
}
Feb 5 13:24:41.025: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Feb 5 13:24:41.025: INFO: update-demo-kitten-n4d7r is verified up and running
Feb 5 13:24:41.025: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-tvvmx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3613'
Feb 5 13:24:41.106: INFO: stderr: ""
Feb 5 13:24:41.106: INFO: stdout: "true"
Feb 5 13:24:41.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-tvvmx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3613'
Feb 5 13:24:41.194: INFO: stderr: ""
Feb 5 13:24:41.194: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Feb 5 13:24:41.194: INFO: validating pod update-demo-kitten-tvvmx
Feb 5 13:24:41.218: INFO: got data: {
  "image": "kitten.jpg"
}
Feb 5 13:24:41.218: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Feb 5 13:24:41.218: INFO: update-demo-kitten-tvvmx is verified up and running
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 5 13:24:41.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3613" for this suite.
Feb 5 13:25:05.237: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 5 13:25:05.404: INFO: namespace kubectl-3613 deletion completed in 24.182698809s
• [SLOW TEST:72.535 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should do a rolling update of a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 5 13:25:05.405: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should retry creating failed daemon pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Feb 5 13:25:05.574: INFO: Number of nodes with available pods: 0
Feb 5 13:25:05.574: INFO: Node iruya-node is running more than one daemon pod
Feb 5 13:25:07.098: INFO: Number of nodes with available pods: 0
Feb 5 13:25:07.098: INFO: Node iruya-node is running more than one daemon pod
Feb 5 13:25:07.634: INFO: Number of nodes with available pods: 0
Feb 5 13:25:07.634: INFO: Node iruya-node is running more than one daemon pod
Feb 5 13:25:10.048: INFO: Number of nodes with available pods: 0
Feb 5 13:25:10.048: INFO: Node iruya-node is running more than one daemon pod
Feb 5 13:25:10.631: INFO: Number of nodes with available pods: 0
Feb 5 13:25:10.631: INFO: Node iruya-node is running more than one daemon pod
Feb 5 13:25:11.589: INFO: Number of nodes with available pods: 0
Feb 5 13:25:11.589: INFO: Node iruya-node is running more than one daemon pod
Feb 5 13:25:13.172: INFO: Number of nodes with available pods: 0
Feb 5 13:25:13.172: INFO: Node iruya-node is running more than one daemon pod
Feb 5 13:25:13.904: INFO: Number of nodes with available pods: 0
Feb 5 13:25:13.904: INFO: Node iruya-node is running more than one daemon pod
Feb 5 13:25:14.592: INFO: Number of nodes with available pods: 0
Feb 5 13:25:14.592: INFO: Node iruya-node is running more than one daemon pod
Feb 5 13:25:15.860: INFO: Number of nodes with available pods: 0
Feb 5 13:25:15.860: INFO: Node iruya-node is running more than one daemon pod
Feb 5 13:25:16.601: INFO: Number of nodes with available pods: 1
Feb 5 13:25:16.602: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 5 13:25:17.596: INFO: Number of nodes with available pods: 1
Feb 5 13:25:17.596: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 5 13:25:18.622: INFO: Number of nodes with available pods: 2
Feb 5 13:25:18.622: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Feb 5 13:25:18.733: INFO: Number of nodes with available pods: 2
Feb 5 13:25:18.733: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-308, will wait for the garbage collector to delete the pods
Feb 5 13:25:19.826: INFO: Deleting DaemonSet.extensions daemon-set took: 15.285902ms
Feb 5 13:25:20.126: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.398338ms
Feb 5 13:25:26.036: INFO: Number of nodes with available pods: 0
Feb 5 13:25:26.036: INFO: Number of running nodes: 0, number of available pods: 0
Feb 5 13:25:26.038: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-308/daemonsets","resourceVersion":"23193513"},"items":null}
Feb 5 13:25:26.040: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-308/pods","resourceVersion":"23193513"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 5 13:25:26.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-308" for this suite.
Feb 5 13:25:32.101: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 5 13:25:32.192: INFO: namespace daemonsets-308 deletion completed in 6.140894659s
• [SLOW TEST:26.787 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should retry creating failed daemon pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 5 13:25:32.192: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 5 13:26:19.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8890" for this suite.
Feb 5 13:26:26.030: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 5 13:26:26.126: INFO: namespace container-runtime-8890 deletion completed in 6.143014132s
• [SLOW TEST:53.934 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
blackbox test
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
when starting a container that exits
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
should run with the expected status [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-network] Services should provide secure master service [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 5 13:26:26.126: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should provide secure master service [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 5 13:26:26.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3246" for this suite.
Feb 5 13:26:32.285: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 5 13:26:32.372: INFO: namespace services-3246 deletion completed in 6.11365124s
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92
• [SLOW TEST:6.246 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide secure master service [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 5 13:26:32.372: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 5 13:26:42.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1342" for this suite.
Feb 5 13:27:28.614: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 5 13:27:28.746: INFO: namespace kubelet-test-1342 deletion completed in 46.160466612s
• [SLOW TEST:56.374 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
when scheduling a busybox command in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
should print the output to logs [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 5 13:27:28.747: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name projected-secret-test-051ca6e2-88f0-4c4c-8ba5-d90ec4dca105
STEP: Creating a pod to test consume secrets
Feb 5 13:27:28.908: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4e865d9d-6480-4f7a-91af-1c76f4db5ab8" in namespace "projected-8132" to be "success or failure"
Feb 5 13:27:28.916: INFO: Pod "pod-projected-secrets-4e865d9d-6480-4f7a-91af-1c76f4db5ab8": Phase="Pending", Reason="", readiness=false. Elapsed: 7.58871ms
Feb 5 13:27:30.924: INFO: Pod "pod-projected-secrets-4e865d9d-6480-4f7a-91af-1c76f4db5ab8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015817643s
Feb 5 13:27:32.930: INFO: Pod "pod-projected-secrets-4e865d9d-6480-4f7a-91af-1c76f4db5ab8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02172469s
Feb 5 13:27:34.945: INFO: Pod "pod-projected-secrets-4e865d9d-6480-4f7a-91af-1c76f4db5ab8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03620045s
Feb 5 13:27:36.954: INFO: Pod "pod-projected-secrets-4e865d9d-6480-4f7a-91af-1c76f4db5ab8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.045224506s
STEP: Saw pod success
Feb 5 13:27:36.954: INFO: Pod "pod-projected-secrets-4e865d9d-6480-4f7a-91af-1c76f4db5ab8" satisfied condition "success or failure"
Feb 5 13:27:36.958: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-4e865d9d-6480-4f7a-91af-1c76f4db5ab8 container secret-volume-test:
STEP: delete the pod
Feb 5 13:27:36.997: INFO: Waiting for pod pod-projected-secrets-4e865d9d-6480-4f7a-91af-1c76f4db5ab8 to disappear
Feb 5 13:27:37.032: INFO: Pod pod-projected-secrets-4e865d9d-6480-4f7a-91af-1c76f4db5ab8 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 5 13:27:37.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8132" for this suite.
Feb 5 13:27:43.141: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 5 13:27:43.277: INFO: namespace projected-8132 deletion completed in 6.165081157s
• [SLOW TEST:14.531 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 5 13:27:43.278: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a
default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Feb 5 13:27:43.498: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e818584e-48eb-45dc-a8c0-8c84060eee11" in namespace "projected-3930" to be "success or failure" Feb 5 13:27:43.509: INFO: Pod "downwardapi-volume-e818584e-48eb-45dc-a8c0-8c84060eee11": Phase="Pending", Reason="", readiness=false. Elapsed: 11.530403ms Feb 5 13:27:45.570: INFO: Pod "downwardapi-volume-e818584e-48eb-45dc-a8c0-8c84060eee11": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072397768s Feb 5 13:27:47.584: INFO: Pod "downwardapi-volume-e818584e-48eb-45dc-a8c0-8c84060eee11": Phase="Pending", Reason="", readiness=false. Elapsed: 4.085736802s Feb 5 13:27:49.597: INFO: Pod "downwardapi-volume-e818584e-48eb-45dc-a8c0-8c84060eee11": Phase="Pending", Reason="", readiness=false. Elapsed: 6.098748778s Feb 5 13:27:51.606: INFO: Pod "downwardapi-volume-e818584e-48eb-45dc-a8c0-8c84060eee11": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.10841563s STEP: Saw pod success Feb 5 13:27:51.606: INFO: Pod "downwardapi-volume-e818584e-48eb-45dc-a8c0-8c84060eee11" satisfied condition "success or failure" Feb 5 13:27:51.611: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-e818584e-48eb-45dc-a8c0-8c84060eee11 container client-container: STEP: delete the pod Feb 5 13:27:51.744: INFO: Waiting for pod downwardapi-volume-e818584e-48eb-45dc-a8c0-8c84060eee11 to disappear Feb 5 13:27:51.751: INFO: Pod downwardapi-volume-e818584e-48eb-45dc-a8c0-8c84060eee11 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 5 13:27:51.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3930" for this suite. Feb 5 13:27:57.815: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 5 13:27:57.957: INFO: namespace projected-3930 deletion completed in 6.180259099s • [SLOW TEST:14.680 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 5 13:27:57.959: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a 
default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on tmpfs Feb 5 13:27:58.143: INFO: Waiting up to 5m0s for pod "pod-b2d66908-26ee-4957-8f0c-45e563d9ad8c" in namespace "emptydir-7693" to be "success or failure" Feb 5 13:27:58.714: INFO: Pod "pod-b2d66908-26ee-4957-8f0c-45e563d9ad8c": Phase="Pending", Reason="", readiness=false. Elapsed: 570.759811ms Feb 5 13:28:00.721: INFO: Pod "pod-b2d66908-26ee-4957-8f0c-45e563d9ad8c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.577964709s Feb 5 13:28:02.730: INFO: Pod "pod-b2d66908-26ee-4957-8f0c-45e563d9ad8c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.586678651s Feb 5 13:28:04.742: INFO: Pod "pod-b2d66908-26ee-4957-8f0c-45e563d9ad8c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.598997404s Feb 5 13:28:06.749: INFO: Pod "pod-b2d66908-26ee-4957-8f0c-45e563d9ad8c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.606288747s STEP: Saw pod success Feb 5 13:28:06.749: INFO: Pod "pod-b2d66908-26ee-4957-8f0c-45e563d9ad8c" satisfied condition "success or failure" Feb 5 13:28:06.753: INFO: Trying to get logs from node iruya-node pod pod-b2d66908-26ee-4957-8f0c-45e563d9ad8c container test-container: STEP: delete the pod Feb 5 13:28:06.893: INFO: Waiting for pod pod-b2d66908-26ee-4957-8f0c-45e563d9ad8c to disappear Feb 5 13:28:06.900: INFO: Pod pod-b2d66908-26ee-4957-8f0c-45e563d9ad8c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 5 13:28:06.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7693" for this suite. 
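Each of the pod waits above follows the same pattern: the framework polls the pod phase on a fixed interval (roughly every 2s in this run), logging the elapsed time on each check, until the pod reaches Succeeded or Failed or the 5m0s timeout expires. A minimal sketch of that loop for illustration (plain Python, not the actual e2e framework code; the `get_phase`, `clock`, and `sleep` parameters are hypothetical stand-ins so the loop can be exercised without a cluster):

```python
import time

def wait_for_terminal_phase(get_phase, timeout=300.0, interval=2.0,
                            clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until it reports a terminal pod phase or timeout.

    Mirrors the log's pattern: each check notes the elapsed time, and the
    loop ends when the pod leaves Pending for Succeeded or Failed.
    """
    start = clock()
    while True:
        phase = get_phase()
        elapsed = clock() - start
        if phase in ("Succeeded", "Failed"):
            return phase, elapsed
        if elapsed >= timeout:
            raise TimeoutError(f"pod still {phase!r} after {elapsed:.1f}s")
        sleep(interval)
```

Driving it with a canned phase sequence reproduces the Pending, Pending, …, Succeeded progression seen in the log lines above.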
Feb 5 13:28:12.929: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 5 13:28:13.051: INFO: namespace emptydir-7693 deletion completed in 6.144891056s

• [SLOW TEST:15.092 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 5 13:28:13.051: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Feb 5 13:28:13.207: INFO: Waiting up to 5m0s for pod "pod-9412b4cc-eaa9-4bd5-9f05-bfae31b60adc" in namespace "emptydir-1561" to be "success or failure"
Feb 5 13:28:13.221: INFO: Pod "pod-9412b4cc-eaa9-4bd5-9f05-bfae31b60adc": Phase="Pending", Reason="", readiness=false. Elapsed: 13.332441ms
Feb 5 13:28:15.271: INFO: Pod "pod-9412b4cc-eaa9-4bd5-9f05-bfae31b60adc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06392937s
Feb 5 13:28:17.313: INFO: Pod "pod-9412b4cc-eaa9-4bd5-9f05-bfae31b60adc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.10595084s
Feb 5 13:28:19.321: INFO: Pod "pod-9412b4cc-eaa9-4bd5-9f05-bfae31b60adc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.113534983s
Feb 5 13:28:21.330: INFO: Pod "pod-9412b4cc-eaa9-4bd5-9f05-bfae31b60adc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.122654522s
STEP: Saw pod success
Feb 5 13:28:21.330: INFO: Pod "pod-9412b4cc-eaa9-4bd5-9f05-bfae31b60adc" satisfied condition "success or failure"
Feb 5 13:28:21.343: INFO: Trying to get logs from node iruya-node pod pod-9412b4cc-eaa9-4bd5-9f05-bfae31b60adc container test-container:
STEP: delete the pod
Feb 5 13:28:21.403: INFO: Waiting for pod pod-9412b4cc-eaa9-4bd5-9f05-bfae31b60adc to disappear
Feb 5 13:28:21.407: INFO: Pod pod-9412b4cc-eaa9-4bd5-9f05-bfae31b60adc no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 5 13:28:21.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1561" for this suite.
Feb 5 13:28:27.444: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 5 13:28:27.582: INFO: namespace emptydir-1561 deletion completed in 6.166609547s

• [SLOW TEST:14.531 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 5 13:28:27.582: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-0d03949a-b5db-49fd-abd5-cda897c3c890
STEP: Creating a pod to test consume secrets
Feb 5 13:28:27.721: INFO: Waiting up to 5m0s for pod "pod-secrets-a2bcf740-7b53-463d-a524-3a841d3e1a09" in namespace "secrets-4573" to be "success or failure"
Feb 5 13:28:27.733: INFO: Pod "pod-secrets-a2bcf740-7b53-463d-a524-3a841d3e1a09": Phase="Pending", Reason="", readiness=false. Elapsed: 11.642738ms
Feb 5 13:28:29.742: INFO: Pod "pod-secrets-a2bcf740-7b53-463d-a524-3a841d3e1a09": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02026906s
Feb 5 13:28:31.811: INFO: Pod "pod-secrets-a2bcf740-7b53-463d-a524-3a841d3e1a09": Phase="Pending", Reason="", readiness=false. Elapsed: 4.089911393s
Feb 5 13:28:33.830: INFO: Pod "pod-secrets-a2bcf740-7b53-463d-a524-3a841d3e1a09": Phase="Pending", Reason="", readiness=false. Elapsed: 6.108453712s
Feb 5 13:28:35.895: INFO: Pod "pod-secrets-a2bcf740-7b53-463d-a524-3a841d3e1a09": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.173640998s
STEP: Saw pod success
Feb 5 13:28:35.895: INFO: Pod "pod-secrets-a2bcf740-7b53-463d-a524-3a841d3e1a09" satisfied condition "success or failure"
Feb 5 13:28:35.902: INFO: Trying to get logs from node iruya-node pod pod-secrets-a2bcf740-7b53-463d-a524-3a841d3e1a09 container secret-volume-test:
STEP: delete the pod
Feb 5 13:28:35.984: INFO: Waiting for pod pod-secrets-a2bcf740-7b53-463d-a524-3a841d3e1a09 to disappear
Feb 5 13:28:36.038: INFO: Pod pod-secrets-a2bcf740-7b53-463d-a524-3a841d3e1a09 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 5 13:28:36.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4573" for this suite.
Feb 5 13:28:42.070: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 5 13:28:42.217: INFO: namespace secrets-4573 deletion completed in 6.17091127s

• [SLOW TEST:14.635 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 5 13:28:42.218: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-ae345a9b-12c7-4103-96fb-6b4b89b4c29c
STEP: Creating a pod to test consume configMaps
Feb 5 13:28:42.363: INFO: Waiting up to 5m0s for pod "pod-configmaps-e660e0fe-aa4c-48bf-a92c-3e6aaa7295f8" in namespace "configmap-3783" to be "success or failure"
Feb 5 13:28:42.384: INFO: Pod "pod-configmaps-e660e0fe-aa4c-48bf-a92c-3e6aaa7295f8": Phase="Pending", Reason="", readiness=false. Elapsed: 20.50469ms
Feb 5 13:28:44.396: INFO: Pod "pod-configmaps-e660e0fe-aa4c-48bf-a92c-3e6aaa7295f8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032679852s
Feb 5 13:28:46.414: INFO: Pod "pod-configmaps-e660e0fe-aa4c-48bf-a92c-3e6aaa7295f8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051256322s
Feb 5 13:28:48.428: INFO: Pod "pod-configmaps-e660e0fe-aa4c-48bf-a92c-3e6aaa7295f8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.065208749s
Feb 5 13:28:50.451: INFO: Pod "pod-configmaps-e660e0fe-aa4c-48bf-a92c-3e6aaa7295f8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.088063344s
STEP: Saw pod success
Feb 5 13:28:50.451: INFO: Pod "pod-configmaps-e660e0fe-aa4c-48bf-a92c-3e6aaa7295f8" satisfied condition "success or failure"
Feb 5 13:28:50.471: INFO: Trying to get logs from node iruya-node pod pod-configmaps-e660e0fe-aa4c-48bf-a92c-3e6aaa7295f8 container configmap-volume-test:
STEP: delete the pod
Feb 5 13:28:50.652: INFO: Waiting for pod pod-configmaps-e660e0fe-aa4c-48bf-a92c-3e6aaa7295f8 to disappear
Feb 5 13:28:50.702: INFO: Pod pod-configmaps-e660e0fe-aa4c-48bf-a92c-3e6aaa7295f8 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 5 13:28:50.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3783" for this suite.
Feb 5 13:28:56.744: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 5 13:28:56.942: INFO: namespace configmap-3783 deletion completed in 6.231971407s

• [SLOW TEST:14.724 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 5 13:28:56.942: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating secret secrets-4742/secret-test-2500d68e-1375-4461-9e0a-ed4bfa70d581
STEP: Creating a pod to test consume secrets
Feb 5 13:28:57.020: INFO: Waiting up to 5m0s for pod "pod-configmaps-7e3df612-ae42-4a8a-8930-e46926b85274" in namespace "secrets-4742" to be "success or failure"
Feb 5 13:28:57.086: INFO: Pod "pod-configmaps-7e3df612-ae42-4a8a-8930-e46926b85274": Phase="Pending", Reason="", readiness=false. Elapsed: 66.070409ms
Feb 5 13:28:59.097: INFO: Pod "pod-configmaps-7e3df612-ae42-4a8a-8930-e46926b85274": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076462441s
Feb 5 13:29:01.105: INFO: Pod "pod-configmaps-7e3df612-ae42-4a8a-8930-e46926b85274": Phase="Pending", Reason="", readiness=false. Elapsed: 4.084699458s
Feb 5 13:29:03.116: INFO: Pod "pod-configmaps-7e3df612-ae42-4a8a-8930-e46926b85274": Phase="Pending", Reason="", readiness=false. Elapsed: 6.095585838s
Feb 5 13:29:05.130: INFO: Pod "pod-configmaps-7e3df612-ae42-4a8a-8930-e46926b85274": Phase="Pending", Reason="", readiness=false. Elapsed: 8.109625911s
Feb 5 13:29:07.139: INFO: Pod "pod-configmaps-7e3df612-ae42-4a8a-8930-e46926b85274": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.118683723s
STEP: Saw pod success
Feb 5 13:29:07.139: INFO: Pod "pod-configmaps-7e3df612-ae42-4a8a-8930-e46926b85274" satisfied condition "success or failure"
Feb 5 13:29:07.143: INFO: Trying to get logs from node iruya-node pod pod-configmaps-7e3df612-ae42-4a8a-8930-e46926b85274 container env-test:
STEP: delete the pod
Feb 5 13:29:07.237: INFO: Waiting for pod pod-configmaps-7e3df612-ae42-4a8a-8930-e46926b85274 to disappear
Feb 5 13:29:07.260: INFO: Pod pod-configmaps-7e3df612-ae42-4a8a-8930-e46926b85274 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 5 13:29:07.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4742" for this suite.
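The `Elapsed:` values in these poll lines are Go duration strings (`…ms`, `…s`). When post-processing a run like this one, they can be extracted and normalized to seconds with a short helper (an illustrative script over the log text, not part of the test suite; only the ms and s units that actually appear in this run are handled):

```python
import re

# Matches the framework's poll lines, e.g. "Elapsed: 66.070409ms" or
# "Elapsed: 2.076462441s". Other Go duration units are out of scope here.
_ELAPSED = re.compile(r"Elapsed:\s*([0-9.]+)(ms|s)")
_SCALE = {"ms": 1e-3, "s": 1.0}

def elapsed_seconds(log_text):
    """Return every Elapsed value found in the text, converted to seconds."""
    return [float(value) * _SCALE[unit]
            for value, unit in _ELAPSED.findall(log_text)]
```

Feeding it the two poll lines above would yield roughly `0.066` and `2.076` seconds, making it easy to plot or average pod scheduling latencies across the run.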
Feb 5 13:29:13.356: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 5 13:29:13.578: INFO: namespace secrets-4742 deletion completed in 6.311172421s

• [SLOW TEST:16.636 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 5 13:29:13.579: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-dca3291d-a1ee-4e05-a27e-d0b265fbcfa3
STEP: Creating configMap with name cm-test-opt-upd-0778c56b-b2fd-4fbf-8e5f-3ba2d37eaaf3
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-dca3291d-a1ee-4e05-a27e-d0b265fbcfa3
STEP: Updating configmap cm-test-opt-upd-0778c56b-b2fd-4fbf-8e5f-3ba2d37eaaf3
STEP: Creating configMap with name cm-test-opt-create-a5bedd55-a2a4-4141-b793-82ec72230268
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 5 13:30:53.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2938" for this suite.
Feb 5 13:31:16.013: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 5 13:31:16.111: INFO: namespace configmap-2938 deletion completed in 22.124329465s

• [SLOW TEST:122.533 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 5 13:31:16.112: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-4178, will wait for the garbage collector to delete the pods
Feb 5 13:31:26.300: INFO: Deleting Job.batch foo took: 14.39112ms
Feb 5 13:31:26.601: INFO: Terminating Job.batch foo pods took: 300.810885ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 5 13:32:16.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-4178" for this suite.
Feb 5 13:32:22.646: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 5 13:32:22.790: INFO: namespace job-4178 deletion completed in 6.1694616s

• [SLOW TEST:66.678 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 5 13:32:22.790: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
STEP: reading a file in the container
Feb 5 13:32:33.530: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4356 pod-service-account-9f072ab0-272b-4115-876f-201aa16225a2 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Feb 5 13:32:36.372: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4356 pod-service-account-9f072ab0-272b-4115-876f-201aa16225a2 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Feb 5 13:32:36.939: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4356 pod-service-account-9f072ab0-272b-4115-876f-201aa16225a2 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 5 13:32:37.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-4356" for this suite.
Feb 5 13:32:43.337: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 5 13:32:43.599: INFO: namespace svcaccounts-4356 deletion completed in 6.303908994s

• [SLOW TEST:20.809 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 5 13:32:43.599: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 5 13:32:53.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1567" for this suite.
Feb 5 13:33:37.876: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 5 13:33:38.034: INFO: namespace kubelet-test-1567 deletion completed in 44.191325155s

• [SLOW TEST:54.435 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 5 13:33:38.035: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-cb8bef02-dfc0-4acb-9cb2-e702e630b271
STEP: Creating a pod to test consume configMaps
Feb 5 13:33:38.155: INFO: Waiting up to 5m0s for pod "pod-configmaps-bab0f915-a1c5-4f71-9809-a73398ce76db" in namespace "configmap-8692" to be "success or failure"
Feb 5 13:33:38.164: INFO: Pod "pod-configmaps-bab0f915-a1c5-4f71-9809-a73398ce76db": Phase="Pending", Reason="", readiness=false. Elapsed: 8.588925ms
Feb 5 13:33:40.177: INFO: Pod "pod-configmaps-bab0f915-a1c5-4f71-9809-a73398ce76db": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021595353s
Feb 5 13:33:42.197: INFO: Pod "pod-configmaps-bab0f915-a1c5-4f71-9809-a73398ce76db": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041433278s
Feb 5 13:33:44.213: INFO: Pod "pod-configmaps-bab0f915-a1c5-4f71-9809-a73398ce76db": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057431317s
Feb 5 13:33:46.232: INFO: Pod "pod-configmaps-bab0f915-a1c5-4f71-9809-a73398ce76db": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.076862119s
STEP: Saw pod success
Feb 5 13:33:46.232: INFO: Pod "pod-configmaps-bab0f915-a1c5-4f71-9809-a73398ce76db" satisfied condition "success or failure"
Feb 5 13:33:46.238: INFO: Trying to get logs from node iruya-node pod pod-configmaps-bab0f915-a1c5-4f71-9809-a73398ce76db container configmap-volume-test:
STEP: delete the pod
Feb 5 13:33:46.332: INFO: Waiting for pod pod-configmaps-bab0f915-a1c5-4f71-9809-a73398ce76db to disappear
Feb 5 13:33:46.338: INFO: Pod pod-configmaps-bab0f915-a1c5-4f71-9809-a73398ce76db no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 5 13:33:46.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8692" for this suite.
Feb 5 13:33:52.362: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 5 13:33:52.498: INFO: namespace configmap-8692 deletion completed in 6.154533685s
• [SLOW TEST:14.463 seconds]
[sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 5 13:33:52.499: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on node default medium
Feb 5 13:33:52.693: INFO: Waiting up to 5m0s for pod "pod-f62a50ed-84b3-4f79-8655-9fd65020a493" in namespace "emptydir-7339" to be "success or failure"
Feb 5 13:33:52.726: INFO: Pod "pod-f62a50ed-84b3-4f79-8655-9fd65020a493": Phase="Pending", Reason="", readiness=false. Elapsed: 32.536753ms
Feb 5 13:33:54.737: INFO: Pod "pod-f62a50ed-84b3-4f79-8655-9fd65020a493": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043797232s
Feb 5 13:33:56.743: INFO: Pod "pod-f62a50ed-84b3-4f79-8655-9fd65020a493": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050325669s
Feb 5 13:33:58.751: INFO: Pod "pod-f62a50ed-84b3-4f79-8655-9fd65020a493": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057494723s
Feb 5 13:34:00.767: INFO: Pod "pod-f62a50ed-84b3-4f79-8655-9fd65020a493": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.074220146s
STEP: Saw pod success
Feb 5 13:34:00.767: INFO: Pod "pod-f62a50ed-84b3-4f79-8655-9fd65020a493" satisfied condition "success or failure"
Feb 5 13:34:00.774: INFO: Trying to get logs from node iruya-node pod pod-f62a50ed-84b3-4f79-8655-9fd65020a493 container test-container:
STEP: delete the pod
Feb 5 13:34:00.867: INFO: Waiting for pod pod-f62a50ed-84b3-4f79-8655-9fd65020a493 to disappear
Feb 5 13:34:00.872: INFO: Pod pod-f62a50ed-84b3-4f79-8655-9fd65020a493 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 5 13:34:00.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7339" for this suite.
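The "correct mode" check runs a pod that mounts an emptyDir on the default (node-disk) medium and prints the mount's permissions for the framework to verify. A hedged sketch of an equivalent manifest (the image and command are illustrative; the real test uses the e2e mounttest image, and the expected default mode is 0777):

```yaml
# Illustrative sketch only -- image and command are hypothetical stand-ins.
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-example
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # Print the mount point's mode so it can be checked from the container log.
    command: ["/bin/sh", "-c", "stat -c '%a' /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}        # no medium field means the default (disk-backed) medium
```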
Feb 5 13:34:06.904: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 5 13:34:07.027: INFO: namespace emptydir-7339 deletion completed in 6.146673933s
• [SLOW TEST:14.527 seconds]
[sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-network] Services should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 5 13:34:07.027: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service multi-endpoint-test in namespace services-512
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-512 to expose endpoints map[]
Feb 5 13:34:07.189: INFO: Get endpoints failed (6.02639ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Feb 5 13:34:08.196: INFO: successfully validated that service multi-endpoint-test in namespace services-512 exposes endpoints map[] (1.013125807s elapsed)
STEP: Creating pod pod1 in namespace services-512
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-512 to expose endpoints map[pod1:[100]]
Feb 5 13:34:12.321: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.113173467s elapsed, will retry)
Feb 5 13:34:15.412: INFO: successfully validated that service multi-endpoint-test in namespace services-512 exposes endpoints map[pod1:[100]] (7.203569519s elapsed)
STEP: Creating pod pod2 in namespace services-512
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-512 to expose endpoints map[pod1:[100] pod2:[101]]
Feb 5 13:34:19.873: INFO: Unexpected endpoints: found map[3c6ba95d-34a9-4e1b-b6a0-4ddf0d295507:[100]], expected map[pod1:[100] pod2:[101]] (4.432731986s elapsed, will retry)
Feb 5 13:34:21.991: INFO: successfully validated that service multi-endpoint-test in namespace services-512 exposes endpoints map[pod1:[100] pod2:[101]] (6.549871861s elapsed)
STEP: Deleting pod pod1 in namespace services-512
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-512 to expose endpoints map[pod2:[101]]
Feb 5 13:34:23.097: INFO: successfully validated that service multi-endpoint-test in namespace services-512 exposes endpoints map[pod2:[101]] (1.101660164s elapsed)
STEP: Deleting pod pod2 in namespace services-512
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-512 to expose endpoints map[]
Feb 5 13:34:24.254: INFO: successfully validated that service multi-endpoint-test in namespace services-512 exposes endpoints map[] (1.148201373s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 5 13:34:25.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-512" for this suite.
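The endpoints maps in the log (pod1 serving container port 100, pod2 serving 101) come from a single Service with two ports, each selecting a different container port. A hedged sketch of such a multiport Service (the port names, service ports, and selector label are illustrative; only the target ports 100 and 101 appear in the log):

```yaml
# Illustrative sketch only -- names and service ports are hypothetical.
apiVersion: v1
kind: Service
metadata:
  name: multi-endpoint-test
spec:
  selector:
    app: multi-endpoint-test   # must match the labels on pod1 and pod2
  ports:
  - name: portname1
    port: 80
    targetPort: 100            # container port served by pod1 in the log
  - name: portname2
    port: 81
    targetPort: 101            # container port served by pod2 in the log
```

As pods matching the selector become Ready, the endpoints controller adds their ready addresses under the matching port, which is exactly what the "exposes endpoints map[...]" polling in the log is validating.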
Feb 5 13:34:47.488: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 5 13:34:47.698: INFO: namespace services-512 deletion completed in 22.307218232s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92
• [SLOW TEST:40.671 seconds]
[sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 5 13:34:47.698: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Feb 5 13:34:47.842: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-8078,SelfLink:/api/v1/namespaces/watch-8078/configmaps/e2e-watch-test-watch-closed,UID:480b7548-eadc-451e-b761-5982bfc463cc,ResourceVersion:23194806,Generation:0,CreationTimestamp:2020-02-05 13:34:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 5 13:34:47.842: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-8078,SelfLink:/api/v1/namespaces/watch-8078/configmaps/e2e-watch-test-watch-closed,UID:480b7548-eadc-451e-b761-5982bfc463cc,ResourceVersion:23194807,Generation:0,CreationTimestamp:2020-02-05 13:34:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Feb 5 13:34:47.867: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-8078,SelfLink:/api/v1/namespaces/watch-8078/configmaps/e2e-watch-test-watch-closed,UID:480b7548-eadc-451e-b761-5982bfc463cc,ResourceVersion:23194808,Generation:0,CreationTimestamp:2020-02-05 13:34:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 5 13:34:47.867: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-8078,SelfLink:/api/v1/namespaces/watch-8078/configmaps/e2e-watch-test-watch-closed,UID:480b7548-eadc-451e-b761-5982bfc463cc,ResourceVersion:23194809,Generation:0,CreationTimestamp:2020-02-05 13:34:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 5 13:34:47.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-8078" for this suite.
Feb 5 13:34:53.910: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 5 13:34:54.036: INFO: namespace watch-8078 deletion completed in 6.163517149s
• [SLOW TEST:6.338 seconds]
[sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 5 13:34:54.036: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's command
Feb 5 13:34:54.136: INFO: Waiting up to 5m0s for pod "var-expansion-78804af1-cd59-4bc1-b0bb-102cbbe9235f" in namespace "var-expansion-4280" to be "success or failure"
Feb 5 13:34:54.172: INFO: Pod "var-expansion-78804af1-cd59-4bc1-b0bb-102cbbe9235f": Phase="Pending", Reason="", readiness=false. Elapsed: 35.795627ms
Feb 5 13:34:56.187: INFO: Pod "var-expansion-78804af1-cd59-4bc1-b0bb-102cbbe9235f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051174427s
Feb 5 13:34:58.866: INFO: Pod "var-expansion-78804af1-cd59-4bc1-b0bb-102cbbe9235f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.730056847s
Feb 5 13:35:00.874: INFO: Pod "var-expansion-78804af1-cd59-4bc1-b0bb-102cbbe9235f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.738193627s
Feb 5 13:35:02.881: INFO: Pod "var-expansion-78804af1-cd59-4bc1-b0bb-102cbbe9235f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.745378062s
STEP: Saw pod success
Feb 5 13:35:02.881: INFO: Pod "var-expansion-78804af1-cd59-4bc1-b0bb-102cbbe9235f" satisfied condition "success or failure"
Feb 5 13:35:02.884: INFO: Trying to get logs from node iruya-node pod var-expansion-78804af1-cd59-4bc1-b0bb-102cbbe9235f container dapi-container:
STEP: delete the pod
Feb 5 13:35:02.979: INFO: Waiting for pod var-expansion-78804af1-cd59-4bc1-b0bb-102cbbe9235f to disappear
Feb 5 13:35:02.987: INFO: Pod var-expansion-78804af1-cd59-4bc1-b0bb-102cbbe9235f no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 5 13:35:02.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-4280" for this suite.
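"Substitution in container's command" exercises the kubelet's `$(VAR_NAME)` expansion: variables declared in the container's `env` can be referenced in `command` and `args` without involving a shell. A hedged sketch of such a pod (the variable name and message are illustrative, not taken from the log):

```yaml
# Illustrative sketch only -- env name and value are hypothetical.
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-example
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    env:
    - name: MESSAGE
      value: "test-value"
    # $(MESSAGE) is expanded by the kubelet before the process starts,
    # so no shell is needed for the substitution itself.
    command: ["echo", "$(MESSAGE)"]
```

The test then reads the container log and asserts that the expanded value, not the literal `$(MESSAGE)`, was printed.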
Feb 5 13:35:09.015: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 5 13:35:09.140: INFO: namespace var-expansion-4280 deletion completed in 6.148281695s
• [SLOW TEST:15.104 seconds]
[k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 5 13:35:09.140: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-945d72e3-969d-4c51-a426-5ac065106036 in namespace container-probe-3005
Feb 5 13:35:17.414: INFO: Started pod busybox-945d72e3-969d-4c51-a426-5ac065106036 in namespace container-probe-3005
STEP: checking the pod's current state and verifying that restartCount is present
Feb 5 13:35:17.419: INFO: Initial restart count of pod busybox-945d72e3-969d-4c51-a426-5ac065106036 is 0
Feb 5 13:36:09.672: INFO: Restart count of pod container-probe-3005/busybox-945d72e3-969d-4c51-a426-5ac065106036 is now 1 (52.253008599s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 5 13:36:09.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3005" for this suite.
Feb 5 13:36:15.764: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 5 13:36:15.931: INFO: namespace container-probe-3005 deletion completed in 6.194397815s
• [SLOW TEST:66.791 seconds]
[k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 5 13:36:15.932: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb 5 13:36:16.022: INFO: Waiting up to 5m0s for pod "downward-api-f69eb188-8474-4214-9d27-f644e0de1011" in namespace "downward-api-7342" to be "success or failure"
Feb 5 13:36:16.065: INFO: Pod "downward-api-f69eb188-8474-4214-9d27-f644e0de1011": Phase="Pending", Reason="", readiness=false. Elapsed: 43.057093ms
Feb 5 13:36:18.074: INFO: Pod "downward-api-f69eb188-8474-4214-9d27-f644e0de1011": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051363321s
Feb 5 13:36:20.088: INFO: Pod "downward-api-f69eb188-8474-4214-9d27-f644e0de1011": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065584741s
Feb 5 13:36:22.100: INFO: Pod "downward-api-f69eb188-8474-4214-9d27-f644e0de1011": Phase="Pending", Reason="", readiness=false. Elapsed: 6.077477476s
Feb 5 13:36:24.106: INFO: Pod "downward-api-f69eb188-8474-4214-9d27-f644e0de1011": Phase="Pending", Reason="", readiness=false. Elapsed: 8.084100295s
Feb 5 13:36:26.113: INFO: Pod "downward-api-f69eb188-8474-4214-9d27-f644e0de1011": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.090509628s
STEP: Saw pod success
Feb 5 13:36:26.113: INFO: Pod "downward-api-f69eb188-8474-4214-9d27-f644e0de1011" satisfied condition "success or failure"
Feb 5 13:36:26.116: INFO: Trying to get logs from node iruya-node pod downward-api-f69eb188-8474-4214-9d27-f644e0de1011 container dapi-container:
STEP: delete the pod
Feb 5 13:36:26.213: INFO: Waiting for pod downward-api-f69eb188-8474-4214-9d27-f644e0de1011 to disappear
Feb 5 13:36:26.226: INFO: Pod downward-api-f69eb188-8474-4214-9d27-f644e0de1011 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 5 13:36:26.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7342" for this suite.
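The Downward API test injects the node's IP into the container through an `env` entry with a `fieldRef` to `status.hostIP`, then checks the container log for it. A hedged sketch (the env var name and command are illustrative; `fieldPath: status.hostIP` is the real Downward API field):

```yaml
# Illustrative sketch only -- env var name and command are hypothetical.
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-example
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["/bin/sh", "-c", "echo HOST_IP=$HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP   # resolved by the kubelet to the node's IP
```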
Feb 5 13:36:32.253: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 5 13:36:32.354: INFO: namespace downward-api-7342 deletion completed in 6.121304442s
• [SLOW TEST:16.422 seconds]
[sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 5 13:36:32.355: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-9528
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Feb 5 13:36:32.491: INFO: Found 0 stateful pods, waiting for 3
Feb 5 13:36:42.509: INFO: Found 2 stateful pods, waiting for 3
Feb 5 13:36:52.508: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 5 13:36:52.508: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 5 13:36:52.508: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 5 13:37:02.507: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 5 13:37:02.507: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 5 13:37:02.507: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Feb 5 13:37:02.521: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9528 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 5 13:37:03.071: INFO: stderr: "I0205 13:37:02.758948 1713 log.go:172] (0xc00092e0b0) (0xc0008d2640) Create stream\nI0205 13:37:02.759128 1713 log.go:172] (0xc00092e0b0) (0xc0008d2640) Stream added, broadcasting: 1\nI0205 13:37:02.763891 1713 log.go:172] (0xc00092e0b0) Reply frame received for 1\nI0205 13:37:02.763994 1713 log.go:172] (0xc00092e0b0) (0xc0005b4280) Create stream\nI0205 13:37:02.764019 1713 log.go:172] (0xc00092e0b0) (0xc0005b4280) Stream added, broadcasting: 3\nI0205 13:37:02.765805 1713 log.go:172] (0xc00092e0b0) Reply frame received for 3\nI0205 13:37:02.765826 1713 log.go:172] (0xc00092e0b0) (0xc0005b4320) Create stream\nI0205 13:37:02.765834 1713 log.go:172] (0xc00092e0b0) (0xc0005b4320) Stream added, broadcasting: 5\nI0205 13:37:02.766886 1713 log.go:172] (0xc00092e0b0) Reply frame received for 5\nI0205 13:37:02.892049 1713 log.go:172] (0xc00092e0b0) Data frame received for 5\nI0205 13:37:02.892102 1713 log.go:172] (0xc0005b4320) (5) Data frame handling\nI0205 13:37:02.892116 1713 log.go:172] (0xc0005b4320) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0205 13:37:02.972459 1713 log.go:172] (0xc00092e0b0) Data frame received for 3\nI0205 13:37:02.972492 1713 log.go:172] (0xc0005b4280) (3) Data frame handling\nI0205 13:37:02.972507 1713 log.go:172] (0xc0005b4280) (3) Data frame sent\nI0205 13:37:03.063593 1713 log.go:172] (0xc00092e0b0) (0xc0005b4280) Stream removed, broadcasting: 3\nI0205 13:37:03.063820 1713 log.go:172] (0xc00092e0b0) Data frame received for 1\nI0205 13:37:03.063902 1713 log.go:172] (0xc00092e0b0) (0xc0005b4320) Stream removed, broadcasting: 5\nI0205 13:37:03.063987 1713 log.go:172] (0xc0008d2640) (1) Data frame handling\nI0205 13:37:03.064040 1713 log.go:172] (0xc0008d2640) (1) Data frame sent\nI0205 13:37:03.064059 1713 log.go:172] (0xc00092e0b0) (0xc0008d2640) Stream removed, broadcasting: 1\nI0205 13:37:03.064082 1713 log.go:172] (0xc00092e0b0) Go away received\nI0205 13:37:03.064751 1713 log.go:172] (0xc00092e0b0) (0xc0008d2640) Stream removed, broadcasting: 1\nI0205 13:37:03.064814 1713 log.go:172] (0xc00092e0b0) (0xc0005b4280) Stream removed, broadcasting: 3\nI0205 13:37:03.064852 1713 log.go:172] (0xc00092e0b0) (0xc0005b4320) Stream removed, broadcasting: 5\n"
Feb 5 13:37:03.071: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 5 13:37:03.071: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Feb 5 13:37:13.132: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Feb 5 13:37:23.167: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9528 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 5 13:37:23.704: INFO: stderr: "I0205 13:37:23.418066 1731 log.go:172] (0xc0007a6420) (0xc0007a4640) Create stream\nI0205 13:37:23.418175 1731 log.go:172] (0xc0007a6420) (0xc0007a4640) Stream added, broadcasting: 1\nI0205 13:37:23.423063 1731 log.go:172] (0xc0007a6420) Reply frame received for 1\nI0205 13:37:23.423204 1731 log.go:172] (0xc0007a6420) (0xc000726000) Create stream\nI0205 13:37:23.423253 1731 log.go:172] (0xc0007a6420) (0xc000726000) Stream added, broadcasting: 3\nI0205 13:37:23.425023 1731 log.go:172] (0xc0007a6420) Reply frame received for 3\nI0205 13:37:23.425046 1731 log.go:172] (0xc0007a6420) (0xc0007a46e0) Create stream\nI0205 13:37:23.425059 1731 log.go:172] (0xc0007a6420) (0xc0007a46e0) Stream added, broadcasting: 5\nI0205 13:37:23.426635 1731 log.go:172] (0xc0007a6420) Reply frame received for 5\nI0205 13:37:23.514450 1731 log.go:172] (0xc0007a6420) Data frame received for 5\nI0205 13:37:23.514602 1731 log.go:172] (0xc0007a46e0) (5) Data frame handling\nI0205 13:37:23.514660 1731 log.go:172] (0xc0007a46e0) (5) Data frame sent\nI0205 13:37:23.514676 1731 log.go:172] (0xc0007a6420) Data frame received for 5\nI0205 13:37:23.514692 1731 log.go:172] (0xc0007a46e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0205 13:37:23.514778 1731 log.go:172] (0xc0007a46e0) (5) Data frame sent\nI0205 13:37:23.516120 1731 log.go:172] (0xc0007a6420) Data frame received for 3\nI0205 13:37:23.516233 1731 log.go:172] (0xc000726000) (3) Data frame handling\nI0205 13:37:23.516306 1731 log.go:172] (0xc000726000) (3) Data frame sent\nI0205 13:37:23.683703 1731 log.go:172] (0xc0007a6420) Data frame received for 1\nI0205 13:37:23.684275 1731 log.go:172] (0xc0007a6420) (0xc0007a46e0) Stream removed, broadcasting: 5\nI0205 13:37:23.684422 1731 log.go:172] (0xc0007a4640) (1) Data frame handling\nI0205 13:37:23.684516 1731 log.go:172] (0xc0007a4640) (1) Data frame sent\nI0205 13:37:23.684593 1731 log.go:172] (0xc0007a6420) (0xc000726000) Stream removed, broadcasting: 3\nI0205 13:37:23.684970 1731 log.go:172] (0xc0007a6420) (0xc0007a4640) Stream removed, broadcasting: 1\nI0205 13:37:23.685020 1731 log.go:172] (0xc0007a6420) Go away received\nI0205 13:37:23.686205 1731 log.go:172] (0xc0007a6420) (0xc0007a4640) Stream removed, broadcasting: 1\nI0205 13:37:23.686245 1731 log.go:172] (0xc0007a6420) (0xc000726000) Stream removed, broadcasting: 3\nI0205 13:37:23.686357 1731 log.go:172] (0xc0007a6420) (0xc0007a46e0) Stream removed, broadcasting: 5\n"
Feb 5 13:37:23.704: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 5 13:37:23.704: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Feb 5 13:37:23.772: INFO: Waiting for StatefulSet statefulset-9528/ss2 to complete update
Feb 5 13:37:23.772: INFO: Waiting for Pod statefulset-9528/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 5 13:37:23.772: INFO: Waiting for Pod statefulset-9528/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 5 13:37:23.772: INFO: Waiting for Pod statefulset-9528/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 5 13:37:33.801: INFO: Waiting for StatefulSet statefulset-9528/ss2 to complete update
Feb 5 13:37:33.802: INFO: Waiting for Pod statefulset-9528/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 5 13:37:33.802: INFO: Waiting for Pod statefulset-9528/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 5 13:37:43.794: INFO: Waiting for StatefulSet statefulset-9528/ss2 to complete update
Feb 5 13:37:43.794: INFO: Waiting for Pod statefulset-9528/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 5 13:37:43.794: INFO: Waiting for Pod statefulset-9528/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 5 13:37:53.790: INFO: Waiting for StatefulSet statefulset-9528/ss2 to complete update
Feb 5 13:37:53.790: INFO: Waiting for Pod statefulset-9528/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 5 13:38:03.803: INFO: Waiting for StatefulSet statefulset-9528/ss2 to complete update
Feb 5 13:38:03.803: INFO: Waiting for Pod statefulset-9528/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 5 13:38:13.801: INFO: Waiting for StatefulSet statefulset-9528/ss2 to complete update
STEP: Rolling back to a previous revision
Feb 5 13:38:23.796: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9528 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 5 13:38:24.332: INFO: stderr: "I0205 13:38:24.003534 1753 log.go:172] (0xc0007a8160) (0xc0006d2140) Create stream\nI0205 13:38:24.003748 1753 log.go:172] (0xc0007a8160) (0xc0006d2140) Stream added, broadcasting: 1\nI0205 13:38:24.008833 1753 log.go:172] (0xc0007a8160) Reply frame received for 1\nI0205 13:38:24.008875 1753 log.go:172] (0xc0007a8160) (0xc00059a140) Create stream\nI0205 13:38:24.008881 1753 log.go:172] (0xc0007a8160) (0xc00059a140) Stream added, broadcasting: 3\nI0205 13:38:24.010081 1753 log.go:172] (0xc0007a8160) Reply frame received for 3\nI0205 13:38:24.010104 1753 log.go:172] (0xc0007a8160) (0xc0006d21e0) Create stream\nI0205 13:38:24.010112 1753 log.go:172] (0xc0007a8160) (0xc0006d21e0) Stream added, broadcasting: 5\nI0205 13:38:24.011155 1753 log.go:172] (0xc0007a8160) Reply frame received for 5\nI0205 13:38:24.178767 1753 log.go:172] (0xc0007a8160) Data frame received for 5\nI0205 13:38:24.178819 1753 log.go:172] (0xc0006d21e0) (5) Data frame handling\nI0205 13:38:24.178829 1753 log.go:172] (0xc0006d21e0) (5) Data frame sent\nI0205 13:38:24.178836 1753 log.go:172] (0xc0007a8160) Data frame received for 5\nI0205 13:38:24.178842 1753 log.go:172] (0xc0006d21e0) (5) Data frame handling\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0205 13:38:24.178877 1753 log.go:172] (0xc0006d21e0) (5) Data frame sent\nI0205 13:38:24.219273 1753 log.go:172] (0xc0007a8160) Data frame received for 3\nI0205 13:38:24.219503 1753 log.go:172] (0xc00059a140) (3) Data frame handling\nI0205 13:38:24.219543 1753 log.go:172] (0xc00059a140) (3) Data frame sent\nI0205 13:38:24.326183 1753 log.go:172] (0xc0007a8160) (0xc00059a140) Stream removed, broadcasting: 3\nI0205 13:38:24.326365 1753 log.go:172] (0xc0007a8160) Data frame received for 1\nI0205 13:38:24.326388 1753 log.go:172] (0xc0006d2140) (1) Data frame handling\nI0205 13:38:24.326392 1753 log.go:172] (0xc0006d2140) (1) Data frame sent\nI0205 13:38:24.326398 1753 log.go:172] (0xc0007a8160) (0xc0006d2140) Stream removed, broadcasting: 1\nI0205 13:38:24.326517 1753 log.go:172] (0xc0007a8160) (0xc0006d21e0) Stream removed, broadcasting: 5\nI0205 13:38:24.326580 1753 log.go:172] (0xc0007a8160) Go away received\nI0205 13:38:24.326662 1753 log.go:172] (0xc0007a8160) (0xc0006d2140) Stream removed, broadcasting: 1\nI0205 13:38:24.326677 1753 log.go:172] (0xc0007a8160) (0xc00059a140) Stream removed, broadcasting: 3\nI0205 13:38:24.326687 1753 log.go:172] (0xc0007a8160) (0xc0006d21e0) Stream removed, broadcasting: 5\n"
Feb 5 13:38:24.333: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 5 13:38:24.333: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Feb 5 13:38:34.395: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Feb 5 13:38:44.456: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9528 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 5 13:38:44.897: INFO: stderr: "I0205 13:38:44.625764 1770 log.go:172] (0xc00085a210) (0xc000594640) Create stream\nI0205 13:38:44.626134 1770 log.go:172] (0xc00085a210) (0xc000594640) Stream added, broadcasting: 1\nI0205 13:38:44.651731 1770 log.go:172] (0xc00085a210) Reply frame received for 1\nI0205 13:38:44.651900 1770 log.go:172] (0xc00085a210) (0xc0004b2000) Create stream\nI0205 13:38:44.651923 1770 log.go:172] (0xc00085a210) (0xc0004b2000) Stream added, broadcasting: 3\nI0205 13:38:44.654346 1770 log.go:172] (0xc00085a210) Reply frame received for 3\nI0205 13:38:44.654372 1770 log.go:172] (0xc00085a210) (0xc0000d8320) Create stream\nI0205 13:38:44.654381 1770 log.go:172] (0xc00085a210) (0xc0000d8320) Stream added, broadcasting: 5\nI0205 13:38:44.655417 1770 log.go:172] (0xc00085a210) Reply frame received for 5\nI0205 13:38:44.806209 1770 log.go:172] (0xc00085a210) Data frame received for 3\nI0205 13:38:44.806493 1770 log.go:172] (0xc0004b2000) (3) Data frame handling\nI0205 13:38:44.806532 1770 log.go:172] (0xc0004b2000) (3) Data frame sent\nI0205 13:38:44.806836 1770 log.go:172] (0xc00085a210) Data frame received for 5\nI0205 13:38:44.806891 1770 log.go:172] (0xc0000d8320) (5) Data frame handling\nI0205 13:38:44.806926 1770 log.go:172] (0xc0000d8320) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0205 13:38:44.890984 1770 log.go:172] (0xc00085a210) Data frame received for 1\nI0205 13:38:44.891118 1770 log.go:172] (0xc00085a210) (0xc0004b2000) Stream removed, broadcasting: 3\nI0205 13:38:44.891143 1770 log.go:172] (0xc000594640) (1) Data frame handling\nI0205 13:38:44.891151 1770 log.go:172] (0xc000594640) (1) Data frame sent\nI0205 13:38:44.891158 1770 log.go:172] (0xc00085a210) (0xc000594640) Stream removed, broadcasting: 1\nI0205 13:38:44.891165 1770 log.go:172] (0xc00085a210) (0xc0000d8320) Stream removed, broadcasting: 5\nI0205 13:38:44.891179 1770 log.go:172] (0xc00085a210) Go away received\nI0205 13:38:44.891429 1770 log.go:172] (0xc00085a210) (0xc000594640) Stream removed, broadcasting: 1\nI0205 13:38:44.891442 1770 log.go:172] (0xc00085a210) (0xc0004b2000) Stream removed, broadcasting: 3\nI0205 13:38:44.891449 1770 log.go:172] (0xc00085a210) (0xc0000d8320) Stream removed, broadcasting: 5\n"
Feb 5 13:38:44.897: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 5 13:38:44.897: INFO: stdout of mv -v /tmp/index.html
/usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Feb 5 13:38:54.935: INFO: Waiting for StatefulSet statefulset-9528/ss2 to complete update
Feb 5 13:38:54.935: INFO: Waiting for Pod statefulset-9528/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 5 13:38:54.935: INFO: Waiting for Pod statefulset-9528/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 5 13:38:54.935: INFO: Waiting for Pod statefulset-9528/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 5 13:39:04.975: INFO: Waiting for StatefulSet statefulset-9528/ss2 to complete update
Feb 5 13:39:04.975: INFO: Waiting for Pod statefulset-9528/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 5 13:39:04.975: INFO: Waiting for Pod statefulset-9528/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 5 13:39:14.954: INFO: Waiting for StatefulSet statefulset-9528/ss2 to complete update
Feb 5 13:39:14.955: INFO: Waiting for Pod statefulset-9528/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 5 13:39:24.952: INFO: Waiting for StatefulSet statefulset-9528/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Feb 5 13:39:34.956: INFO: Deleting all statefulset in ns statefulset-9528
Feb 5 13:39:34.964: INFO: Scaling statefulset ss2 to 0
Feb 5 13:40:04.993: INFO: Waiting for statefulset status.replicas updated to 0
Feb 5 13:40:04.996: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 5 13:40:05.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9528" for this suite.
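The waits above converge highest ordinal first: ss2-2 is done by 13:39:04, ss2-1 by 13:39:14, ss2-0 last. A minimal Python sketch of that reverse-ordinal ordering (illustrative only, not the StatefulSet controller's Go code; the `partition` parameter mirrors the RollingUpdate strategy field):

```python
# Sketch (assumption: models, not reproduces, the controller): a
# StatefulSet RollingUpdate replaces pods from the highest ordinal down,
# touching only ordinals >= partition.
def rolling_update_order(name: str, replicas: int, partition: int = 0):
    """Return pod names in the order the controller updates them."""
    return [f"{name}-{i}" for i in range(replicas - 1, partition - 1, -1)]

# With 3 replicas, ss2-2 is updated first and ss2-0 last,
# matching the convergence order in the log above.
print(rolling_update_order("ss2", 3))
```

Setting `partition=2` would confine the update to ss2-2 alone, which is how partitioned canary rollouts work.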
Feb 5 13:40:13.110: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 5 13:40:13.247: INFO: namespace statefulset-9528 deletion completed in 8.209736996s
• [SLOW TEST:220.893 seconds]
[sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform rolling updates and roll backs of template modifications [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 5 13:40:13.248: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-2274
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 5 13:40:13.330: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb 5 13:40:51.539: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v
'^\s*$'] Namespace:pod-network-test-2274 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 5 13:40:51.539: INFO: >>> kubeConfig: /root/.kube/config I0205 13:40:51.621008 8 log.go:172] (0xc000e96370) (0xc001fb4820) Create stream I0205 13:40:51.621051 8 log.go:172] (0xc000e96370) (0xc001fb4820) Stream added, broadcasting: 1 I0205 13:40:51.629053 8 log.go:172] (0xc000e96370) Reply frame received for 1 I0205 13:40:51.629099 8 log.go:172] (0xc000e96370) (0xc0000ff2c0) Create stream I0205 13:40:51.629113 8 log.go:172] (0xc000e96370) (0xc0000ff2c0) Stream added, broadcasting: 3 I0205 13:40:51.631962 8 log.go:172] (0xc000e96370) Reply frame received for 3 I0205 13:40:51.632015 8 log.go:172] (0xc000e96370) (0xc001fb48c0) Create stream I0205 13:40:51.632024 8 log.go:172] (0xc000e96370) (0xc001fb48c0) Stream added, broadcasting: 5 I0205 13:40:51.634020 8 log.go:172] (0xc000e96370) Reply frame received for 5 I0205 13:40:51.765978 8 log.go:172] (0xc000e96370) Data frame received for 3 I0205 13:40:51.766086 8 log.go:172] (0xc0000ff2c0) (3) Data frame handling I0205 13:40:51.766101 8 log.go:172] (0xc0000ff2c0) (3) Data frame sent I0205 13:40:51.911313 8 log.go:172] (0xc000e96370) (0xc0000ff2c0) Stream removed, broadcasting: 3 I0205 13:40:51.911604 8 log.go:172] (0xc000e96370) Data frame received for 1 I0205 13:40:51.911626 8 log.go:172] (0xc001fb4820) (1) Data frame handling I0205 13:40:51.911665 8 log.go:172] (0xc001fb4820) (1) Data frame sent I0205 13:40:51.911920 8 log.go:172] (0xc000e96370) (0xc001fb4820) Stream removed, broadcasting: 1 I0205 13:40:51.912164 8 log.go:172] (0xc000e96370) (0xc001fb48c0) Stream removed, broadcasting: 5 I0205 13:40:51.912271 8 log.go:172] (0xc000e96370) (0xc001fb4820) Stream removed, broadcasting: 1 I0205 13:40:51.912295 8 log.go:172] (0xc000e96370) (0xc0000ff2c0) Stream removed, broadcasting: 3 I0205 13:40:51.912304 8 log.go:172] (0xc000e96370) (0xc001fb48c0) Stream 
removed, broadcasting: 5 I0205 13:40:51.912951 8 log.go:172] (0xc000e96370) Go away received Feb 5 13:40:51.913: INFO: Found all expected endpoints: [netserver-0] Feb 5 13:40:51.924: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.1:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2274 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 5 13:40:51.924: INFO: >>> kubeConfig: /root/.kube/config I0205 13:40:51.982955 8 log.go:172] (0xc000f98630) (0xc001ae17c0) Create stream I0205 13:40:51.982999 8 log.go:172] (0xc000f98630) (0xc001ae17c0) Stream added, broadcasting: 1 I0205 13:40:51.989748 8 log.go:172] (0xc000f98630) Reply frame received for 1 I0205 13:40:51.989776 8 log.go:172] (0xc000f98630) (0xc001ae1860) Create stream I0205 13:40:51.989781 8 log.go:172] (0xc000f98630) (0xc001ae1860) Stream added, broadcasting: 3 I0205 13:40:51.991727 8 log.go:172] (0xc000f98630) Reply frame received for 3 I0205 13:40:51.991766 8 log.go:172] (0xc000f98630) (0xc000c20640) Create stream I0205 13:40:51.991779 8 log.go:172] (0xc000f98630) (0xc000c20640) Stream added, broadcasting: 5 I0205 13:40:51.994585 8 log.go:172] (0xc000f98630) Reply frame received for 5 I0205 13:40:52.109273 8 log.go:172] (0xc000f98630) Data frame received for 3 I0205 13:40:52.109314 8 log.go:172] (0xc001ae1860) (3) Data frame handling I0205 13:40:52.109326 8 log.go:172] (0xc001ae1860) (3) Data frame sent I0205 13:40:52.286968 8 log.go:172] (0xc000f98630) Data frame received for 1 I0205 13:40:52.287174 8 log.go:172] (0xc000f98630) (0xc001ae1860) Stream removed, broadcasting: 3 I0205 13:40:52.287541 8 log.go:172] (0xc001ae17c0) (1) Data frame handling I0205 13:40:52.287637 8 log.go:172] (0xc001ae17c0) (1) Data frame sent I0205 13:40:52.287697 8 log.go:172] (0xc000f98630) (0xc000c20640) Stream removed, broadcasting: 5 I0205 13:40:52.287741 8 log.go:172] (0xc000f98630) 
(0xc001ae17c0) Stream removed, broadcasting: 1 I0205 13:40:52.287802 8 log.go:172] (0xc000f98630) Go away received I0205 13:40:52.288103 8 log.go:172] (0xc000f98630) (0xc001ae17c0) Stream removed, broadcasting: 1 I0205 13:40:52.288178 8 log.go:172] (0xc000f98630) (0xc001ae1860) Stream removed, broadcasting: 3 I0205 13:40:52.288192 8 log.go:172] (0xc000f98630) (0xc000c20640) Stream removed, broadcasting: 5 Feb 5 13:40:52.288: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 5 13:40:52.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-2274" for this suite. Feb 5 13:41:16.336: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 5 13:41:16.474: INFO: namespace pod-network-test-2274 deletion completed in 24.175236388s • [SLOW TEST:63.226 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 5 13:41:16.475: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a 
namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override command Feb 5 13:41:16.581: INFO: Waiting up to 5m0s for pod "client-containers-f9fd54fd-3044-4ee3-9289-d223aaad8eff" in namespace "containers-6175" to be "success or failure" Feb 5 13:41:16.606: INFO: Pod "client-containers-f9fd54fd-3044-4ee3-9289-d223aaad8eff": Phase="Pending", Reason="", readiness=false. Elapsed: 25.431036ms Feb 5 13:41:18.622: INFO: Pod "client-containers-f9fd54fd-3044-4ee3-9289-d223aaad8eff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0410299s Feb 5 13:41:20.633: INFO: Pod "client-containers-f9fd54fd-3044-4ee3-9289-d223aaad8eff": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052467808s Feb 5 13:41:22.650: INFO: Pod "client-containers-f9fd54fd-3044-4ee3-9289-d223aaad8eff": Phase="Pending", Reason="", readiness=false. Elapsed: 6.069176594s Feb 5 13:41:24.658: INFO: Pod "client-containers-f9fd54fd-3044-4ee3-9289-d223aaad8eff": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.076576015s STEP: Saw pod success Feb 5 13:41:24.658: INFO: Pod "client-containers-f9fd54fd-3044-4ee3-9289-d223aaad8eff" satisfied condition "success or failure" Feb 5 13:41:24.661: INFO: Trying to get logs from node iruya-node pod client-containers-f9fd54fd-3044-4ee3-9289-d223aaad8eff container test-container: STEP: delete the pod Feb 5 13:41:24.732: INFO: Waiting for pod client-containers-f9fd54fd-3044-4ee3-9289-d223aaad8eff to disappear Feb 5 13:41:24.746: INFO: Pod client-containers-f9fd54fd-3044-4ee3-9289-d223aaad8eff no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 5 13:41:24.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-6175" for this suite. Feb 5 13:41:30.773: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 5 13:41:30.919: INFO: namespace containers-6175 deletion completed in 6.16632763s • [SLOW TEST:14.444 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 5 13:41:30.920: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a 
namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0205 13:41:46.209851 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Feb 5 13:41:46.209: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 5 13:41:46.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2763" for this suite. 
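The step above gives half of the pods created by simpletest-rc-to-be-deleted a second owner, simpletest-rc-to-stay, and the test then verifies those dual-owned pods survive deletion of the first RC. A hedged sketch of the ownership rule being exercised (illustrative Python, not the real garbage collector):

```python
# Sketch (assumption: simplified model of ownerReferences semantics):
# the garbage collector only deletes a dependent once it has no
# remaining live owner, so a pod owned by both
# simpletest-rc-to-be-deleted and simpletest-rc-to-stay is kept.
def is_garbage(owner_refs, live_owners):
    """True when every owner listed on the dependent is gone."""
    return all(ref not in live_owners for ref in owner_refs)

live = {"simpletest-rc-to-stay"}                       # RC that survives
print(is_garbage(["simpletest-rc-to-be-deleted"], live))            # collected
print(is_garbage(["simpletest-rc-to-be-deleted",
                  "simpletest-rc-to-stay"], live))                  # kept
```

This is why the test name stresses "both valid owner and owner that's waiting for dependents": the still-valid owner blocks collection even while the other owner is in foreground deletion.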
Feb 5 13:42:02.189: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 5 13:42:02.289: INFO: namespace gc-2763 deletion completed in 15.865629503s • [SLOW TEST:31.370 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 5 13:42:02.291: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap configmap-5368/configmap-test-fcae1ea4-86c3-45bd-b51c-7c72b05fe783 STEP: Creating a pod to test consume configMaps Feb 5 13:42:02.547: INFO: Waiting up to 5m0s for pod "pod-configmaps-14a7548b-2a1a-4328-8aca-bfcab56082ad" in namespace "configmap-5368" to be "success or failure" Feb 5 13:42:02.587: INFO: Pod "pod-configmaps-14a7548b-2a1a-4328-8aca-bfcab56082ad": Phase="Pending", Reason="", readiness=false. 
Elapsed: 39.740134ms Feb 5 13:42:04.609: INFO: Pod "pod-configmaps-14a7548b-2a1a-4328-8aca-bfcab56082ad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06193874s Feb 5 13:42:06.619: INFO: Pod "pod-configmaps-14a7548b-2a1a-4328-8aca-bfcab56082ad": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07179365s Feb 5 13:42:09.902: INFO: Pod "pod-configmaps-14a7548b-2a1a-4328-8aca-bfcab56082ad": Phase="Pending", Reason="", readiness=false. Elapsed: 7.355218804s Feb 5 13:42:11.915: INFO: Pod "pod-configmaps-14a7548b-2a1a-4328-8aca-bfcab56082ad": Phase="Pending", Reason="", readiness=false. Elapsed: 9.368181909s Feb 5 13:42:13.932: INFO: Pod "pod-configmaps-14a7548b-2a1a-4328-8aca-bfcab56082ad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.384334038s STEP: Saw pod success Feb 5 13:42:13.932: INFO: Pod "pod-configmaps-14a7548b-2a1a-4328-8aca-bfcab56082ad" satisfied condition "success or failure" Feb 5 13:42:13.937: INFO: Trying to get logs from node iruya-node pod pod-configmaps-14a7548b-2a1a-4328-8aca-bfcab56082ad container env-test: STEP: delete the pod Feb 5 13:42:14.075: INFO: Waiting for pod pod-configmaps-14a7548b-2a1a-4328-8aca-bfcab56082ad to disappear Feb 5 13:42:14.085: INFO: Pod pod-configmaps-14a7548b-2a1a-4328-8aca-bfcab56082ad no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 5 13:42:14.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5368" for this suite. 
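The env-test container above reads a value that the kubelet projected from the ConfigMap into its environment. A minimal sketch of that projection, assuming a `configMapKeyRef`-style env entry (the ConfigMap and key names here are hypothetical, loosely modeled on the test's `configmap-test-...` object, and this is not kubelet code):

```python
# Sketch (assumption: simplified model): resolve env entries whose
# valueFrom.configMapKeyRef points into ConfigMap data, the way the
# kubelet does before starting the container.
def resolve_env(env_spec, configmaps):
    resolved = {}
    for entry in env_spec:
        ref = entry["valueFrom"]["configMapKeyRef"]
        resolved[entry["name"]] = configmaps[ref["name"]][ref["key"]]
    return resolved

# Hypothetical names for illustration only.
cms = {"configmap-test": {"data-1": "value-1"}}
spec = [{"name": "CONFIG_DATA_1",
         "valueFrom": {"configMapKeyRef": {"name": "configmap-test",
                                           "key": "data-1"}}}]
print(resolve_env(spec, cms))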
Feb 5 13:42:20.119: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 5 13:42:20.251: INFO: namespace configmap-5368 deletion completed in 6.15716824s • [SLOW TEST:17.960 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 5 13:42:20.251: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting the proxy server Feb 5 13:42:20.413: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 5 13:42:20.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-996" for this suite. 
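`kubectl proxy -p 0` relies on a standard OS behavior: binding to port 0 asks the kernel for a free ephemeral port, which the proxy then reports on startup. A small self-contained demonstration of that mechanism (plain sockets, no kubectl required):

```python
# Sketch of the port-0 convention the proxy test exercises: bind to
# port 0 and read back the port the kernel actually assigned.
import socket

def bind_ephemeral():
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind(("127.0.0.1", 0))       # port 0 = let the OS choose
    port = s.getsockname()[1]      # the concrete port assigned
    s.close()
    return port

print(bind_ephemeral())
```

The test then curls `/api/` through whatever port the proxy announced, which is why `--port 0` must still yield a usable endpoint.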
Feb 5 13:42:26.626: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 5 13:42:26.732: INFO: namespace kubectl-996 deletion completed in 6.168290291s • [SLOW TEST:6.481 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 5 13:42:26.733: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Feb 5 13:42:26.875: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Feb 5 13:42:26.942: INFO: Number of nodes with available pods: 0 Feb 5 13:42:26.943: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
Feb 5 13:42:27.020: INFO: Number of nodes with available pods: 0 Feb 5 13:42:27.020: INFO: Node iruya-node is running more than one daemon pod Feb 5 13:42:28.029: INFO: Number of nodes with available pods: 0 Feb 5 13:42:28.029: INFO: Node iruya-node is running more than one daemon pod Feb 5 13:42:29.032: INFO: Number of nodes with available pods: 0 Feb 5 13:42:29.032: INFO: Node iruya-node is running more than one daemon pod Feb 5 13:42:30.030: INFO: Number of nodes with available pods: 0 Feb 5 13:42:30.030: INFO: Node iruya-node is running more than one daemon pod Feb 5 13:42:31.036: INFO: Number of nodes with available pods: 0 Feb 5 13:42:31.036: INFO: Node iruya-node is running more than one daemon pod Feb 5 13:42:32.631: INFO: Number of nodes with available pods: 0 Feb 5 13:42:32.632: INFO: Node iruya-node is running more than one daemon pod Feb 5 13:42:33.223: INFO: Number of nodes with available pods: 0 Feb 5 13:42:33.223: INFO: Node iruya-node is running more than one daemon pod Feb 5 13:42:34.045: INFO: Number of nodes with available pods: 1 Feb 5 13:42:34.045: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Feb 5 13:42:34.152: INFO: Number of nodes with available pods: 1 Feb 5 13:42:34.152: INFO: Number of running nodes: 0, number of available pods: 1 Feb 5 13:42:35.160: INFO: Number of nodes with available pods: 0 Feb 5 13:42:35.160: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Feb 5 13:42:35.180: INFO: Number of nodes with available pods: 0 Feb 5 13:42:35.180: INFO: Node iruya-node is running more than one daemon pod Feb 5 13:42:36.188: INFO: Number of nodes with available pods: 0 Feb 5 13:42:36.188: INFO: Node iruya-node is running more than one daemon pod Feb 5 13:42:37.191: INFO: Number of nodes with available pods: 0 Feb 5 13:42:37.191: INFO: Node 
iruya-node is running more than one daemon pod Feb 5 13:42:38.187: INFO: Number of nodes with available pods: 0 Feb 5 13:42:38.187: INFO: Node iruya-node is running more than one daemon pod Feb 5 13:42:39.185: INFO: Number of nodes with available pods: 0 Feb 5 13:42:39.185: INFO: Node iruya-node is running more than one daemon pod Feb 5 13:42:40.189: INFO: Number of nodes with available pods: 0 Feb 5 13:42:40.189: INFO: Node iruya-node is running more than one daemon pod Feb 5 13:42:41.194: INFO: Number of nodes with available pods: 0 Feb 5 13:42:41.194: INFO: Node iruya-node is running more than one daemon pod Feb 5 13:42:42.186: INFO: Number of nodes with available pods: 0 Feb 5 13:42:42.187: INFO: Node iruya-node is running more than one daemon pod Feb 5 13:42:43.188: INFO: Number of nodes with available pods: 0 Feb 5 13:42:43.188: INFO: Node iruya-node is running more than one daemon pod Feb 5 13:42:44.189: INFO: Number of nodes with available pods: 0 Feb 5 13:42:44.190: INFO: Node iruya-node is running more than one daemon pod Feb 5 13:42:45.200: INFO: Number of nodes with available pods: 0 Feb 5 13:42:45.200: INFO: Node iruya-node is running more than one daemon pod Feb 5 13:42:46.188: INFO: Number of nodes with available pods: 0 Feb 5 13:42:46.188: INFO: Node iruya-node is running more than one daemon pod Feb 5 13:42:47.186: INFO: Number of nodes with available pods: 0 Feb 5 13:42:47.186: INFO: Node iruya-node is running more than one daemon pod Feb 5 13:42:48.190: INFO: Number of nodes with available pods: 0 Feb 5 13:42:48.190: INFO: Node iruya-node is running more than one daemon pod Feb 5 13:42:49.188: INFO: Number of nodes with available pods: 1 Feb 5 13:42:49.188: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in 
namespace daemonsets-697, will wait for the garbage collector to delete the pods Feb 5 13:42:49.263: INFO: Deleting DaemonSet.extensions daemon-set took: 16.245157ms Feb 5 13:42:49.564: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.399721ms Feb 5 13:43:06.572: INFO: Number of nodes with available pods: 0 Feb 5 13:43:06.572: INFO: Number of running nodes: 0, number of available pods: 0 Feb 5 13:43:06.576: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-697/daemonsets","resourceVersion":"23196194"},"items":null} Feb 5 13:43:06.580: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-697/pods","resourceVersion":"23196194"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 5 13:43:06.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-697" for this suite. 
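The blue/green relabeling above works because a DaemonSet pod is only scheduled onto nodes whose labels satisfy the pod's nodeSelector. A minimal sketch of that matching predicate (illustrative, not the real scheduler; the `color` label key is an assumption standing in for whatever label the test uses):

```python
# Sketch (assumption: simplified scheduling predicate): a DaemonSet pod
# lands on a node only when the node's labels contain every key/value
# pair in the pod's nodeSelector.
def selector_matches(node_selector, node_labels):
    return all(node_labels.get(k) == v for k, v in node_selector.items())

node = {"color": "blue"}                               # label set on iruya-node
print(selector_matches({"color": "blue"}, node))       # daemon pod launches
print(selector_matches({"color": "green"}, node))      # pod evicted after relabel
```

Relabeling the node from blue to green flips the predicate, which is why the log shows available pods dropping back to 0 until the DaemonSet's own selector is updated to green.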
Feb 5 13:43:12.690: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 5 13:43:12.823: INFO: namespace daemonsets-697 deletion completed in 6.161058727s • [SLOW TEST:46.090 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 5 13:43:12.824: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test hostPath mode Feb 5 13:43:13.028: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-2851" to be "success or failure" Feb 5 13:43:13.040: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 12.003422ms Feb 5 13:43:15.054: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025760347s Feb 5 13:43:17.063: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.034878112s Feb 5 13:43:19.071: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043051402s Feb 5 13:43:21.095: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 8.067132755s Feb 5 13:43:23.103: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.075132604s STEP: Saw pod success Feb 5 13:43:23.103: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Feb 5 13:43:23.109: INFO: Trying to get logs from node iruya-node pod pod-host-path-test container test-container-1: STEP: delete the pod Feb 5 13:43:23.432: INFO: Waiting for pod pod-host-path-test to disappear Feb 5 13:43:23.443: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 5 13:43:23.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-2851" for this suite. 
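test-container-1 above essentially stats the mounted hostPath and compares its permission bits. A hedged local sketch of that kind of mode check, using a temporary directory as a stand-in for the test's volume path (the 0o777 mode and directory name are illustrative assumptions, not the e2e test's exact expectations):

```python
# Sketch (assumptions: temp dir stands in for the hostPath mount, and
# 0o777 is an illustrative target mode): create a directory, set its
# mode explicitly, then verify the permission bits via stat.
import os
import stat
import tempfile

path = tempfile.mkdtemp()                  # stand-in for the volume path
os.chmod(path, 0o777)                      # set the mode under test
mode = stat.S_IMODE(os.stat(path).st_mode) # extract permission bits only
print(oct(mode))
```

The real test does the equivalent inside the pod, which is why a wrong `mountPath` mode would surface as a container failure rather than an API error.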
Feb 5 13:43:30.110: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 5 13:43:30.290: INFO: namespace hostpath-2851 deletion completed in 6.836381256s
• [SLOW TEST:17.466 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 5 13:43:30.290: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Feb 5 13:43:48.658: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 5 13:43:48.668: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 5 13:43:50.669: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 5 13:43:50.677: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 5 13:43:52.669: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 5 13:43:52.683: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 5 13:43:54.668: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 5 13:43:54.681: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 5 13:43:56.669: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 5 13:43:56.685: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 5 13:43:58.668: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 5 13:43:58.676: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 5 13:44:00.669: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 5 13:44:01.224: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 5 13:44:02.668: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 5 13:44:02.678: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 5 13:44:04.668: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 5 13:44:04.691: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 5 13:44:06.669: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 5 13:44:06.681: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 5 13:44:08.669: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 5 13:44:08.679: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 5 13:44:10.668: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 5 13:44:10.680: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 5 13:44:12.668: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 5 13:44:12.686: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 5 13:44:14.669: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 5 13:44:14.679: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 5 13:44:16.669: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 5 13:44:16.705: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 5 13:44:16.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-8940" for this suite.
Feb 5 13:44:38.773: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 5 13:44:38.878: INFO: namespace container-lifecycle-hook-8940 deletion completed in 22.132705332s
• [SLOW TEST:68.587 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 5 13:44:38.878: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Feb 5 13:44:39.022: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Feb 5 13:44:44.045: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Feb 5 13:44:48.063: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Feb 5 13:44:48.103: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-6006,SelfLink:/apis/apps/v1/namespaces/deployment-6006/deployments/test-cleanup-deployment,UID:894a8f9c-1b16-42bd-99c4-fe8f156de4a5,ResourceVersion:23196437,Generation:1,CreationTimestamp:2020-02-05 13:44:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} Feb 5 13:44:48.128: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-6006,SelfLink:/apis/apps/v1/namespaces/deployment-6006/replicasets/test-cleanup-deployment-55bbcbc84c,UID:393b938c-08d2-4340-b310-7a89f6c26855,ResourceVersion:23196439,Generation:1,CreationTimestamp:2020-02-05 13:44:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 894a8f9c-1b16-42bd-99c4-fe8f156de4a5 0xc0030594f7 0xc0030594f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 5 13:44:48.128: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Feb 5 13:44:48.129: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-6006,SelfLink:/apis/apps/v1/namespaces/deployment-6006/replicasets/test-cleanup-controller,UID:3e0ee118-0af2-4aeb-9d40-da2aa507ba77,ResourceVersion:23196438,Generation:1,CreationTimestamp:2020-02-05 13:44:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 894a8f9c-1b16-42bd-99c4-fe8f156de4a5 0xc003059427 0xc003059428}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Feb 5 13:44:48.204: INFO: Pod "test-cleanup-controller-wph5m" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-wph5m,GenerateName:test-cleanup-controller-,Namespace:deployment-6006,SelfLink:/api/v1/namespaces/deployment-6006/pods/test-cleanup-controller-wph5m,UID:afff411b-ca9f-44a7-b56a-9c15c71ad91e,ResourceVersion:23196434,Generation:0,CreationTimestamp:2020-02-05 13:44:39 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 3e0ee118-0af2-4aeb-9d40-da2aa507ba77 0xc0004e1217 0xc0004e1218}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-q4whn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q4whn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-q4whn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0004e13e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0004e1410}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:44:39 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:44:46 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:44:46 +0000 UTC } {PodScheduled True 
0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:44:39 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-02-05 13:44:39 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-05 13:44:45 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://0f5096347e9b35043b3817cdee351e2335fe85db0cd2c64261440e6b6bc656a3}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 5 13:44:48.205: INFO: Pod "test-cleanup-deployment-55bbcbc84c-zjwt4" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-zjwt4,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-6006,SelfLink:/api/v1/namespaces/deployment-6006/pods/test-cleanup-deployment-55bbcbc84c-zjwt4,UID:c2d5f5aa-73b9-4e77-9270-cc82865cc6da,ResourceVersion:23196442,Generation:0,CreationTimestamp:2020-02-05 13:44:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c 393b938c-08d2-4340-b310-7a89f6c26855 0xc0004e1617 0xc0004e1618}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-q4whn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q4whn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-q4whn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0004e1710} {node.kubernetes.io/unreachable Exists NoExecute 0xc0004e1780}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 5 13:44:48.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6006" for this suite. 
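The Deployment dump above shows `RevisionHistoryLimit:*0` on `test-cleanup-deployment`, which is what makes the old `test-cleanup-controller` ReplicaSet eligible for deletion as soon as it is scaled down. A minimal manifest with the same cleanup behavior can be sketched as below; the labels, image, and replica count are taken from the dump, but this is a reconstruction, not the test's actual input file.

```yaml
# Sketch reconstructed from the Deployment dump above; not the
# e2e test's real manifest.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-cleanup-deployment
  labels:
    name: cleanup-pod
spec:
  replicas: 1
  revisionHistoryLimit: 0   # keep no old ReplicaSets once they are scaled down
  selector:
    matchLabels:
      name: cleanup-pod
  template:
    metadata:
      labels:
        name: cleanup-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
```

With `revisionHistoryLimit: 0`, "Waiting for deployment test-cleanup-deployment history to be cleaned up" succeeds once the superseded ReplicaSet is garbage collected rather than retained for rollback.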
Feb 5 13:44:54.360: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 5 13:44:54.547: INFO: namespace deployment-6006 deletion completed in 6.287451568s
• [SLOW TEST:15.669 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 5 13:44:54.548: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0205 13:45:36.228656 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 5 13:45:36.228: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 5 13:45:36.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9867" for this suite.
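The orphaning behavior exercised by this test ("delete the rc", then verify the garbage collector leaves the pods alone for 30 seconds) is driven by the `propagationPolicy` in the delete request. As a hedged sketch of the DeleteOptions body an API client would send (the test's actual client call is not shown in this log):

```yaml
# DeleteOptions body for an orphaning delete: the ReplicationController
# is removed, but its pods lose their owner reference and keep running,
# which is what the 30-second wait above verifies.
apiVersion: v1
kind: DeleteOptions
propagationPolicy: Orphan
```

With a kubectl contemporary with this v1.15 log, the equivalent is a non-cascading delete (`kubectl delete rc <name> --cascade=false`); later clients spell it `--cascade=orphan`.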
Feb 5 13:45:45.711: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 5 13:45:46.901: INFO: namespace gc-9867 deletion completed in 10.664906546s
• [SLOW TEST:52.353 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 5 13:45:46.902: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 5 13:45:47.519: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dc510ca9-f574-4ee0-8eab-9601bb5d66fd" in namespace "downward-api-8011" to be "success or failure"
Feb 5 13:45:47.739: INFO: Pod "downwardapi-volume-dc510ca9-f574-4ee0-8eab-9601bb5d66fd": Phase="Pending", Reason="", readiness=false. Elapsed: 219.135707ms
Feb 5 13:45:49.756: INFO: Pod "downwardapi-volume-dc510ca9-f574-4ee0-8eab-9601bb5d66fd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.236038238s
Feb 5 13:45:51.767: INFO: Pod "downwardapi-volume-dc510ca9-f574-4ee0-8eab-9601bb5d66fd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.247408378s
Feb 5 13:45:53.791: INFO: Pod "downwardapi-volume-dc510ca9-f574-4ee0-8eab-9601bb5d66fd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.271477816s
Feb 5 13:45:55.797: INFO: Pod "downwardapi-volume-dc510ca9-f574-4ee0-8eab-9601bb5d66fd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.277925661s
Feb 5 13:45:57.810: INFO: Pod "downwardapi-volume-dc510ca9-f574-4ee0-8eab-9601bb5d66fd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.290088702s
Feb 5 13:45:59.817: INFO: Pod "downwardapi-volume-dc510ca9-f574-4ee0-8eab-9601bb5d66fd": Phase="Pending", Reason="", readiness=false. Elapsed: 12.297249405s
Feb 5 13:46:01.833: INFO: Pod "downwardapi-volume-dc510ca9-f574-4ee0-8eab-9601bb5d66fd": Phase="Running", Reason="", readiness=true. Elapsed: 14.313462659s
Feb 5 13:46:03.849: INFO: Pod "downwardapi-volume-dc510ca9-f574-4ee0-8eab-9601bb5d66fd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.329183575s
STEP: Saw pod success
Feb 5 13:46:03.849: INFO: Pod "downwardapi-volume-dc510ca9-f574-4ee0-8eab-9601bb5d66fd" satisfied condition "success or failure"
Feb 5 13:46:03.861: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-dc510ca9-f574-4ee0-8eab-9601bb5d66fd container client-container:
STEP: delete the pod
Feb 5 13:46:03.964: INFO: Waiting for pod downwardapi-volume-dc510ca9-f574-4ee0-8eab-9601bb5d66fd to disappear
Feb 5 13:46:03.974: INFO: Pod downwardapi-volume-dc510ca9-f574-4ee0-8eab-9601bb5d66fd no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 5 13:46:03.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8011" for this suite.
Feb 5 13:46:10.065: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 5 13:46:10.172: INFO: namespace downward-api-8011 deletion completed in 6.192256432s
• [SLOW TEST:23.271 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-apps] Deployment deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 5 13:46:10.173: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Feb 5 13:46:10.227: INFO: Creating deployment "nginx-deployment" Feb 5 13:46:10.243: INFO: Waiting for observed generation 1 Feb 5 13:46:14.238: INFO: Waiting for all required pods to come up Feb 5 13:46:14.504: INFO: Pod name nginx: Found 10 pods out of 10 STEP: ensuring each pod is running Feb 5 13:46:46.530: INFO: Waiting for deployment "nginx-deployment" to complete Feb 5 13:46:46.541: INFO: Updating deployment "nginx-deployment" with a non-existent image Feb 5 13:46:46.553: INFO: Updating deployment nginx-deployment Feb 5 13:46:46.553: INFO: Waiting for observed generation 2 Feb 5 13:46:51.085: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Feb 5 13:46:52.765: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Feb 5 13:46:52.879: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Feb 5 13:46:55.082: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Feb 5 13:46:55.082: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Feb 5 13:46:55.172: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Feb 5 13:46:55.960: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas Feb 5 13:46:55.960: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30 Feb 5 13:46:55.982: INFO: Updating deployment nginx-deployment Feb 5 13:46:55.982: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas Feb 5 13:46:56.558: INFO: Verifying that first 
rollout's replicaset has .spec.replicas = 20 Feb 5 13:46:57.446: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Feb 5 13:46:59.317: INFO: Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-5246,SelfLink:/apis/apps/v1/namespaces/deployment-5246/deployments/nginx-deployment,UID:0cb83f0b-48a0-4284-b560-8970da0df06d,ResourceVersion:23197014,Generation:3,CreationTimestamp:2020-02-05 13:46:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-02-05 13:46:51 +0000 UTC 2020-02-05 13:46:10 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.} {Available False 2020-02-05 13:46:56 +0000 UTC 2020-02-05 13:46:56 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},} Feb 5 13:47:00.895: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-5246,SelfLink:/apis/apps/v1/namespaces/deployment-5246/replicasets/nginx-deployment-55fb7cb77f,UID:1c1a97d7-5d38-4052-9bd9-6b0e606cd8fe,ResourceVersion:23197064,Generation:3,CreationTimestamp:2020-02-05 13:46:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 0cb83f0b-48a0-4284-b560-8970da0df06d 0xc001cde037 0xc001cde038}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 5 13:47:00.895: INFO: All old ReplicaSets of Deployment "nginx-deployment": Feb 5 13:47:00.895: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-5246,SelfLink:/apis/apps/v1/namespaces/deployment-5246/replicasets/nginx-deployment-7b8c6f4498,UID:2df8efa2-92f6-47a1-8841-d307dfeb1457,ResourceVersion:23197057,Generation:3,CreationTimestamp:2020-02-05 13:46:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 0cb83f0b-48a0-4284-b560-8970da0df06d 0xc001cde117 0xc001cde118}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Feb 5 13:47:03.031: INFO: Pod "nginx-deployment-55fb7cb77f-6tw49" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-6tw49,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5246,SelfLink:/api/v1/namespaces/deployment-5246/pods/nginx-deployment-55fb7cb77f-6tw49,UID:5485cb91-0d50-4337-a2ac-1a79b4db59a4,ResourceVersion:23197034,Generation:0,CreationTimestamp:2020-02-05 13:46:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 1c1a97d7-5d38-4052-9bd9-6b0e606cd8fe 0xc001cdea77 0xc001cdea78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-szv5s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-szv5s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-szv5s true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc001cdeaf0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001cdeb10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:46:57 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 5 13:47:03.031: INFO: Pod "nginx-deployment-55fb7cb77f-8sb56" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-8sb56,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5246,SelfLink:/api/v1/namespaces/deployment-5246/pods/nginx-deployment-55fb7cb77f-8sb56,UID:f0850b9c-5bce-4eef-ad23-0b10b31c0148,ResourceVersion:23197047,Generation:0,CreationTimestamp:2020-02-05 13:46:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 1c1a97d7-5d38-4052-9bd9-6b0e606cd8fe 0xc001cdeb97 0xc001cdeb98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-szv5s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-szv5s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-szv5s true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001cdec00} {node.kubernetes.io/unreachable Exists NoExecute 0xc001cdec20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:46:57 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 5 13:47:03.031: INFO: Pod "nginx-deployment-55fb7cb77f-bkvcf" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-bkvcf,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5246,SelfLink:/api/v1/namespaces/deployment-5246/pods/nginx-deployment-55fb7cb77f-bkvcf,UID:0f7f9c1f-9a39-4617-b406-7067da4a080d,ResourceVersion:23197016,Generation:0,CreationTimestamp:2020-02-05 13:46:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 1c1a97d7-5d38-4052-9bd9-6b0e606cd8fe 0xc001cdeca7 
0xc001cdeca8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-szv5s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-szv5s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-szv5s true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001cded10} {node.kubernetes.io/unreachable Exists NoExecute 0xc001cded30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:46:57 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 5 13:47:03.031: INFO: Pod "nginx-deployment-55fb7cb77f-fv9vx" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-fv9vx,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5246,SelfLink:/api/v1/namespaces/deployment-5246/pods/nginx-deployment-55fb7cb77f-fv9vx,UID:2697096d-c0bb-428b-8ea6-e68168fafe33,ResourceVersion:23197061,Generation:0,CreationTimestamp:2020-02-05 13:46:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 1c1a97d7-5d38-4052-9bd9-6b0e606cd8fe 0xc001cdedb7 0xc001cdedb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-szv5s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-szv5s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-szv5s true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001cdee20} {node.kubernetes.io/unreachable Exists NoExecute 0xc001cdee40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:46:59 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 5 13:47:03.032: INFO: Pod "nginx-deployment-55fb7cb77f-j4xwq" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-j4xwq,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5246,SelfLink:/api/v1/namespaces/deployment-5246/pods/nginx-deployment-55fb7cb77f-j4xwq,UID:3463cfc2-9b55-4c87-b08d-b37be4827454,ResourceVersion:23196997,Generation:0,CreationTimestamp:2020-02-05 13:46:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 1c1a97d7-5d38-4052-9bd9-6b0e606cd8fe 0xc001cdeec7 
0xc001cdeec8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-szv5s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-szv5s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-szv5s true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001cdef40} {node.kubernetes.io/unreachable Exists NoExecute 0xc001cdef60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:46:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:46:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:46:51 +0000 UTC ContainersNotReady containers with unready 
status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:46:50 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-05 13:46:51 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 5 13:47:03.032: INFO: Pod "nginx-deployment-55fb7cb77f-mjs5d" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-mjs5d,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5246,SelfLink:/api/v1/namespaces/deployment-5246/pods/nginx-deployment-55fb7cb77f-mjs5d,UID:6829d6b1-38fc-422a-a334-5541dea99ee9,ResourceVersion:23197043,Generation:0,CreationTimestamp:2020-02-05 13:46:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 1c1a97d7-5d38-4052-9bd9-6b0e606cd8fe 0xc001cdf037 0xc001cdf038}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-szv5s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-szv5s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-szv5s true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001cdf0b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001cdf0d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:46:57 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 5 13:47:03.032: INFO: Pod "nginx-deployment-55fb7cb77f-mxx9k" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-mxx9k,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5246,SelfLink:/api/v1/namespaces/deployment-5246/pods/nginx-deployment-55fb7cb77f-mxx9k,UID:744b498e-496e-4841-bcdc-971e0d8da886,ResourceVersion:23196961,Generation:0,CreationTimestamp:2020-02-05 13:46:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 1c1a97d7-5d38-4052-9bd9-6b0e606cd8fe 0xc001cdf157 0xc001cdf158}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-szv5s {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-szv5s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-szv5s true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001cdf1d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001cdf1f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:46:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:46:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:46:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:46:46 +0000 UTC 
}],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-05 13:46:46 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 5 13:47:03.032: INFO: Pod "nginx-deployment-55fb7cb77f-nzwcc" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-nzwcc,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5246,SelfLink:/api/v1/namespaces/deployment-5246/pods/nginx-deployment-55fb7cb77f-nzwcc,UID:ab59af68-c73d-4aa2-a5ba-afecff2829f3,ResourceVersion:23196998,Generation:0,CreationTimestamp:2020-02-05 13:46:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 1c1a97d7-5d38-4052-9bd9-6b0e606cd8fe 0xc001cdf2c7 0xc001cdf2c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-szv5s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-szv5s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-szv5s true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001cdf330} {node.kubernetes.io/unreachable Exists NoExecute 0xc001cdf350}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:46:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:46:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:46:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:46:47 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-02-05 13:46:51 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 5 13:47:03.032: INFO: Pod "nginx-deployment-55fb7cb77f-p6cws" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-p6cws,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5246,SelfLink:/api/v1/namespaces/deployment-5246/pods/nginx-deployment-55fb7cb77f-p6cws,UID:edd7b81a-dd4b-4282-8419-b92025af10c1,ResourceVersion:23196966,Generation:0,CreationTimestamp:2020-02-05 13:46:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 1c1a97d7-5d38-4052-9bd9-6b0e606cd8fe 0xc001cdf427 0xc001cdf428}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-szv5s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-szv5s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-szv5s true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001cdf490} {node.kubernetes.io/unreachable Exists NoExecute 0xc001cdf4b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:46:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:46:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:46:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:46:46 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-02-05 13:46:46 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 5 13:47:03.032: INFO: Pod "nginx-deployment-55fb7cb77f-q75zn" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-q75zn,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5246,SelfLink:/api/v1/namespaces/deployment-5246/pods/nginx-deployment-55fb7cb77f-q75zn,UID:57b25ed8-9b0d-4cc7-965d-d248ff34815f,ResourceVersion:23197052,Generation:0,CreationTimestamp:2020-02-05 13:46:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 1c1a97d7-5d38-4052-9bd9-6b0e606cd8fe 0xc001cdf587 0xc001cdf588}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-szv5s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-szv5s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-szv5s true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc001cdf600} {node.kubernetes.io/unreachable Exists NoExecute 0xc001cdf620}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:46:57 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 5 13:47:03.033: INFO: Pod "nginx-deployment-55fb7cb77f-r97pd" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-r97pd,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5246,SelfLink:/api/v1/namespaces/deployment-5246/pods/nginx-deployment-55fb7cb77f-r97pd,UID:bbdf6585-fe73-4d16-a61e-645aaca4834c,ResourceVersion:23197053,Generation:0,CreationTimestamp:2020-02-05 13:46:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 1c1a97d7-5d38-4052-9bd9-6b0e606cd8fe 0xc001cdf6b7 0xc001cdf6b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-szv5s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-szv5s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-szv5s true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001cdf720} {node.kubernetes.io/unreachable Exists NoExecute 0xc001cdf740}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:46:57 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 5 13:47:03.033: INFO: Pod "nginx-deployment-55fb7cb77f-tnzhc" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-tnzhc,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5246,SelfLink:/api/v1/namespaces/deployment-5246/pods/nginx-deployment-55fb7cb77f-tnzhc,UID:09c8d293-1961-4d16-b365-3b8bccd13df6,ResourceVersion:23197036,Generation:0,CreationTimestamp:2020-02-05 13:46:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 1c1a97d7-5d38-4052-9bd9-6b0e606cd8fe 0xc001cdf7c7 
0xc001cdf7c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-szv5s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-szv5s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-szv5s true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001cdf840} {node.kubernetes.io/unreachable Exists NoExecute 0xc001cdf860}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:46:57 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 5 13:47:03.033: INFO: Pod "nginx-deployment-55fb7cb77f-w9s94" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-w9s94,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-5246,SelfLink:/api/v1/namespaces/deployment-5246/pods/nginx-deployment-55fb7cb77f-w9s94,UID:0fe9c24f-9e86-4b24-84b8-878c9cf85ab4,ResourceVersion:23196972,Generation:0,CreationTimestamp:2020-02-05 13:46:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 1c1a97d7-5d38-4052-9bd9-6b0e606cd8fe 0xc001cdf8e7 0xc001cdf8e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-szv5s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-szv5s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-szv5s true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc001cdf960} {node.kubernetes.io/unreachable Exists NoExecute 0xc001cdf980}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:46:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:46:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:46:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:46:46 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-05 13:46:47 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 5 13:47:03.033: INFO: Pod "nginx-deployment-7b8c6f4498-4cn9x" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-4cn9x,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5246,SelfLink:/api/v1/namespaces/deployment-5246/pods/nginx-deployment-7b8c6f4498-4cn9x,UID:73ec212b-2f2d-4762-8bcb-21097e020f5b,ResourceVersion:23196926,Generation:0,CreationTimestamp:2020-02-05 13:46:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2df8efa2-92f6-47a1-8841-d307dfeb1457 0xc001cdfa57 0xc001cdfa58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-szv5s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-szv5s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-szv5s true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001cdfac0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001cdfae0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:46:10 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:46:44 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:46:44 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:46:10 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.6,StartTime:2020-02-05 13:46:10 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-05 13:46:43 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine 
docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://fc946e5e8994a8e61e6cfc67d318e6cadbf9077fb02aaeee15c41d675ba191b7}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 5 13:47:03.033: INFO: Pod "nginx-deployment-7b8c6f4498-5zkzn" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-5zkzn,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5246,SelfLink:/api/v1/namespaces/deployment-5246/pods/nginx-deployment-7b8c6f4498-5zkzn,UID:72142b84-b83e-42f1-88eb-22df232b97a0,ResourceVersion:23197054,Generation:0,CreationTimestamp:2020-02-05 13:46:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2df8efa2-92f6-47a1-8841-d307dfeb1457 0xc001cdfbb7 0xc001cdfbb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-szv5s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-szv5s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-szv5s true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001cdfc30} {node.kubernetes.io/unreachable Exists NoExecute 0xc001cdfc50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:46:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:46:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:46:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:46:57 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-05 13:46:57 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 5 13:47:03.033: INFO: Pod "nginx-deployment-7b8c6f4498-62ztf" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-62ztf,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5246,SelfLink:/api/v1/namespaces/deployment-5246/pods/nginx-deployment-7b8c6f4498-62ztf,UID:029baa44-66d5-4356-9074-06c94b4d1376,ResourceVersion:23197015,Generation:0,CreationTimestamp:2020-02-05 13:46:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2df8efa2-92f6-47a1-8841-d307dfeb1457 0xc001cdfd17 0xc001cdfd18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-szv5s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-szv5s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-szv5s true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001cdfd80} {node.kubernetes.io/unreachable Exists NoExecute 0xc001cdfda0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:46:57 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 5 13:47:03.034: INFO: Pod "nginx-deployment-7b8c6f4498-8vmzq" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-8vmzq,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5246,SelfLink:/api/v1/namespaces/deployment-5246/pods/nginx-deployment-7b8c6f4498-8vmzq,UID:4c37db7e-773f-4752-b286-89a57250fa1d,ResourceVersion:23196894,Generation:0,CreationTimestamp:2020-02-05 13:46:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2df8efa2-92f6-47a1-8841-d307dfeb1457 0xc001cdfe27 
0xc001cdfe28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-szv5s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-szv5s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-szv5s true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001cdfea0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001cdfec0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:46:11 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:46:41 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:46:41 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:46:10 +0000 UTC 
}],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.5,StartTime:2020-02-05 13:46:11 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-05 13:46:41 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://90e83927f6e074001ff1c0a6a5b658bf73f242c62ea5a8cb66cb10e5d9440d5b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 5 13:47:03.034: INFO: Pod "nginx-deployment-7b8c6f4498-9qzwh" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-9qzwh,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5246,SelfLink:/api/v1/namespaces/deployment-5246/pods/nginx-deployment-7b8c6f4498-9qzwh,UID:a4f1af7d-02bc-4a60-8cca-5b8260cef3e8,ResourceVersion:23196910,Generation:0,CreationTimestamp:2020-02-05 13:46:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2df8efa2-92f6-47a1-8841-d307dfeb1457 0xc001cdff97 0xc001cdff98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-szv5s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-szv5s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-szv5s true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002510010} {node.kubernetes.io/unreachable Exists NoExecute 0xc002510030}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:46:10 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:46:41 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:46:41 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:46:10 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-02-05 13:46:10 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-05 13:46:40 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://4ecc2ebf7f4ab15afdd714d1529f392f4f825abc9a63d6cdc0bf72906459f72f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 5 13:47:03.034: INFO: Pod "nginx-deployment-7b8c6f4498-9zszd" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-9zszd,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5246,SelfLink:/api/v1/namespaces/deployment-5246/pods/nginx-deployment-7b8c6f4498-9zszd,UID:92e15220-fcc7-4626-b97c-4bb57b77bcaa,ResourceVersion:23196915,Generation:0,CreationTimestamp:2020-02-05 13:46:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2df8efa2-92f6-47a1-8841-d307dfeb1457 0xc002510107 0xc002510108}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-szv5s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-szv5s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-szv5s true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002510180} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025101a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:46:13 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:46:42 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:46:42 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:46:10 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.3,StartTime:2020-02-05 13:46:13 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-05 13:46:41 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://1e9e32e52d0db30c000a0a73ae7319e14331a6db06845058e5ee88a5380b4f83}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 5 13:47:03.034: INFO: Pod "nginx-deployment-7b8c6f4498-b8ctt" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-b8ctt,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5246,SelfLink:/api/v1/namespaces/deployment-5246/pods/nginx-deployment-7b8c6f4498-b8ctt,UID:57d8bf7a-1c2f-4426-a02c-0efe7f7c623b,ResourceVersion:23197031,Generation:0,CreationTimestamp:2020-02-05 13:46:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2df8efa2-92f6-47a1-8841-d307dfeb1457 0xc002510287 0xc002510288}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-szv5s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-szv5s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-szv5s true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002510300} {node.kubernetes.io/unreachable Exists NoExecute 0xc002510320}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:46:57 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 5 13:47:03.034: INFO: Pod "nginx-deployment-7b8c6f4498-b99sg" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-b99sg,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5246,SelfLink:/api/v1/namespaces/deployment-5246/pods/nginx-deployment-7b8c6f4498-b99sg,UID:ff71299e-e75e-4684-a5eb-ffd59e17fd51,ResourceVersion:23197030,Generation:0,CreationTimestamp:2020-02-05 13:46:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2df8efa2-92f6-47a1-8841-d307dfeb1457 0xc0025103a7 0xc0025103a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-szv5s {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-szv5s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-szv5s true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002510410} {node.kubernetes.io/unreachable Exists NoExecute 0xc002510430}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:46:57 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 5 13:47:03.035: INFO: Pod "nginx-deployment-7b8c6f4498-brxdd" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-brxdd,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5246,SelfLink:/api/v1/namespaces/deployment-5246/pods/nginx-deployment-7b8c6f4498-brxdd,UID:e399aae7-97b1-4543-ac07-cad58ecc206e,ResourceVersion:23197069,Generation:0,CreationTimestamp:2020-02-05 13:46:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2df8efa2-92f6-47a1-8841-d307dfeb1457 0xc0025104b7 0xc0025104b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-szv5s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-szv5s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-szv5s true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002510530} {node.kubernetes.io/unreachable Exists NoExecute 0xc002510550}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:46:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:46:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:46:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:46:57 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-05 13:46:57 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 5 13:47:03.035: INFO: Pod "nginx-deployment-7b8c6f4498-bvh9r" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-bvh9r,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5246,SelfLink:/api/v1/namespaces/deployment-5246/pods/nginx-deployment-7b8c6f4498-bvh9r,UID:6af89efd-b09c-42a7-b909-a6d703997a50,ResourceVersion:23196898,Generation:0,CreationTimestamp:2020-02-05 13:46:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2df8efa2-92f6-47a1-8841-d307dfeb1457 0xc002510617 0xc002510618}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-szv5s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-szv5s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-szv5s true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002510690} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025106b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:46:10 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:46:41 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:46:41 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:46:10 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-02-05 13:46:10 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-05 13:46:40 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://e8de7ef39b29f7fdac6914d3198803c860c60cb9a2dac400c1e881a085bf9414}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 5 13:47:03.035: INFO: Pod "nginx-deployment-7b8c6f4498-dqxk5" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-dqxk5,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5246,SelfLink:/api/v1/namespaces/deployment-5246/pods/nginx-deployment-7b8c6f4498-dqxk5,UID:7b958e55-2e36-4c5a-956f-cadacc71a832,ResourceVersion:23197046,Generation:0,CreationTimestamp:2020-02-05 13:46:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2df8efa2-92f6-47a1-8841-d307dfeb1457 0xc002510787 0xc002510788}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-szv5s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-szv5s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-szv5s true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025107f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002510810}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:46:57 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 5 13:47:03.035: INFO: Pod "nginx-deployment-7b8c6f4498-l24gc" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-l24gc,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5246,SelfLink:/api/v1/namespaces/deployment-5246/pods/nginx-deployment-7b8c6f4498-l24gc,UID:5ec4a181-eb97-4715-9d91-c3296fa68ccb,ResourceVersion:23197058,Generation:0,CreationTimestamp:2020-02-05 13:46:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2df8efa2-92f6-47a1-8841-d307dfeb1457 0xc002510897 
0xc002510898}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-szv5s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-szv5s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-szv5s true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002510900} {node.kubernetes.io/unreachable Exists NoExecute 0xc002510920}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:46:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:46:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:46:57 +0000 UTC 
ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:46:56 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-02-05 13:46:57 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 5 13:47:03.035: INFO: Pod "nginx-deployment-7b8c6f4498-n48pc" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-n48pc,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5246,SelfLink:/api/v1/namespaces/deployment-5246/pods/nginx-deployment-7b8c6f4498-n48pc,UID:588ca220-ba44-457d-a388-45b3a39b6edf,ResourceVersion:23197033,Generation:0,CreationTimestamp:2020-02-05 13:46:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2df8efa2-92f6-47a1-8841-d307dfeb1457 0xc0025109e7 0xc0025109e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-szv5s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-szv5s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-szv5s true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002510a50} {node.kubernetes.io/unreachable Exists NoExecute 0xc002510a70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:46:57 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 5 13:47:03.035: INFO: Pod "nginx-deployment-7b8c6f4498-rjdh9" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-rjdh9,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5246,SelfLink:/api/v1/namespaces/deployment-5246/pods/nginx-deployment-7b8c6f4498-rjdh9,UID:d3385939-2247-4065-967d-c06a384523e5,ResourceVersion:23197044,Generation:0,CreationTimestamp:2020-02-05 13:46:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2df8efa2-92f6-47a1-8841-d307dfeb1457 0xc002510af7 
0xc002510af8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-szv5s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-szv5s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-szv5s true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002510b70} {node.kubernetes.io/unreachable Exists NoExecute 0xc002510b90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:46:57 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 5 13:47:03.036: INFO: Pod "nginx-deployment-7b8c6f4498-s4pzg" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-s4pzg,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5246,SelfLink:/api/v1/namespaces/deployment-5246/pods/nginx-deployment-7b8c6f4498-s4pzg,UID:ef2a2908-ef0e-4328-9708-cd4f26d0e336,ResourceVersion:23197048,Generation:0,CreationTimestamp:2020-02-05 13:46:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2df8efa2-92f6-47a1-8841-d307dfeb1457 0xc002510c17 0xc002510c18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-szv5s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-szv5s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-szv5s true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002510c80} {node.kubernetes.io/unreachable Exists NoExecute 0xc002510ca0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:46:57 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 5 13:47:03.036: INFO: Pod "nginx-deployment-7b8c6f4498-s5j8b" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-s5j8b,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5246,SelfLink:/api/v1/namespaces/deployment-5246/pods/nginx-deployment-7b8c6f4498-s5j8b,UID:61643973-1727-4bac-9bcc-cece20fb3d8f,ResourceVersion:23196933,Generation:0,CreationTimestamp:2020-02-05 13:46:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2df8efa2-92f6-47a1-8841-d307dfeb1457 0xc002510d27 
0xc002510d28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-szv5s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-szv5s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-szv5s true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002510d90} {node.kubernetes.io/unreachable Exists NoExecute 0xc002510db0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:46:10 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:46:45 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:46:45 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 
13:46:10 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.4,StartTime:2020-02-05 13:46:10 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-05 13:46:38 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://abf2f1c09354246eb3c7a01edd552ccb03d9f2ce3fd059c81f369bcf66247876}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 5 13:47:03.036: INFO: Pod "nginx-deployment-7b8c6f4498-vc757" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-vc757,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5246,SelfLink:/api/v1/namespaces/deployment-5246/pods/nginx-deployment-7b8c6f4498-vc757,UID:26a9af08-b11c-4da7-9728-1f6db9510cac,ResourceVersion:23196922,Generation:0,CreationTimestamp:2020-02-05 13:46:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2df8efa2-92f6-47a1-8841-d307dfeb1457 0xc002510e87 0xc002510e88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-szv5s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-szv5s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-szv5s true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002510f20} {node.kubernetes.io/unreachable Exists NoExecute 0xc002510f40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:46:10 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:46:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:46:43 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:46:10 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.7,StartTime:2020-02-05 13:46:10 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-05 13:46:43 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://0e31ca50aebc409fe84c5c93f47a0081a28d8e7b7d2e5122c3b2c77ddee08bf6}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 5 13:47:03.036: INFO: Pod "nginx-deployment-7b8c6f4498-vg79n" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-vg79n,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5246,SelfLink:/api/v1/namespaces/deployment-5246/pods/nginx-deployment-7b8c6f4498-vg79n,UID:5e2131e5-f62c-49b4-b4d1-bc1dc2fda512,ResourceVersion:23197045,Generation:0,CreationTimestamp:2020-02-05 13:46:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2df8efa2-92f6-47a1-8841-d307dfeb1457 0xc002511037 0xc002511038}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-szv5s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-szv5s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-szv5s true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025110d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025110f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:46:57 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 5 13:47:03.036: INFO: Pod "nginx-deployment-7b8c6f4498-xwmxr" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-xwmxr,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5246,SelfLink:/api/v1/namespaces/deployment-5246/pods/nginx-deployment-7b8c6f4498-xwmxr,UID:849eb264-c20d-4f4a-9f2d-eb0467e65636,ResourceVersion:23197051,Generation:0,CreationTimestamp:2020-02-05 13:46:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2df8efa2-92f6-47a1-8841-d307dfeb1457 0xc002511187 0xc002511188}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-szv5s {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-szv5s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-szv5s true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025111f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002511210}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:46:57 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 5 13:47:03.036: INFO: Pod "nginx-deployment-7b8c6f4498-xwt2h" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-xwt2h,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-5246,SelfLink:/api/v1/namespaces/deployment-5246/pods/nginx-deployment-7b8c6f4498-xwt2h,UID:e50fbaf3-c945-4f74-8df7-24c8a5c8c059,ResourceVersion:23196906,Generation:0,CreationTimestamp:2020-02-05 13:46:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2df8efa2-92f6-47a1-8841-d307dfeb1457 0xc002511297 0xc002511298}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-szv5s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-szv5s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-szv5s true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002511310} {node.kubernetes.io/unreachable Exists NoExecute 0xc002511360}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:46:12 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:46:41 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:46:41 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:46:10 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.4,StartTime:2020-02-05 13:46:12 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-05 13:46:41 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://9736f98c8382316082903f1d9a977f3e31a8a55415591031d6d63984c2013ac7}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 5 13:47:03.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5246" for this suite. 
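The deployment test being torn down above verifies proportional scaling: when a Deployment is resized mid-rollout, the controller splits the replica delta across its ReplicaSets in proportion to their current sizes. A rough sketch of that arithmetic (illustrative only — the real logic lives in the deployment controller's `getProportion`/`scale` code, and ties are broken differently there):

```python
def proportional_scale(rs_sizes, old_total, new_total):
    """Distribute a scale-up or scale-down across ReplicaSets roughly in
    proportion to their current size. Leftover replicas from integer
    rounding are handed to the largest ReplicaSets first."""
    delta = new_total - old_total
    # floor-proportional share of the delta for each ReplicaSet
    shares = [delta * s // old_total for s in rs_sizes]
    leftover = delta - sum(shares)
    # hand out the rounding remainder one replica at a time, largest RS first
    order = sorted(range(len(rs_sizes)), key=lambda i: -rs_sizes[i])
    for i in order[:abs(leftover)]:
        shares[i] += 1 if leftover > 0 else -1
    return [s + d for s, d in zip(rs_sizes, shares)]
```

For example, scaling two ReplicaSets of 5 and 5 from 10 to 15 total replicas yields 8 and 7 — each side grows in proportion, with the rounding leftover going to one of them.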
Feb 5 13:48:05.798: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 5 13:48:06.062: INFO: namespace deployment-5246 deletion completed in 1m1.021702174s
• [SLOW TEST:115.889 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 5 13:48:06.062: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Feb 5 13:48:06.171: INFO: Waiting up to 5m0s for pod "pod-c97d78e2-1d32-4ff4-ba25-1476b26c0c3b" in namespace "emptydir-851" to be "success or failure"
Feb 5 13:48:06.192: INFO: Pod "pod-c97d78e2-1d32-4ff4-ba25-1476b26c0c3b": Phase="Pending", Reason="", readiness=false. Elapsed: 21.108744ms
Feb 5 13:48:08.205: INFO: Pod "pod-c97d78e2-1d32-4ff4-ba25-1476b26c0c3b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034438817s
Feb 5 13:48:10.215: INFO: Pod "pod-c97d78e2-1d32-4ff4-ba25-1476b26c0c3b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043866456s
Feb 5 13:48:12.226: INFO: Pod "pod-c97d78e2-1d32-4ff4-ba25-1476b26c0c3b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054779135s
Feb 5 13:48:14.244: INFO: Pod "pod-c97d78e2-1d32-4ff4-ba25-1476b26c0c3b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.073464908s
Feb 5 13:48:16.253: INFO: Pod "pod-c97d78e2-1d32-4ff4-ba25-1476b26c0c3b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.082406394s
STEP: Saw pod success
Feb 5 13:48:16.253: INFO: Pod "pod-c97d78e2-1d32-4ff4-ba25-1476b26c0c3b" satisfied condition "success or failure"
Feb 5 13:48:16.259: INFO: Trying to get logs from node iruya-node pod pod-c97d78e2-1d32-4ff4-ba25-1476b26c0c3b container test-container:
STEP: delete the pod
Feb 5 13:48:16.371: INFO: Waiting for pod pod-c97d78e2-1d32-4ff4-ba25-1476b26c0c3b to disappear
Feb 5 13:48:16.509: INFO: Pod pod-c97d78e2-1d32-4ff4-ba25-1476b26c0c3b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 5 13:48:16.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-851" for this suite.
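The EmptyDir test above mounts a memory-backed (tmpfs) emptyDir and checks that the test file carries mode 0666. The permission check itself can be sketched offline with an ordinary temp file standing in for the volume (an illustration of the mode assertion only, not the e2e test's actual Go code):

```python
import os
import stat
import tempfile

def chmod_result(mode=0o666):
    """Create a throwaway file, chmod it, and return the observed permission
    bits -- the same kind of check the test container performs on the
    emptyDir-mounted file."""
    fd, path = tempfile.mkstemp()
    os.close(fd)
    try:
        os.chmod(path, mode)  # chmod is not affected by the process umask
        return stat.S_IMODE(os.stat(path).st_mode)
    finally:
        os.unlink(path)

print(oct(chmod_result()))  # 0o666
```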
Feb 5 13:48:22.549: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 5 13:48:22.662: INFO: namespace emptydir-851 deletion completed in 6.145287439s
• [SLOW TEST:16.600 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 5 13:48:22.662: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 5 13:48:22.756: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e4e04ee3-946b-4595-9c5c-a7fbe71c0e49" in namespace "downward-api-6927" to be "success or failure"
Feb 5 13:48:22.784: INFO: Pod "downwardapi-volume-e4e04ee3-946b-4595-9c5c-a7fbe71c0e49": Phase="Pending", Reason="", readiness=false. Elapsed: 27.296372ms
Feb 5 13:48:24.790: INFO: Pod "downwardapi-volume-e4e04ee3-946b-4595-9c5c-a7fbe71c0e49": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033371286s
Feb 5 13:48:26.794: INFO: Pod "downwardapi-volume-e4e04ee3-946b-4595-9c5c-a7fbe71c0e49": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038052734s
Feb 5 13:48:28.808: INFO: Pod "downwardapi-volume-e4e04ee3-946b-4595-9c5c-a7fbe71c0e49": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051793162s
Feb 5 13:48:30.835: INFO: Pod "downwardapi-volume-e4e04ee3-946b-4595-9c5c-a7fbe71c0e49": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.078564993s
STEP: Saw pod success
Feb 5 13:48:30.835: INFO: Pod "downwardapi-volume-e4e04ee3-946b-4595-9c5c-a7fbe71c0e49" satisfied condition "success or failure"
Feb 5 13:48:30.845: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-e4e04ee3-946b-4595-9c5c-a7fbe71c0e49 container client-container:
STEP: delete the pod
Feb 5 13:48:31.118: INFO: Waiting for pod downwardapi-volume-e4e04ee3-946b-4595-9c5c-a7fbe71c0e49 to disappear
Feb 5 13:48:31.122: INFO: Pod downwardapi-volume-e4e04ee3-946b-4595-9c5c-a7fbe71c0e49 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 5 13:48:31.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6927" for this suite.
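A note on the DefaultMode the test above checks: the Kubernetes API stores volume file modes as decimal int32 values, which is why the Pod dumps earlier in this log print `DefaultMode:*420` on the secret volumes — 420 decimal is 0644 octal, the default mode for secret and downward API volume files. A quick demonstration of the encoding:

```python
# 420 (decimal, as serialized in the API) == 0644 (octal file mode)
assert 420 == 0o644
assert int("644", 8) == 420
print(oct(420))  # 0o644
```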
Feb 5 13:48:37.160: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 5 13:48:37.288: INFO: namespace downward-api-6927 deletion completed in 6.161254112s
• [SLOW TEST:14.626 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 5 13:48:37.288: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-89adc3fd-7e77-4ec6-b983-60623e26a0a1
STEP: Creating secret with name s-test-opt-upd-a727208c-6f9f-4d6a-bb05-7bcd4c512c84
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-89adc3fd-7e77-4ec6-b983-60623e26a0a1
STEP: Updating secret s-test-opt-upd-a727208c-6f9f-4d6a-bb05-7bcd4c512c84
STEP: Creating secret with name s-test-opt-create-c8898ef3-15c6-4316-b2ce-9c768c0779ee
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 5 13:50:19.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9277" for this suite.
Feb 5 13:50:42.022: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 5 13:50:42.194: INFO: namespace projected-9277 deletion completed in 22.2342902s
• [SLOW TEST:124.906 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 5 13:50:42.194: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop simple daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Feb 5 13:50:42.366: INFO: Number of nodes with available pods: 0
Feb 5 13:50:42.366: INFO: Node iruya-node is running more than one daemon pod
Feb 5 13:50:43.381: INFO: Number of nodes with available pods: 0
Feb 5 13:50:43.381: INFO: Node iruya-node is running more than one daemon pod
Feb 5 13:50:45.030: INFO: Number of nodes with available pods: 0
Feb 5 13:50:45.030: INFO: Node iruya-node is running more than one daemon pod
Feb 5 13:50:45.387: INFO: Number of nodes with available pods: 0
Feb 5 13:50:45.387: INFO: Node iruya-node is running more than one daemon pod
Feb 5 13:50:46.381: INFO: Number of nodes with available pods: 0
Feb 5 13:50:46.381: INFO: Node iruya-node is running more than one daemon pod
Feb 5 13:50:49.206: INFO: Number of nodes with available pods: 0
Feb 5 13:50:49.207: INFO: Node iruya-node is running more than one daemon pod
Feb 5 13:50:50.592: INFO: Number of nodes with available pods: 0
Feb 5 13:50:50.592: INFO: Node iruya-node is running more than one daemon pod
Feb 5 13:50:51.389: INFO: Number of nodes with available pods: 0
Feb 5 13:50:51.389: INFO: Node iruya-node is running more than one daemon pod
Feb 5 13:50:52.380: INFO: Number of nodes with available pods: 1
Feb 5 13:50:52.380: INFO: Node iruya-node is running more than one daemon pod
Feb 5 13:50:53.383: INFO: Number of nodes with available pods: 2
Feb 5 13:50:53.383: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Feb 5 13:50:53.444: INFO: Number of nodes with available pods: 1
Feb 5 13:50:53.444: INFO: Node iruya-node is running more than one daemon pod
Feb 5 13:50:54.466: INFO: Number of nodes with available pods: 1
Feb 5 13:50:54.466: INFO: Node iruya-node is running more than one daemon pod
Feb 5 13:50:55.458: INFO: Number of nodes with available pods: 1
Feb 5 13:50:55.458: INFO: Node iruya-node is running more than one daemon pod
Feb 5 13:50:56.465: INFO: Number of nodes with available pods: 1
Feb 5 13:50:56.465: INFO: Node iruya-node is running more than one daemon pod
Feb 5 13:50:57.462: INFO: Number of nodes with available pods: 1
Feb 5 13:50:57.462: INFO: Node iruya-node is running more than one daemon pod
Feb 5 13:50:58.465: INFO: Number of nodes with available pods: 1
Feb 5 13:50:58.465: INFO: Node iruya-node is running more than one daemon pod
Feb 5 13:50:59.458: INFO: Number of nodes with available pods: 1
Feb 5 13:50:59.458: INFO: Node iruya-node is running more than one daemon pod
Feb 5 13:51:00.462: INFO: Number of nodes with available pods: 1
Feb 5 13:51:00.462: INFO: Node iruya-node is running more than one daemon pod
Feb 5 13:51:01.465: INFO: Number of nodes with available pods: 1
Feb 5 13:51:01.465: INFO: Node iruya-node is running more than one daemon pod
Feb 5 13:51:02.463: INFO: Number of nodes with available pods: 1
Feb 5 13:51:02.463: INFO: Node iruya-node is running more than one daemon pod
Feb 5 13:51:03.465: INFO: Number of nodes with available pods: 1
Feb 5 13:51:03.465: INFO: Node iruya-node is running more than one daemon pod
Feb 5 13:51:04.465: INFO: Number of nodes with available pods: 1
Feb 5 13:51:04.465: INFO: Node iruya-node is running more than one daemon pod
Feb 5 13:51:05.459: INFO: Number of nodes with available pods: 1
Feb 5 13:51:05.459: INFO: Node iruya-node is running more than one daemon pod
Feb 5 13:51:06.473: INFO: Number of nodes with available pods: 1
Feb 5 13:51:06.473: INFO: Node iruya-node is running more than one daemon pod
Feb 5 13:51:07.462: INFO: Number of nodes with available pods: 1
Feb 5 13:51:07.462: INFO: Node iruya-node is running more than one daemon pod
Feb 5 13:51:08.469: INFO: Number of nodes with available pods: 1
Feb 5 13:51:08.469: INFO: Node iruya-node is running more than one daemon pod
Feb 5 13:51:09.466: INFO: Number of nodes with available pods: 1
Feb 5 13:51:09.466: INFO: Node iruya-node is running more than one daemon pod
Feb 5 13:51:10.477: INFO: Number of nodes with available pods: 1
Feb 5 13:51:10.477: INFO: Node iruya-node is running more than one daemon pod
Feb 5 13:51:11.569: INFO: Number of nodes with available pods: 1
Feb 5 13:51:11.569: INFO: Node iruya-node is running more than one daemon pod
Feb 5 13:51:12.464: INFO: Number of nodes with available pods: 1
Feb 5 13:51:12.464: INFO: Node iruya-node is running more than one daemon pod
Feb 5 13:51:13.459: INFO: Number of nodes with available pods: 1
Feb 5 13:51:13.459: INFO: Node iruya-node is running more than one daemon pod
Feb 5 13:51:14.458: INFO: Number of nodes with available pods: 1
Feb 5 13:51:14.458: INFO: Node iruya-node is running more than one daemon pod
Feb 5 13:51:15.456: INFO: Number of nodes with available pods: 2
Feb 5 13:51:15.456: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1723, will wait for the garbage collector to delete the pods
Feb 5 13:51:15.535: INFO: Deleting DaemonSet.extensions daemon-set took: 21.333989ms
Feb 5 13:51:15.836: INFO: Terminating DaemonSet.extensions daemon-set pods took: 301.059241ms
Feb 5 13:51:27.943: INFO: Number of nodes with available pods: 0
Feb 5 13:51:27.943: INFO: Number of running nodes: 0, number of available pods: 0
Feb 5 13:51:27.946: INFO: daemonset:
{"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1723/daemonsets","resourceVersion":"23197745"},"items":null}
Feb 5 13:51:27.948: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1723/pods","resourceVersion":"23197745"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 5 13:51:27.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1723" for this suite.
Feb 5 13:51:33.996: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 5 13:51:34.144: INFO: namespace daemonsets-1723 deletion completed in 6.177941727s
• [SLOW TEST:51.951 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should run and stop simple daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 5 13:51:34.145: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-3122.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-3122.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3122.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-3122.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-3122.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3122.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 5 13:51:46.349: INFO: Unable to read wheezy_udp@PodARecord from pod dns-3122/dns-test-22482bbf-c0c1-4f8a-a59e-c2225702f7a4: the server could not find the requested resource (get pods dns-test-22482bbf-c0c1-4f8a-a59e-c2225702f7a4)
Feb 5 13:51:46.354: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-3122/dns-test-22482bbf-c0c1-4f8a-a59e-c2225702f7a4: the server could not find the requested resource (get pods dns-test-22482bbf-c0c1-4f8a-a59e-c2225702f7a4)
Feb 5 13:51:46.365: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-3122.svc.cluster.local from pod dns-3122/dns-test-22482bbf-c0c1-4f8a-a59e-c2225702f7a4: the server could not find the requested resource (get pods dns-test-22482bbf-c0c1-4f8a-a59e-c2225702f7a4)
Feb 5 13:51:46.374: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-3122/dns-test-22482bbf-c0c1-4f8a-a59e-c2225702f7a4: the server could not find the requested resource (get pods dns-test-22482bbf-c0c1-4f8a-a59e-c2225702f7a4)
Feb 5 13:51:46.381: INFO: Unable to read jessie_udp@PodARecord from pod dns-3122/dns-test-22482bbf-c0c1-4f8a-a59e-c2225702f7a4: the server could not find the requested resource (get pods dns-test-22482bbf-c0c1-4f8a-a59e-c2225702f7a4)
Feb 5 13:51:46.385: INFO: Unable to read jessie_tcp@PodARecord from pod dns-3122/dns-test-22482bbf-c0c1-4f8a-a59e-c2225702f7a4: the server could not find the requested resource (get pods dns-test-22482bbf-c0c1-4f8a-a59e-c2225702f7a4)
Feb 5 13:51:46.385: INFO: Lookups using dns-3122/dns-test-22482bbf-c0c1-4f8a-a59e-c2225702f7a4 failed for: [wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-1.dns-test-service.dns-3122.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]
Feb 5 13:51:51.463: INFO: DNS probes using dns-3122/dns-test-22482bbf-c0c1-4f8a-a59e-c2225702f7a4 succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 5 13:51:51.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3122" for this suite.
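The probe scripts in the DNS test above use awk to turn the pod's IP from `hostname -i` into its DNS A record name (dots become dashes, then `<namespace>.pod.<cluster-domain>` is appended). The same transformation, as a small sketch:

```python
def pod_a_record(pod_ip, namespace, cluster_domain="cluster.local"):
    """Build the pod A record name that the wheezy/jessie probe scripts
    construct with awk: 10.44.0.4 in namespace dns-3122 becomes
    10-44-0-4.dns-3122.pod.cluster.local."""
    return f"{pod_ip.replace('.', '-')}.{namespace}.pod.{cluster_domain}"

print(pod_a_record("10.44.0.4", "dns-3122"))  # 10-44-0-4.dns-3122.pod.cluster.local
```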
Feb 5 13:51:57.657: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 5 13:51:57.755: INFO: namespace dns-3122 deletion completed in 6.182469397s
• [SLOW TEST:23.610 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 5 13:51:57.755: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Feb 5 13:51:57.850: INFO: Waiting up to 5m0s for pod "pod-d795bb3d-9405-45e1-a040-984b31ac34f1" in namespace "emptydir-2601" to be "success or failure"
Feb 5 13:51:57.892: INFO: Pod "pod-d795bb3d-9405-45e1-a040-984b31ac34f1": Phase="Pending", Reason="", readiness=false. Elapsed: 42.437712ms
Feb 5 13:51:59.899: INFO: Pod "pod-d795bb3d-9405-45e1-a040-984b31ac34f1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049752063s
Feb 5 13:52:01.914: INFO: Pod "pod-d795bb3d-9405-45e1-a040-984b31ac34f1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064216953s
Feb 5 13:52:03.931: INFO: Pod "pod-d795bb3d-9405-45e1-a040-984b31ac34f1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.081650773s
Feb 5 13:52:05.967: INFO: Pod "pod-d795bb3d-9405-45e1-a040-984b31ac34f1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.116899006s
Feb 5 13:52:08.036: INFO: Pod "pod-d795bb3d-9405-45e1-a040-984b31ac34f1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.186285293s
STEP: Saw pod success
Feb 5 13:52:08.036: INFO: Pod "pod-d795bb3d-9405-45e1-a040-984b31ac34f1" satisfied condition "success or failure"
Feb 5 13:52:08.042: INFO: Trying to get logs from node iruya-node pod pod-d795bb3d-9405-45e1-a040-984b31ac34f1 container test-container:
STEP: delete the pod
Feb 5 13:52:08.088: INFO: Waiting for pod pod-d795bb3d-9405-45e1-a040-984b31ac34f1 to disappear
Feb 5 13:52:08.092: INFO: Pod pod-d795bb3d-9405-45e1-a040-984b31ac34f1 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 5 13:52:08.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2601" for this suite.
Feb 5 13:52:14.119: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 5 13:52:14.280: INFO: namespace emptydir-2601 deletion completed in 6.183846916s
• [SLOW TEST:16.525 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 5 13:52:14.280: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Feb 5 13:52:23.013: INFO: Successfully updated pod "labelsupdatec1b33d5f-9670-48d1-9e86-1fe62892da6c"
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 5 13:52:27.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8586" for this suite.
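The projected downwardAPI test above updates a pod's labels and waits for the change to show up in the mounted labels file. That file is written as one `key="value"` line per label; a rough sketch of the rendering (loosely modeled on kubelet's fieldpath map formatting, which sorts keys — details here are an assumption, not the exact kubelet code):

```python
def render_labels_file(labels):
    """Render a labels map roughly the way a downward API volume writes
    metadata.labels into a file: one key="value" line, sorted by key."""
    return "\n".join(f'{k}="{v}"' for k, v in sorted(labels.items()))

print(render_labels_file({"name": "nginx", "pod-template-hash": "7b8c6f4498"}))
# name="nginx"
# pod-template-hash="7b8c6f4498"
```

The test's assertion then amounts to re-reading this file after the label update and checking the new key/value pair appears.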
Feb 5 13:52:49.194: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 5 13:52:49.372: INFO: namespace projected-8586 deletion completed in 22.206461448s
• [SLOW TEST:35.092 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 5 13:52:49.373: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Feb 5 13:52:49.416: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 5 13:53:05.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-9574" for this suite.
Feb 5 13:53:11.522: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 5 13:53:11.649: INFO: namespace init-container-9574 deletion completed in 6.158709026s
• [SLOW TEST:22.276 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 5 13:53:11.649: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Feb 5 13:53:22.414: INFO: Successfully updated pod "labelsupdateaaef34d0-1046-4d42-a7fc-743ce2557776"
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 5 13:53:24.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1155" for this suite.
Feb 5 13:54:04.539: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 5 13:54:04.649: INFO: namespace downward-api-1155 deletion completed in 40.140432543s
• [SLOW TEST:53.000 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 5 13:54:04.650: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap that has name configmap-test-emptyKey-ff5c60b8-3962-4aef-94e4-2894bc3bcc8c
[AfterEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 5 13:54:04.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3230" for this suite.
Feb  5 13:54:10.776: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 13:54:10.850: INFO: namespace configmap-3230 deletion completed in 6.089099086s

• [SLOW TEST:6.201 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 13:54:10.851: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Feb  5 13:54:11.003: INFO: Waiting up to 5m0s for pod "pod-0e110753-9efa-434f-928a-c943bfc3fe3e" in namespace "emptydir-1944" to be "success or failure"
Feb  5 13:54:11.019: INFO: Pod "pod-0e110753-9efa-434f-928a-c943bfc3fe3e": Phase="Pending", Reason="", readiness=false. Elapsed: 15.488699ms
Feb  5 13:54:13.027: INFO: Pod "pod-0e110753-9efa-434f-928a-c943bfc3fe3e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024249648s
Feb  5 13:54:15.041: INFO: Pod "pod-0e110753-9efa-434f-928a-c943bfc3fe3e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037849029s
Feb  5 13:54:17.048: INFO: Pod "pod-0e110753-9efa-434f-928a-c943bfc3fe3e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04492653s
Feb  5 13:54:19.057: INFO: Pod "pod-0e110753-9efa-434f-928a-c943bfc3fe3e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.053775447s
Feb  5 13:54:21.071: INFO: Pod "pod-0e110753-9efa-434f-928a-c943bfc3fe3e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.06838798s
STEP: Saw pod success
Feb  5 13:54:21.072: INFO: Pod "pod-0e110753-9efa-434f-928a-c943bfc3fe3e" satisfied condition "success or failure"
Feb  5 13:54:21.077: INFO: Trying to get logs from node iruya-node pod pod-0e110753-9efa-434f-928a-c943bfc3fe3e container test-container: 
STEP: delete the pod
Feb  5 13:54:21.164: INFO: Waiting for pod pod-0e110753-9efa-434f-928a-c943bfc3fe3e to disappear
Feb  5 13:54:21.177: INFO: Pod pod-0e110753-9efa-434f-928a-c943bfc3fe3e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 13:54:21.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1944" for this suite.
Feb  5 13:54:27.297: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 13:54:27.447: INFO: namespace emptydir-1944 deletion completed in 6.263965827s

• [SLOW TEST:16.596 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 13:54:27.448: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  5 13:54:27.562: INFO: (0) /api/v1/nodes/iruya-node:10250/proxy/logs/:
alternatives.log
alternatives.l... (200; 17.563756ms)
Feb  5 13:54:27.599: INFO: (1) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 36.797737ms)
Feb  5 13:54:27.609: INFO: (2) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.980928ms)
Feb  5 13:54:27.618: INFO: (3) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.16912ms)
Feb  5 13:54:27.626: INFO: (4) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.793978ms)
Feb  5 13:54:27.648: INFO: (5) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 22.063707ms)
Feb  5 13:54:27.660: INFO: (6) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 12.278284ms)
Feb  5 13:54:27.669: INFO: (7) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.212441ms)
Feb  5 13:54:27.674: INFO: (8) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.744149ms)
Feb  5 13:54:27.680: INFO: (9) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.07696ms)
Feb  5 13:54:27.687: INFO: (10) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.693446ms)
Feb  5 13:54:27.694: INFO: (11) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.38424ms)
Feb  5 13:54:27.700: INFO: (12) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.698107ms)
Feb  5 13:54:27.705: INFO: (13) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.556404ms)
Feb  5 13:54:27.713: INFO: (14) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.534056ms)
Feb  5 13:54:27.721: INFO: (15) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.368825ms)
Feb  5 13:54:27.729: INFO: (16) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.106079ms)
Feb  5 13:54:27.738: INFO: (17) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.739757ms)
Feb  5 13:54:27.745: INFO: (18) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.254536ms)
Feb  5 13:54:27.752: INFO: (19) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.336685ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 13:54:27.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-5460" for this suite.
Feb  5 13:54:33.785: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 13:54:33.953: INFO: namespace proxy-5460 deletion completed in 6.195655237s

• [SLOW TEST:6.505 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
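Editor's note: the proxy test above issues 20 numbered GETs to the kubelet `logs/` endpoint through the apiserver proxy subresource and logs the HTTP status and per-attempt latency of each. A minimal Python sketch of that timed-attempts pattern, assuming a `request` callable that stands in for the real HTTP GET (all names here are illustrative, not part of the test framework):

```python
import time

def timed_attempts(request, attempts=20):
    """Invoke `request` repeatedly, recording the result and elapsed time
    of each attempt, like the numbered proxy GETs in the log above."""
    results = []
    for i in range(attempts):
        start = time.perf_counter()
        status = request()
        elapsed_ms = (time.perf_counter() - start) * 1000
        results.append((i, status, elapsed_ms))
    return results

if __name__ == "__main__":
    # Stub standing in for a GET against /api/v1/nodes/<node>:<port>/proxy/logs/
    stub = lambda: 200
    for i, status, elapsed_ms in timed_attempts(stub):
        print(f"({i}) status={status}; {elapsed_ms:.6f}ms")
```

The log's per-attempt latencies (roughly 5–37 ms here) come from exactly this kind of measure-around-the-call loop.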
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 13:54:33.954: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  5 13:54:34.108: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8760e94d-67da-4e21-98b4-269456b0d9d4" in namespace "projected-9489" to be "success or failure"
Feb  5 13:54:34.135: INFO: Pod "downwardapi-volume-8760e94d-67da-4e21-98b4-269456b0d9d4": Phase="Pending", Reason="", readiness=false. Elapsed: 27.024071ms
Feb  5 13:54:36.170: INFO: Pod "downwardapi-volume-8760e94d-67da-4e21-98b4-269456b0d9d4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062078208s
Feb  5 13:54:38.178: INFO: Pod "downwardapi-volume-8760e94d-67da-4e21-98b4-269456b0d9d4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069221437s
Feb  5 13:54:40.291: INFO: Pod "downwardapi-volume-8760e94d-67da-4e21-98b4-269456b0d9d4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.182902521s
Feb  5 13:54:42.297: INFO: Pod "downwardapi-volume-8760e94d-67da-4e21-98b4-269456b0d9d4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.188732533s
STEP: Saw pod success
Feb  5 13:54:42.297: INFO: Pod "downwardapi-volume-8760e94d-67da-4e21-98b4-269456b0d9d4" satisfied condition "success or failure"
Feb  5 13:54:42.300: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-8760e94d-67da-4e21-98b4-269456b0d9d4 container client-container: 
STEP: delete the pod
Feb  5 13:54:42.339: INFO: Waiting for pod downwardapi-volume-8760e94d-67da-4e21-98b4-269456b0d9d4 to disappear
Feb  5 13:54:42.344: INFO: Pod downwardapi-volume-8760e94d-67da-4e21-98b4-269456b0d9d4 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 13:54:42.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9489" for this suite.
Feb  5 13:54:48.363: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 13:54:48.520: INFO: namespace projected-9489 deletion completed in 6.172912771s

• [SLOW TEST:14.567 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
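Editor's note: the projected downward API test just above mounts a volume exposing the container's CPU limit; because the pod sets no limit, the reported value falls back to node allocatable CPU, which is what the test asserts. A minimal sketch of such a pod spec (image, names, and paths are illustrative, not taken from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # illustrative name
spec:
  containers:
  - name: client-container           # no resources.limits.cpu set on purpose
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: "cpu_limit"
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
          divisor: 1m
```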
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 13:54:48.521: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Feb  5 13:55:06.763: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  5 13:55:06.773: INFO: Pod pod-with-prestop-http-hook still exists
Feb  5 13:55:08.774: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  5 13:55:08.789: INFO: Pod pod-with-prestop-http-hook still exists
Feb  5 13:55:10.774: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  5 13:55:10.954: INFO: Pod pod-with-prestop-http-hook still exists
Feb  5 13:55:12.774: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  5 13:55:12.781: INFO: Pod pod-with-prestop-http-hook still exists
Feb  5 13:55:14.773: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  5 13:55:14.785: INFO: Pod pod-with-prestop-http-hook still exists
Feb  5 13:55:16.773: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  5 13:55:16.788: INFO: Pod pod-with-prestop-http-hook still exists
Feb  5 13:55:18.773: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  5 13:55:18.781: INFO: Pod pod-with-prestop-http-hook still exists
Feb  5 13:55:20.773: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  5 13:55:20.785: INFO: Pod pod-with-prestop-http-hook still exists
Feb  5 13:55:22.773: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  5 13:55:22.783: INFO: Pod pod-with-prestop-http-hook still exists
Feb  5 13:55:24.773: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  5 13:55:24.779: INFO: Pod pod-with-prestop-http-hook still exists
Feb  5 13:55:26.774: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  5 13:55:26.792: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 13:55:26.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-4052" for this suite.
Feb  5 13:55:51.424: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 13:55:51.666: INFO: namespace container-lifecycle-hook-4052 deletion completed in 24.774563051s

• [SLOW TEST:63.145 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
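Editor's note: the lifecycle-hook test above first creates a handler pod in BeforeEach, then a pod whose preStop hook issues an HTTP GET when the pod is deleted, and finally checks that the handler received the request. A minimal sketch of a pod with a preStop httpGet hook (image, path, and port are illustrative; the real test points the hook at the handler pod's address):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook
spec:
  containers:
  - name: pod-with-prestop-http-hook
    image: k8s.gcr.io/pause:3.1      # illustrative image
    lifecycle:
      preStop:
        httpGet:                     # fired by the kubelet on pod deletion
          path: /echo?msg=prestop    # illustrative handler path
          port: 8080                 # illustrative handler port
```

The long "Waiting for pod pod-with-prestop-http-hook to disappear" tail in the log is the deletion grace period during which this hook runs.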
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 13:55:51.666: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-9981
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating stateful set ss in namespace statefulset-9981
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-9981
Feb  5 13:55:51.903: INFO: Found 0 stateful pods, waiting for 1
Feb  5 13:56:01.922: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Feb  5 13:56:01.931: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9981 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  5 13:56:04.575: INFO: stderr: "I0205 13:56:04.140465    1803 log.go:172] (0xc00070c420) (0xc00072e640) Create stream\nI0205 13:56:04.140542    1803 log.go:172] (0xc00070c420) (0xc00072e640) Stream added, broadcasting: 1\nI0205 13:56:04.146469    1803 log.go:172] (0xc00070c420) Reply frame received for 1\nI0205 13:56:04.146511    1803 log.go:172] (0xc00070c420) (0xc0005ce320) Create stream\nI0205 13:56:04.146520    1803 log.go:172] (0xc00070c420) (0xc0005ce320) Stream added, broadcasting: 3\nI0205 13:56:04.147888    1803 log.go:172] (0xc00070c420) Reply frame received for 3\nI0205 13:56:04.147911    1803 log.go:172] (0xc00070c420) (0xc0006f6000) Create stream\nI0205 13:56:04.147918    1803 log.go:172] (0xc00070c420) (0xc0006f6000) Stream added, broadcasting: 5\nI0205 13:56:04.149169    1803 log.go:172] (0xc00070c420) Reply frame received for 5\nI0205 13:56:04.307941    1803 log.go:172] (0xc00070c420) Data frame received for 5\nI0205 13:56:04.308326    1803 log.go:172] (0xc0006f6000) (5) Data frame handling\nI0205 13:56:04.308368    1803 log.go:172] (0xc0006f6000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0205 13:56:04.346106    1803 log.go:172] (0xc00070c420) Data frame received for 3\nI0205 13:56:04.346164    1803 log.go:172] (0xc0005ce320) (3) Data frame handling\nI0205 13:56:04.346177    1803 log.go:172] (0xc0005ce320) (3) Data frame sent\nI0205 13:56:04.564824    1803 log.go:172] (0xc00070c420) Data frame received for 1\nI0205 13:56:04.564920    1803 log.go:172] (0xc00070c420) (0xc0005ce320) Stream removed, broadcasting: 3\nI0205 13:56:04.565028    1803 log.go:172] (0xc00072e640) (1) Data frame handling\nI0205 13:56:04.565064    1803 log.go:172] (0xc00072e640) (1) Data frame sent\nI0205 13:56:04.565146    1803 log.go:172] (0xc00070c420) (0xc0006f6000) Stream removed, broadcasting: 5\nI0205 13:56:04.565182    1803 log.go:172] (0xc00070c420) (0xc00072e640) Stream removed, broadcasting: 1\nI0205 13:56:04.565198    1803 log.go:172] 
(0xc00070c420) Go away received\nI0205 13:56:04.565815    1803 log.go:172] (0xc00070c420) (0xc00072e640) Stream removed, broadcasting: 1\nI0205 13:56:04.565831    1803 log.go:172] (0xc00070c420) (0xc0005ce320) Stream removed, broadcasting: 3\nI0205 13:56:04.565842    1803 log.go:172] (0xc00070c420) (0xc0006f6000) Stream removed, broadcasting: 5\n"
Feb  5 13:56:04.575: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  5 13:56:04.575: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  5 13:56:04.589: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Feb  5 13:56:14.606: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb  5 13:56:14.607: INFO: Waiting for statefulset status.replicas updated to 0
Feb  5 13:56:14.654: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Feb  5 13:56:14.654: INFO: ss-0  iruya-node  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:55:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:56:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:56:05 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:55:51 +0000 UTC  }]
Feb  5 13:56:14.655: INFO: ss-1              Pending         []
Feb  5 13:56:14.655: INFO: 
Feb  5 13:56:14.655: INFO: StatefulSet ss has not reached scale 3, at 2
Feb  5 13:56:16.485: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.967785031s
Feb  5 13:56:17.718: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.137456258s
Feb  5 13:56:18.734: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.90388835s
Feb  5 13:56:19.742: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.888009637s
Feb  5 13:56:21.395: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.880536322s
Feb  5 13:56:22.830: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.227516063s
Feb  5 13:56:23.885: INFO: Verifying statefulset ss doesn't scale past 3 for another 791.65648ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-9981
Feb  5 13:56:24.895: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9981 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  5 13:56:25.354: INFO: stderr: "I0205 13:56:25.116289    1833 log.go:172] (0xc0009dc370) (0xc0009ca640) Create stream\nI0205 13:56:25.116485    1833 log.go:172] (0xc0009dc370) (0xc0009ca640) Stream added, broadcasting: 1\nI0205 13:56:25.124032    1833 log.go:172] (0xc0009dc370) Reply frame received for 1\nI0205 13:56:25.124073    1833 log.go:172] (0xc0009dc370) (0xc00074a000) Create stream\nI0205 13:56:25.124086    1833 log.go:172] (0xc0009dc370) (0xc00074a000) Stream added, broadcasting: 3\nI0205 13:56:25.125508    1833 log.go:172] (0xc0009dc370) Reply frame received for 3\nI0205 13:56:25.125526    1833 log.go:172] (0xc0009dc370) (0xc0009ca6e0) Create stream\nI0205 13:56:25.125532    1833 log.go:172] (0xc0009dc370) (0xc0009ca6e0) Stream added, broadcasting: 5\nI0205 13:56:25.127687    1833 log.go:172] (0xc0009dc370) Reply frame received for 5\nI0205 13:56:25.234989    1833 log.go:172] (0xc0009dc370) Data frame received for 3\nI0205 13:56:25.235409    1833 log.go:172] (0xc00074a000) (3) Data frame handling\nI0205 13:56:25.235475    1833 log.go:172] (0xc00074a000) (3) Data frame sent\nI0205 13:56:25.236236    1833 log.go:172] (0xc0009dc370) Data frame received for 5\nI0205 13:56:25.236299    1833 log.go:172] (0xc0009ca6e0) (5) Data frame handling\nI0205 13:56:25.236353    1833 log.go:172] (0xc0009ca6e0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0205 13:56:25.343620    1833 log.go:172] (0xc0009dc370) Data frame received for 1\nI0205 13:56:25.344145    1833 log.go:172] (0xc0009ca640) (1) Data frame handling\nI0205 13:56:25.344222    1833 log.go:172] (0xc0009ca640) (1) Data frame sent\nI0205 13:56:25.344651    1833 log.go:172] (0xc0009dc370) (0xc0009ca6e0) Stream removed, broadcasting: 5\nI0205 13:56:25.344790    1833 log.go:172] (0xc0009dc370) (0xc0009ca640) Stream removed, broadcasting: 1\nI0205 13:56:25.345093    1833 log.go:172] (0xc0009dc370) (0xc00074a000) Stream removed, broadcasting: 3\nI0205 13:56:25.345246    1833 log.go:172] 
(0xc0009dc370) Go away received\nI0205 13:56:25.346161    1833 log.go:172] (0xc0009dc370) (0xc0009ca640) Stream removed, broadcasting: 1\nI0205 13:56:25.346306    1833 log.go:172] (0xc0009dc370) (0xc00074a000) Stream removed, broadcasting: 3\nI0205 13:56:25.346412    1833 log.go:172] (0xc0009dc370) (0xc0009ca6e0) Stream removed, broadcasting: 5\n"
Feb  5 13:56:25.355: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  5 13:56:25.355: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  5 13:56:25.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9981 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  5 13:56:25.751: INFO: stderr: "I0205 13:56:25.528582    1853 log.go:172] (0xc000a0e370) (0xc0008c6640) Create stream\nI0205 13:56:25.528680    1853 log.go:172] (0xc000a0e370) (0xc0008c6640) Stream added, broadcasting: 1\nI0205 13:56:25.531496    1853 log.go:172] (0xc000a0e370) Reply frame received for 1\nI0205 13:56:25.531535    1853 log.go:172] (0xc000a0e370) (0xc000a1c000) Create stream\nI0205 13:56:25.531545    1853 log.go:172] (0xc000a0e370) (0xc000a1c000) Stream added, broadcasting: 3\nI0205 13:56:25.532705    1853 log.go:172] (0xc000a0e370) Reply frame received for 3\nI0205 13:56:25.532729    1853 log.go:172] (0xc000a0e370) (0xc000a1c0a0) Create stream\nI0205 13:56:25.532740    1853 log.go:172] (0xc000a0e370) (0xc000a1c0a0) Stream added, broadcasting: 5\nI0205 13:56:25.534874    1853 log.go:172] (0xc000a0e370) Reply frame received for 5\nI0205 13:56:25.619669    1853 log.go:172] (0xc000a0e370) Data frame received for 5\nI0205 13:56:25.619697    1853 log.go:172] (0xc000a1c0a0) (5) Data frame handling\nI0205 13:56:25.619713    1853 log.go:172] (0xc000a1c0a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0205 13:56:25.621160    1853 log.go:172] (0xc000a0e370) Data frame received for 5\nI0205 13:56:25.621217    1853 log.go:172] (0xc000a1c0a0) (5) Data frame handling\nI0205 13:56:25.621229    1853 log.go:172] (0xc000a1c0a0) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0205 13:56:25.621243    1853 log.go:172] (0xc000a0e370) Data frame received for 3\nI0205 13:56:25.621251    1853 log.go:172] (0xc000a1c000) (3) Data frame handling\nI0205 13:56:25.621270    1853 log.go:172] (0xc000a1c000) (3) Data frame sent\nI0205 13:56:25.742789    1853 log.go:172] (0xc000a0e370) (0xc000a1c000) Stream removed, broadcasting: 3\nI0205 13:56:25.742976    1853 log.go:172] (0xc000a0e370) Data frame received for 1\nI0205 13:56:25.743009    1853 log.go:172] (0xc0008c6640) (1) Data frame handling\nI0205 
13:56:25.743020    1853 log.go:172] (0xc000a0e370) (0xc000a1c0a0) Stream removed, broadcasting: 5\nI0205 13:56:25.743139    1853 log.go:172] (0xc0008c6640) (1) Data frame sent\nI0205 13:56:25.743165    1853 log.go:172] (0xc000a0e370) (0xc0008c6640) Stream removed, broadcasting: 1\nI0205 13:56:25.743177    1853 log.go:172] (0xc000a0e370) Go away received\nI0205 13:56:25.743509    1853 log.go:172] (0xc000a0e370) (0xc0008c6640) Stream removed, broadcasting: 1\nI0205 13:56:25.743529    1853 log.go:172] (0xc000a0e370) (0xc000a1c000) Stream removed, broadcasting: 3\nI0205 13:56:25.743548    1853 log.go:172] (0xc000a0e370) (0xc000a1c0a0) Stream removed, broadcasting: 5\n"
Feb  5 13:56:25.752: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  5 13:56:25.752: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  5 13:56:25.752: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9981 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  5 13:56:26.228: INFO: stderr: "I0205 13:56:25.919240    1873 log.go:172] (0xc0009cc0b0) (0xc000968140) Create stream\nI0205 13:56:25.919347    1873 log.go:172] (0xc0009cc0b0) (0xc000968140) Stream added, broadcasting: 1\nI0205 13:56:25.925639    1873 log.go:172] (0xc0009cc0b0) Reply frame received for 1\nI0205 13:56:25.925670    1873 log.go:172] (0xc0009cc0b0) (0xc00059bcc0) Create stream\nI0205 13:56:25.925680    1873 log.go:172] (0xc0009cc0b0) (0xc00059bcc0) Stream added, broadcasting: 3\nI0205 13:56:25.927283    1873 log.go:172] (0xc0009cc0b0) Reply frame received for 3\nI0205 13:56:25.927342    1873 log.go:172] (0xc0009cc0b0) (0xc0009681e0) Create stream\nI0205 13:56:25.927353    1873 log.go:172] (0xc0009cc0b0) (0xc0009681e0) Stream added, broadcasting: 5\nI0205 13:56:25.930079    1873 log.go:172] (0xc0009cc0b0) Reply frame received for 5\nI0205 13:56:26.035954    1873 log.go:172] (0xc0009cc0b0) Data frame received for 3\nI0205 13:56:26.036087    1873 log.go:172] (0xc00059bcc0) (3) Data frame handling\nI0205 13:56:26.036112    1873 log.go:172] (0xc00059bcc0) (3) Data frame sent\nI0205 13:56:26.036157    1873 log.go:172] (0xc0009cc0b0) Data frame received for 5\nI0205 13:56:26.036207    1873 log.go:172] (0xc0009681e0) (5) Data frame handling\nI0205 13:56:26.036219    1873 log.go:172] (0xc0009681e0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0205 13:56:26.218991    1873 log.go:172] (0xc0009cc0b0) Data frame received for 1\nI0205 13:56:26.219123    1873 log.go:172] (0xc0009cc0b0) (0xc00059bcc0) Stream removed, broadcasting: 3\nI0205 13:56:26.219154    1873 log.go:172] (0xc000968140) (1) Data frame handling\nI0205 13:56:26.219165    1873 log.go:172] (0xc000968140) (1) Data frame sent\nI0205 13:56:26.219211    1873 log.go:172] (0xc0009cc0b0) (0xc0009681e0) Stream removed, broadcasting: 5\nI0205 13:56:26.219238    1873 log.go:172] (0xc0009cc0b0) (0xc000968140) 
Stream removed, broadcasting: 1\nI0205 13:56:26.219278    1873 log.go:172] (0xc0009cc0b0) Go away received\nI0205 13:56:26.219665    1873 log.go:172] (0xc0009cc0b0) (0xc000968140) Stream removed, broadcasting: 1\nI0205 13:56:26.219699    1873 log.go:172] (0xc0009cc0b0) (0xc00059bcc0) Stream removed, broadcasting: 3\nI0205 13:56:26.219711    1873 log.go:172] (0xc0009cc0b0) (0xc0009681e0) Stream removed, broadcasting: 5\n"
Feb  5 13:56:26.228: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  5 13:56:26.228: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
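The exec above is the test's readiness-probe toggle: nginx's probed `index.html` is moved out of (or back into) the web root, and the trailing `|| true` pins the exec's exit status to zero even when the file has already been moved, which is exactly what the `mv: can't rename ... No such file or directory` line followed by `+ true` in the stderr stream shows. A minimal local sketch of that idiom (hypothetical temporary paths, no cluster or `kubectl` required):

```shell
#!/bin/sh
# Sketch of the probe-toggling idiom used by the test. The directories here
# are local stand-ins for /usr/share/nginx/html and /tmp inside the pod;
# the real test runs the mv via `kubectl exec` in each StatefulSet pod.
webroot=$(mktemp -d)
stash=$(mktemp -d)
echo ok > "$webroot/index.html"

# First move succeeds: the probed file disappears, so an HTTP readiness
# check against it would start failing and the pod would go Ready=false.
mv -v "$webroot/index.html" "$stash/" || true

# Second move fails (the file is already gone), but `|| true` keeps the
# overall exit status at 0 so the calling framework does not abort.
mv -v "$webroot/index.html" "$stash/" || true
echo "exit=$?"   # prints exit=0

rm -rf "$webroot" "$stash"
```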

Feb  5 13:56:26.238: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  5 13:56:26.238: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  5 13:56:26.238: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Feb  5 13:56:26.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9981 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  5 13:56:26.796: INFO: stderr: "I0205 13:56:26.380133    1892 log.go:172] (0xc000116fd0) (0xc000608960) Create stream\nI0205 13:56:26.380234    1892 log.go:172] (0xc000116fd0) (0xc000608960) Stream added, broadcasting: 1\nI0205 13:56:26.386513    1892 log.go:172] (0xc000116fd0) Reply frame received for 1\nI0205 13:56:26.386541    1892 log.go:172] (0xc000116fd0) (0xc00083c000) Create stream\nI0205 13:56:26.386575    1892 log.go:172] (0xc000116fd0) (0xc00083c000) Stream added, broadcasting: 3\nI0205 13:56:26.388112    1892 log.go:172] (0xc000116fd0) Reply frame received for 3\nI0205 13:56:26.388151    1892 log.go:172] (0xc000116fd0) (0xc000608a00) Create stream\nI0205 13:56:26.388165    1892 log.go:172] (0xc000116fd0) (0xc000608a00) Stream added, broadcasting: 5\nI0205 13:56:26.389382    1892 log.go:172] (0xc000116fd0) Reply frame received for 5\nI0205 13:56:26.542050    1892 log.go:172] (0xc000116fd0) Data frame received for 3\nI0205 13:56:26.542200    1892 log.go:172] (0xc00083c000) (3) Data frame handling\nI0205 13:56:26.542279    1892 log.go:172] (0xc00083c000) (3) Data frame sent\nI0205 13:56:26.542314    1892 log.go:172] (0xc000116fd0) Data frame received for 5\nI0205 13:56:26.542336    1892 log.go:172] (0xc000608a00) (5) Data frame handling\nI0205 13:56:26.542361    1892 log.go:172] (0xc000608a00) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0205 13:56:26.787262    1892 log.go:172] (0xc000116fd0) (0xc00083c000) Stream removed, broadcasting: 3\nI0205 13:56:26.787575    1892 log.go:172] (0xc000116fd0) Data frame received for 1\nI0205 13:56:26.787591    1892 log.go:172] (0xc000608960) (1) Data frame handling\nI0205 13:56:26.787611    1892 log.go:172] (0xc000608960) (1) Data frame sent\nI0205 13:56:26.787617    1892 log.go:172] (0xc000116fd0) (0xc000608960) Stream removed, broadcasting: 1\nI0205 13:56:26.788000    1892 log.go:172] (0xc000116fd0) (0xc000608a00) Stream removed, broadcasting: 5\nI0205 13:56:26.788046    1892 log.go:172] (0xc000116fd0) (0xc000608960) Stream removed, broadcasting: 1\nI0205 13:56:26.788053    1892 log.go:172] (0xc000116fd0) (0xc00083c000) Stream removed, broadcasting: 3\nI0205 13:56:26.788057    1892 log.go:172] (0xc000116fd0) (0xc000608a00) Stream removed, broadcasting: 5\nI0205 13:56:26.788241    1892 log.go:172] (0xc000116fd0) Go away received\n"
Feb  5 13:56:26.797: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  5 13:56:26.797: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  5 13:56:26.797: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9981 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  5 13:56:27.290: INFO: stderr: "I0205 13:56:26.992289    1912 log.go:172] (0xc000948630) (0xc00097c8c0) Create stream\nI0205 13:56:26.992372    1912 log.go:172] (0xc000948630) (0xc00097c8c0) Stream added, broadcasting: 1\nI0205 13:56:27.002872    1912 log.go:172] (0xc000948630) Reply frame received for 1\nI0205 13:56:27.002950    1912 log.go:172] (0xc000948630) (0xc00097c000) Create stream\nI0205 13:56:27.002963    1912 log.go:172] (0xc000948630) (0xc00097c000) Stream added, broadcasting: 3\nI0205 13:56:27.004133    1912 log.go:172] (0xc000948630) Reply frame received for 3\nI0205 13:56:27.004160    1912 log.go:172] (0xc000948630) (0xc0004ecd20) Create stream\nI0205 13:56:27.004173    1912 log.go:172] (0xc000948630) (0xc0004ecd20) Stream added, broadcasting: 5\nI0205 13:56:27.005515    1912 log.go:172] (0xc000948630) Reply frame received for 5\nI0205 13:56:27.122680    1912 log.go:172] (0xc000948630) Data frame received for 5\nI0205 13:56:27.122828    1912 log.go:172] (0xc0004ecd20) (5) Data frame handling\nI0205 13:56:27.122904    1912 log.go:172] (0xc0004ecd20) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0205 13:56:27.196450    1912 log.go:172] (0xc000948630) Data frame received for 3\nI0205 13:56:27.196504    1912 log.go:172] (0xc00097c000) (3) Data frame handling\nI0205 13:56:27.196543    1912 log.go:172] (0xc00097c000) (3) Data frame sent\nI0205 13:56:27.283215    1912 log.go:172] (0xc000948630) (0xc00097c000) Stream removed, broadcasting: 3\nI0205 13:56:27.283300    1912 log.go:172] (0xc000948630) Data frame received for 1\nI0205 13:56:27.283311    1912 log.go:172] (0xc00097c8c0) (1) Data frame handling\nI0205 13:56:27.283320    1912 log.go:172] (0xc00097c8c0) (1) Data frame sent\nI0205 13:56:27.283327    1912 log.go:172] (0xc000948630) (0xc00097c8c0) Stream removed, broadcasting: 1\nI0205 13:56:27.283838    1912 log.go:172] (0xc000948630) (0xc0004ecd20) Stream removed, broadcasting: 5\nI0205 13:56:27.283924    1912 log.go:172] (0xc000948630) Go away received\nI0205 13:56:27.284070    1912 log.go:172] (0xc000948630) (0xc00097c8c0) Stream removed, broadcasting: 1\nI0205 13:56:27.284105    1912 log.go:172] (0xc000948630) (0xc00097c000) Stream removed, broadcasting: 3\nI0205 13:56:27.284116    1912 log.go:172] (0xc000948630) (0xc0004ecd20) Stream removed, broadcasting: 5\n"
Feb  5 13:56:27.290: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  5 13:56:27.290: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  5 13:56:27.290: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9981 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  5 13:56:27.744: INFO: stderr: "I0205 13:56:27.418294    1929 log.go:172] (0xc000a12160) (0xc0009f65a0) Create stream\nI0205 13:56:27.418405    1929 log.go:172] (0xc000a12160) (0xc0009f65a0) Stream added, broadcasting: 1\nI0205 13:56:27.423926    1929 log.go:172] (0xc000a12160) Reply frame received for 1\nI0205 13:56:27.423990    1929 log.go:172] (0xc000a12160) (0xc0006bc3c0) Create stream\nI0205 13:56:27.423997    1929 log.go:172] (0xc000a12160) (0xc0006bc3c0) Stream added, broadcasting: 3\nI0205 13:56:27.425553    1929 log.go:172] (0xc000a12160) Reply frame received for 3\nI0205 13:56:27.425583    1929 log.go:172] (0xc000a12160) (0xc000430000) Create stream\nI0205 13:56:27.425591    1929 log.go:172] (0xc000a12160) (0xc000430000) Stream added, broadcasting: 5\nI0205 13:56:27.427367    1929 log.go:172] (0xc000a12160) Reply frame received for 5\nI0205 13:56:27.520400    1929 log.go:172] (0xc000a12160) Data frame received for 5\nI0205 13:56:27.520738    1929 log.go:172] (0xc000430000) (5) Data frame handling\nI0205 13:56:27.520789    1929 log.go:172] (0xc000430000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0205 13:56:27.564084    1929 log.go:172] (0xc000a12160) Data frame received for 3\nI0205 13:56:27.564140    1929 log.go:172] (0xc0006bc3c0) (3) Data frame handling\nI0205 13:56:27.564159    1929 log.go:172] (0xc0006bc3c0) (3) Data frame sent\nI0205 13:56:27.732698    1929 log.go:172] (0xc000a12160) Data frame received for 1\nI0205 13:56:27.732834    1929 log.go:172] (0xc0009f65a0) (1) Data frame handling\nI0205 13:56:27.732853    1929 log.go:172] (0xc0009f65a0) (1) Data frame sent\nI0205 13:56:27.733238    1929 log.go:172] (0xc000a12160) (0xc0009f65a0) Stream removed, broadcasting: 1\nI0205 13:56:27.735236    1929 log.go:172] (0xc000a12160) (0xc0006bc3c0) Stream removed, broadcasting: 3\nI0205 13:56:27.735280    1929 log.go:172] (0xc000a12160) (0xc000430000) Stream removed, broadcasting: 5\nI0205 13:56:27.735328    1929 log.go:172] (0xc000a12160) (0xc0009f65a0) Stream removed, broadcasting: 1\nI0205 13:56:27.735346    1929 log.go:172] (0xc000a12160) (0xc0006bc3c0) Stream removed, broadcasting: 3\nI0205 13:56:27.735375    1929 log.go:172] (0xc000a12160) (0xc000430000) Stream removed, broadcasting: 5\n"
Feb  5 13:56:27.745: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  5 13:56:27.745: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  5 13:56:27.745: INFO: Waiting for statefulset status.replicas updated to 0
Feb  5 13:56:27.803: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Feb  5 13:56:37.825: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb  5 13:56:37.825: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Feb  5 13:56:37.825: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Feb  5 13:56:37.844: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb  5 13:56:37.844: INFO: ss-0  iruya-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:55:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:56:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:56:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:55:51 +0000 UTC  }]
Feb  5 13:56:37.844: INFO: ss-1  iruya-server-sfge57q7djm7  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:56:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:56:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:56:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:56:14 +0000 UTC  }]
Feb  5 13:56:37.844: INFO: ss-2  iruya-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:56:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:56:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:56:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:56:14 +0000 UTC  }]
Feb  5 13:56:37.844: INFO: 
Feb  5 13:56:37.844: INFO: StatefulSet ss has not reached scale 0, at 3
Feb  5 13:56:39.791: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb  5 13:56:39.791: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:55:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:56:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:56:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:55:51 +0000 UTC  }]
Feb  5 13:56:39.791: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:56:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:56:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:56:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:56:14 +0000 UTC  }]
Feb  5 13:56:39.791: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:56:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:56:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:56:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:56:14 +0000 UTC  }]
Feb  5 13:56:39.791: INFO: 
Feb  5 13:56:39.791: INFO: StatefulSet ss has not reached scale 0, at 3
Feb  5 13:56:40.806: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb  5 13:56:40.807: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:55:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:56:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:56:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:55:51 +0000 UTC  }]
Feb  5 13:56:40.807: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:56:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:56:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:56:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:56:14 +0000 UTC  }]
Feb  5 13:56:40.807: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:56:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:56:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:56:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:56:14 +0000 UTC  }]
Feb  5 13:56:40.807: INFO: 
Feb  5 13:56:40.807: INFO: StatefulSet ss has not reached scale 0, at 3
Feb  5 13:56:41.831: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb  5 13:56:41.831: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:55:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:56:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:56:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:55:51 +0000 UTC  }]
Feb  5 13:56:41.831: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:56:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:56:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:56:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:56:14 +0000 UTC  }]
Feb  5 13:56:41.831: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:56:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:56:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:56:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:56:14 +0000 UTC  }]
Feb  5 13:56:41.831: INFO: 
Feb  5 13:56:41.831: INFO: StatefulSet ss has not reached scale 0, at 3
Feb  5 13:56:42.847: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb  5 13:56:42.847: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:55:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:56:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:56:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:55:51 +0000 UTC  }]
Feb  5 13:56:42.847: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:56:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:56:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:56:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:56:14 +0000 UTC  }]
Feb  5 13:56:42.847: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:56:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:56:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:56:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:56:14 +0000 UTC  }]
Feb  5 13:56:42.848: INFO: 
Feb  5 13:56:42.848: INFO: StatefulSet ss has not reached scale 0, at 3
Feb  5 13:56:45.165: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb  5 13:56:45.166: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:55:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:56:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:56:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:55:51 +0000 UTC  }]
Feb  5 13:56:45.166: INFO: ss-1  iruya-server-sfge57q7djm7  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:56:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:56:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:56:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:56:14 +0000 UTC  }]
Feb  5 13:56:45.166: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:56:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:56:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:56:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:56:14 +0000 UTC  }]
Feb  5 13:56:45.166: INFO: 
Feb  5 13:56:45.166: INFO: StatefulSet ss has not reached scale 0, at 3
Feb  5 13:56:46.178: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb  5 13:56:46.178: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:55:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:56:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:56:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:55:51 +0000 UTC  }]
Feb  5 13:56:46.179: INFO: ss-1  iruya-server-sfge57q7djm7  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:56:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:56:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:56:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:56:14 +0000 UTC  }]
Feb  5 13:56:46.179: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:56:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:56:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:56:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:56:14 +0000 UTC  }]
Feb  5 13:56:46.179: INFO: 
Feb  5 13:56:46.179: INFO: StatefulSet ss has not reached scale 0, at 3
Feb  5 13:56:47.187: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb  5 13:56:47.187: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:55:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:56:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:56:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:55:51 +0000 UTC  }]
Feb  5 13:56:47.187: INFO: ss-1  iruya-server-sfge57q7djm7  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:56:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:56:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:56:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:56:14 +0000 UTC  }]
Feb  5 13:56:47.188: INFO: ss-2  iruya-node                 Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:56:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:56:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:56:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 13:56:14 +0000 UTC  }]
Feb  5 13:56:47.188: INFO: 
Feb  5 13:56:47.188: INFO: StatefulSet ss has not reached scale 0, at 3
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-9981
Feb  5 13:56:48.198: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9981 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  5 13:56:48.462: INFO: rc: 1
Feb  5 13:56:48.463: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9981 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc002935a10 exit status 1   true [0xc001a9c090 0xc001a9c0a8 0xc001a9c0c0] [0xc001a9c090 0xc001a9c0a8 0xc001a9c0c0] [0xc001a9c0a0 0xc001a9c0b8] [0xba6c50 0xba6c50] 0xc0027b10e0 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1
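Each failed attempt above is followed by a fixed 10s back-off before the exec is re-run; once the pod itself is deleted, the error simply changes from `container not found ("nginx")` to `pods "ss-0" not found`, and the loop keeps retrying until its overall timeout. The retry shape can be sketched in plain shell (a hypothetical stand-in command and a shortened sleep; the real framework implements this loop in Go around `kubectl exec`):

```shell
#!/bin/sh
# Retry-with-backoff sketch resembling the framework's RunHostCmd loop.
# `run_cmd` is a hypothetical stand-in for the kubectl exec call; here it
# fails twice and then succeeds so the loop terminates.
attempts=0
run_cmd() {
  attempts=$((attempts + 1))
  [ "$attempts" -ge 3 ]          # exit status 1 on the first two calls
}

until run_cmd; do
  sleep 0.1                      # the e2e framework waits 10s between retries
done
echo "succeeded after $attempts attempts"
```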
Feb  5 13:56:58.463: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9981 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  5 13:56:58.716: INFO: rc: 1
Feb  5 13:56:58.716: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9981 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc003069170 exit status 1   true [0xc0009a62a8 0xc0009a62e8 0xc0009a6330] [0xc0009a62a8 0xc0009a62e8 0xc0009a6330] [0xc0009a62d0 0xc0009a6328] [0xba6c50 0xba6c50] 0xc0030fe720 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb  5 13:57:08.716: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9981 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  5 13:57:08.849: INFO: rc: 1
Feb  5 13:57:08.849: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9981 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002935b30 exit status 1   true [0xc001a9c0d0 0xc001a9c100 0xc001a9c130] [0xc001a9c0d0 0xc001a9c100 0xc001a9c130] [0xc001a9c0f0 0xc001a9c120] [0xba6c50 0xba6c50] 0xc00304a000 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb  5 13:57:18.850: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9981 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  5 13:57:18.957: INFO: rc: 1
Feb  5 13:57:18.958: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9981 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002935c20 exit status 1   true [0xc001a9c140 0xc001a9c178 0xc001a9c1a8] [0xc001a9c140 0xc001a9c178 0xc001a9c1a8] [0xc001a9c160 0xc001a9c198] [0xba6c50 0xba6c50] 0xc00304a4e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb  5 13:57:28.959: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9981 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  5 13:57:29.171: INFO: rc: 1
Feb  5 13:57:29.171: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9981 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002935d10 exit status 1   true [0xc001a9c1c0 0xc001a9c1f8 0xc001a9c238] [0xc001a9c1c0 0xc001a9c1f8 0xc001a9c238] [0xc001a9c1e8 0xc001a9c218] [0xba6c50 0xba6c50] 0xc00304a8a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb  5 13:57:39.172: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9981 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  5 13:57:39.344: INFO: rc: 1
Feb  5 13:57:39.344: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9981 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc002847f80 exit status 1   true [0xc000b0d2c8 0xc000b0d338 0xc000b0d388] [0xc000b0d2c8 0xc000b0d338 0xc000b0d388] [0xc000b0d328 0xc000b0d348] [0xba6c50 0xba6c50] 0xc0021014a0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb  5 13:57:49.345: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9981 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  5 13:57:49.508: INFO: rc: 1
Feb  5 13:57:49.509: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9981 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc003069230 exit status 1   true [0xc0009a6340 0xc0009a63a0 0xc0009a63b8] [0xc0009a6340 0xc0009a63a0 0xc0009a63b8] [0xc0009a6390 0xc0009a63b0] [0xba6c50 0xba6c50] 0xc0030fea20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb  5 13:57:59.509: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9981 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  5 13:57:59.641: INFO: rc: 1
Feb  5 13:57:59.642: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9981 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0014ec6f0 exit status 1   true [0xc000bcd3f0 0xc000bcd478 0xc000bcd548] [0xc000bcd3f0 0xc000bcd478 0xc000bcd548] [0xc000bcd468 0xc000bcd4f0] [0xba6c50 0xba6c50] 0xc002731aa0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Feb  5 13:58:09.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9981 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  5 13:58:09.794: INFO: rc: 1
Feb  5 13:58:09.795: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9981 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001c700c0 exit status 1   true [0xc001212008 0xc001212020 0xc001212038] [0xc001212008 0xc001212020 0xc001212038] [0xc001212018 0xc001212030] [0xba6c50 0xba6c50] 0xc00264b440 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
[... the same kubectl exec attempt was retried every 10s and failed identically with 'Error from server (NotFound): pods "ss-0" not found' from 13:58:19 through 14:01:43; the repeated, near-identical log blocks are elided ...]
Feb  5 14:01:53.015: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9981 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  5 14:01:53.152: INFO: rc: 1
Feb  5 14:01:53.153: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: 
Feb  5 14:01:53.153: INFO: Scaling statefulset ss to 0
Feb  5 14:01:53.161: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Feb  5 14:01:53.163: INFO: Deleting all statefulset in ns statefulset-9981
Feb  5 14:01:53.164: INFO: Scaling statefulset ss to 0
Feb  5 14:01:53.172: INFO: Waiting for statefulset status.replicas updated to 0
Feb  5 14:01:53.174: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:01:53.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9981" for this suite.
Feb  5 14:01:59.254: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:01:59.345: INFO: namespace statefulset-9981 deletion completed in 6.127153823s

• [SLOW TEST:367.679 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:01:59.346: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Feb  5 14:01:59.457: INFO: Waiting up to 5m0s for pod "pod-cecdf27a-6ac3-40d2-948f-a04cf6521796" in namespace "emptydir-2464" to be "success or failure"
Feb  5 14:01:59.465: INFO: Pod "pod-cecdf27a-6ac3-40d2-948f-a04cf6521796": Phase="Pending", Reason="", readiness=false. Elapsed: 8.035778ms
Feb  5 14:02:01.471: INFO: Pod "pod-cecdf27a-6ac3-40d2-948f-a04cf6521796": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01457943s
Feb  5 14:02:03.478: INFO: Pod "pod-cecdf27a-6ac3-40d2-948f-a04cf6521796": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021050239s
Feb  5 14:02:05.488: INFO: Pod "pod-cecdf27a-6ac3-40d2-948f-a04cf6521796": Phase="Pending", Reason="", readiness=false. Elapsed: 6.031338856s
Feb  5 14:02:07.494: INFO: Pod "pod-cecdf27a-6ac3-40d2-948f-a04cf6521796": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.037345002s
STEP: Saw pod success
Feb  5 14:02:07.494: INFO: Pod "pod-cecdf27a-6ac3-40d2-948f-a04cf6521796" satisfied condition "success or failure"
Feb  5 14:02:07.498: INFO: Trying to get logs from node iruya-node pod pod-cecdf27a-6ac3-40d2-948f-a04cf6521796 container test-container: 
STEP: delete the pod
Feb  5 14:02:07.687: INFO: Waiting for pod pod-cecdf27a-6ac3-40d2-948f-a04cf6521796 to disappear
Feb  5 14:02:07.693: INFO: Pod pod-cecdf27a-6ac3-40d2-948f-a04cf6521796 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:02:07.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2464" for this suite.
Feb  5 14:02:13.730: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:02:13.890: INFO: namespace emptydir-2464 deletion completed in 6.189580168s

• [SLOW TEST:14.545 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:02:13.892: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating cluster-info
Feb  5 14:02:14.064: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Feb  5 14:02:14.192: INFO: stderr: ""
Feb  5 14:02:14.192: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:02:14.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7988" for this suite.
Feb  5 14:02:20.228: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:02:20.372: INFO: namespace kubectl-7988 deletion completed in 6.173863696s

• [SLOW TEST:6.481 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:02:20.373: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-34341641-749a-4c10-989c-6216a68293c9
STEP: Creating a pod to test consume configMaps
Feb  5 14:02:20.490: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2b6f368c-c9c4-4c15-9b91-b1b965a4c624" in namespace "projected-2382" to be "success or failure"
Feb  5 14:02:20.497: INFO: Pod "pod-projected-configmaps-2b6f368c-c9c4-4c15-9b91-b1b965a4c624": Phase="Pending", Reason="", readiness=false. Elapsed: 7.700379ms
Feb  5 14:02:22.516: INFO: Pod "pod-projected-configmaps-2b6f368c-c9c4-4c15-9b91-b1b965a4c624": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025838615s
Feb  5 14:02:24.526: INFO: Pod "pod-projected-configmaps-2b6f368c-c9c4-4c15-9b91-b1b965a4c624": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036430857s
Feb  5 14:02:26.549: INFO: Pod "pod-projected-configmaps-2b6f368c-c9c4-4c15-9b91-b1b965a4c624": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059025146s
Feb  5 14:02:28.567: INFO: Pod "pod-projected-configmaps-2b6f368c-c9c4-4c15-9b91-b1b965a4c624": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.077080773s
STEP: Saw pod success
Feb  5 14:02:28.567: INFO: Pod "pod-projected-configmaps-2b6f368c-c9c4-4c15-9b91-b1b965a4c624" satisfied condition "success or failure"
Feb  5 14:02:28.574: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-2b6f368c-c9c4-4c15-9b91-b1b965a4c624 container projected-configmap-volume-test: 
STEP: delete the pod
Feb  5 14:02:28.651: INFO: Waiting for pod pod-projected-configmaps-2b6f368c-c9c4-4c15-9b91-b1b965a4c624 to disappear
Feb  5 14:02:28.674: INFO: Pod pod-projected-configmaps-2b6f368c-c9c4-4c15-9b91-b1b965a4c624 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:02:28.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2382" for this suite.
Feb  5 14:02:34.715: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:02:34.854: INFO: namespace projected-2382 deletion completed in 6.174011449s

• [SLOW TEST:14.481 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:02:34.855: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:03:34.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-848" for this suite.
Feb  5 14:03:56.994: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:03:57.123: INFO: namespace container-probe-848 deletion completed in 22.147587226s

• [SLOW TEST:82.268 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:03:57.124: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Feb  5 14:03:57.223: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb  5 14:03:57.234: INFO: Waiting for terminating namespaces to be deleted...
Feb  5 14:03:57.237: INFO: 
Logging pods the kubelet thinks is on node iruya-node before test
Feb  5 14:03:57.247: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded)
Feb  5 14:03:57.247: INFO: 	Container kube-proxy ready: true, restart count 0
Feb  5 14:03:57.247: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Feb  5 14:03:57.247: INFO: 	Container weave ready: true, restart count 0
Feb  5 14:03:57.247: INFO: 	Container weave-npc ready: true, restart count 0
Feb  5 14:03:57.247: INFO: 
Logging pods the kubelet thinks is on node iruya-server-sfge57q7djm7 before test
Feb  5 14:03:57.258: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded)
Feb  5 14:03:57.258: INFO: 	Container kube-apiserver ready: true, restart count 0
Feb  5 14:03:57.258: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded)
Feb  5 14:03:57.258: INFO: 	Container kube-scheduler ready: true, restart count 13
Feb  5 14:03:57.258: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Feb  5 14:03:57.258: INFO: 	Container coredns ready: true, restart count 0
Feb  5 14:03:57.258: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded)
Feb  5 14:03:57.258: INFO: 	Container etcd ready: true, restart count 0
Feb  5 14:03:57.258: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Feb  5 14:03:57.258: INFO: 	Container weave ready: true, restart count 0
Feb  5 14:03:57.258: INFO: 	Container weave-npc ready: true, restart count 0
Feb  5 14:03:57.258: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Feb  5 14:03:57.258: INFO: 	Container coredns ready: true, restart count 0
Feb  5 14:03:57.258: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded)
Feb  5 14:03:57.258: INFO: 	Container kube-controller-manager ready: true, restart count 20
Feb  5 14:03:57.258: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded)
Feb  5 14:03:57.258: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: verifying the node has the label node iruya-node
STEP: verifying the node has the label node iruya-server-sfge57q7djm7
Feb  5 14:03:57.367: INFO: Pod coredns-5c98db65d4-bm4gs requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Feb  5 14:03:57.367: INFO: Pod coredns-5c98db65d4-xx8w8 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Feb  5 14:03:57.367: INFO: Pod etcd-iruya-server-sfge57q7djm7 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7
Feb  5 14:03:57.367: INFO: Pod kube-apiserver-iruya-server-sfge57q7djm7 requesting resource cpu=250m on Node iruya-server-sfge57q7djm7
Feb  5 14:03:57.367: INFO: Pod kube-controller-manager-iruya-server-sfge57q7djm7 requesting resource cpu=200m on Node iruya-server-sfge57q7djm7
Feb  5 14:03:57.367: INFO: Pod kube-proxy-58v95 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7
Feb  5 14:03:57.367: INFO: Pod kube-proxy-976zl requesting resource cpu=0m on Node iruya-node
Feb  5 14:03:57.367: INFO: Pod kube-scheduler-iruya-server-sfge57q7djm7 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Feb  5 14:03:57.367: INFO: Pod weave-net-bzl4d requesting resource cpu=20m on Node iruya-server-sfge57q7djm7
Feb  5 14:03:57.367: INFO: Pod weave-net-rlp57 requesting resource cpu=20m on Node iruya-node
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-34bb9e02-6f05-4738-9692-b75a5222680b.15f08693879c3cae], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8919/filler-pod-34bb9e02-6f05-4738-9692-b75a5222680b to iruya-server-sfge57q7djm7]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-34bb9e02-6f05-4738-9692-b75a5222680b.15f08694a6a842f0], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-34bb9e02-6f05-4738-9692-b75a5222680b.15f0869596dab081], Reason = [Created], Message = [Created container filler-pod-34bb9e02-6f05-4738-9692-b75a5222680b]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-34bb9e02-6f05-4738-9692-b75a5222680b.15f08695b45141fd], Reason = [Started], Message = [Started container filler-pod-34bb9e02-6f05-4738-9692-b75a5222680b]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-82fdd1ea-ae7c-44cc-98b7-1f034858789d.15f0869387a7a016], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8919/filler-pod-82fdd1ea-ae7c-44cc-98b7-1f034858789d to iruya-node]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-82fdd1ea-ae7c-44cc-98b7-1f034858789d.15f08694b80fbe76], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-82fdd1ea-ae7c-44cc-98b7-1f034858789d.15f0869586031889], Reason = [Created], Message = [Created container filler-pod-82fdd1ea-ae7c-44cc-98b7-1f034858789d]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-82fdd1ea-ae7c-44cc-98b7-1f034858789d.15f08695afd384f9], Reason = [Started], Message = [Started container filler-pod-82fdd1ea-ae7c-44cc-98b7-1f034858789d]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15f086965475fff2], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.]
STEP: removing the label node off the node iruya-node
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node iruya-server-sfge57q7djm7
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:04:10.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-8919" for this suite.
Feb  5 14:04:17.697: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:04:18.551: INFO: namespace sched-pred-8919 deletion completed in 7.990372845s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:21.427 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:04:18.552: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating replication controller svc-latency-rc in namespace svc-latency-4134
I0205 14:04:18.787532       8 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-4134, replica count: 1
I0205 14:04:19.838386       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0205 14:04:20.838696       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0205 14:04:21.839034       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0205 14:04:22.839332       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0205 14:04:23.839645       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0205 14:04:24.839955       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0205 14:04:25.840274       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0205 14:04:26.840557       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0205 14:04:27.840810       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb  5 14:04:27.979: INFO: Created: latency-svc-zcq4l
Feb  5 14:04:28.000: INFO: Got endpoints: latency-svc-zcq4l [59.138168ms]
Feb  5 14:04:28.053: INFO: Created: latency-svc-brqjx
Feb  5 14:04:28.153: INFO: Created: latency-svc-5sxzj
Feb  5 14:04:28.155: INFO: Got endpoints: latency-svc-brqjx [154.373411ms]
Feb  5 14:04:28.167: INFO: Got endpoints: latency-svc-5sxzj [166.827856ms]
Feb  5 14:04:28.194: INFO: Created: latency-svc-ssvwx
Feb  5 14:04:28.215: INFO: Got endpoints: latency-svc-ssvwx [214.511489ms]
Feb  5 14:04:28.240: INFO: Created: latency-svc-qtr4n
Feb  5 14:04:28.379: INFO: Got endpoints: latency-svc-qtr4n [379.504292ms]
Feb  5 14:04:28.383: INFO: Created: latency-svc-96lrh
Feb  5 14:04:28.403: INFO: Got endpoints: latency-svc-96lrh [402.689553ms]
Feb  5 14:04:28.449: INFO: Created: latency-svc-jfx8k
Feb  5 14:04:28.473: INFO: Got endpoints: latency-svc-jfx8k [472.126728ms]
Feb  5 14:04:28.658: INFO: Created: latency-svc-zkjj5
Feb  5 14:04:28.668: INFO: Got endpoints: latency-svc-zkjj5 [667.570933ms]
Feb  5 14:04:28.698: INFO: Created: latency-svc-8c4l9
Feb  5 14:04:28.714: INFO: Got endpoints: latency-svc-8c4l9 [713.656746ms]
Feb  5 14:04:28.747: INFO: Created: latency-svc-85p57
Feb  5 14:04:28.844: INFO: Got endpoints: latency-svc-85p57 [843.030811ms]
Feb  5 14:04:28.886: INFO: Created: latency-svc-2s75h
Feb  5 14:04:28.908: INFO: Got endpoints: latency-svc-2s75h [908.005377ms]
Feb  5 14:04:28.934: INFO: Created: latency-svc-rzwpq
Feb  5 14:04:28.940: INFO: Got endpoints: latency-svc-rzwpq [939.056208ms]
Feb  5 14:04:29.135: INFO: Created: latency-svc-bn4rj
Feb  5 14:04:29.151: INFO: Got endpoints: latency-svc-bn4rj [1.150084175s]
Feb  5 14:04:29.330: INFO: Created: latency-svc-v5kqx
Feb  5 14:04:29.339: INFO: Got endpoints: latency-svc-v5kqx [1.338068157s]
Feb  5 14:04:29.407: INFO: Created: latency-svc-h8rmg
Feb  5 14:04:29.409: INFO: Got endpoints: latency-svc-h8rmg [1.409258701s]
Feb  5 14:04:29.578: INFO: Created: latency-svc-59rzs
Feb  5 14:04:29.591: INFO: Got endpoints: latency-svc-59rzs [1.590492117s]
Feb  5 14:04:29.776: INFO: Created: latency-svc-l58lp
Feb  5 14:04:29.792: INFO: Got endpoints: latency-svc-l58lp [1.637484353s]
Feb  5 14:04:29.866: INFO: Created: latency-svc-q4psj
Feb  5 14:04:29.949: INFO: Got endpoints: latency-svc-q4psj [1.782057974s]
Feb  5 14:04:30.017: INFO: Created: latency-svc-gd4lb
Feb  5 14:04:30.037: INFO: Got endpoints: latency-svc-gd4lb [1.821278084s]
Feb  5 14:04:30.137: INFO: Created: latency-svc-7q9sx
Feb  5 14:04:30.367: INFO: Got endpoints: latency-svc-7q9sx [1.98729442s]
Feb  5 14:04:30.377: INFO: Created: latency-svc-qglxx
Feb  5 14:04:30.391: INFO: Got endpoints: latency-svc-qglxx [1.98793572s]
Feb  5 14:04:30.417: INFO: Created: latency-svc-c7l89
Feb  5 14:04:30.433: INFO: Got endpoints: latency-svc-c7l89 [1.960066989s]
Feb  5 14:04:30.554: INFO: Created: latency-svc-wrdmp
Feb  5 14:04:30.566: INFO: Got endpoints: latency-svc-wrdmp [1.897955397s]
Feb  5 14:04:30.629: INFO: Created: latency-svc-xdzx7
Feb  5 14:04:30.641: INFO: Got endpoints: latency-svc-xdzx7 [1.926927586s]
Feb  5 14:04:30.771: INFO: Created: latency-svc-hb25v
Feb  5 14:04:30.787: INFO: Got endpoints: latency-svc-hb25v [1.942904356s]
Feb  5 14:04:30.827: INFO: Created: latency-svc-rfqz2
Feb  5 14:04:30.830: INFO: Got endpoints: latency-svc-rfqz2 [1.921817898s]
Feb  5 14:04:30.952: INFO: Created: latency-svc-m478w
Feb  5 14:04:30.963: INFO: Got endpoints: latency-svc-m478w [2.02330385s]
Feb  5 14:04:31.008: INFO: Created: latency-svc-79tk5
Feb  5 14:04:31.017: INFO: Got endpoints: latency-svc-79tk5 [1.866120891s]
Feb  5 14:04:31.199: INFO: Created: latency-svc-s79kv
Feb  5 14:04:31.203: INFO: Got endpoints: latency-svc-s79kv [1.864044474s]
Feb  5 14:04:31.259: INFO: Created: latency-svc-8sxfc
Feb  5 14:04:31.269: INFO: Got endpoints: latency-svc-8sxfc [1.859401416s]
Feb  5 14:04:31.363: INFO: Created: latency-svc-xrnxh
Feb  5 14:04:31.370: INFO: Got endpoints: latency-svc-xrnxh [1.778920123s]
Feb  5 14:04:31.415: INFO: Created: latency-svc-rnznf
Feb  5 14:04:31.415: INFO: Got endpoints: latency-svc-rnznf [1.623006736s]
Feb  5 14:04:31.447: INFO: Created: latency-svc-ltkhr
Feb  5 14:04:31.526: INFO: Got endpoints: latency-svc-ltkhr [1.576074781s]
Feb  5 14:04:31.562: INFO: Created: latency-svc-vbwtm
Feb  5 14:04:31.576: INFO: Got endpoints: latency-svc-vbwtm [1.539267151s]
Feb  5 14:04:31.580: INFO: Created: latency-svc-ghktg
Feb  5 14:04:31.586: INFO: Got endpoints: latency-svc-ghktg [1.218735313s]
Feb  5 14:04:31.623: INFO: Created: latency-svc-rvhq8
Feb  5 14:04:31.724: INFO: Got endpoints: latency-svc-rvhq8 [1.332539456s]
Feb  5 14:04:31.749: INFO: Created: latency-svc-72djk
Feb  5 14:04:31.758: INFO: Got endpoints: latency-svc-72djk [1.325071123s]
Feb  5 14:04:31.798: INFO: Created: latency-svc-56flx
Feb  5 14:04:31.814: INFO: Got endpoints: latency-svc-56flx [1.24730723s]
Feb  5 14:04:31.919: INFO: Created: latency-svc-lq6fm
Feb  5 14:04:31.956: INFO: Got endpoints: latency-svc-lq6fm [1.314316941s]
Feb  5 14:04:31.993: INFO: Created: latency-svc-vgr6g
Feb  5 14:04:32.010: INFO: Got endpoints: latency-svc-vgr6g [1.223164697s]
Feb  5 14:04:32.127: INFO: Created: latency-svc-58ss2
Feb  5 14:04:32.143: INFO: Got endpoints: latency-svc-58ss2 [1.312274473s]
Feb  5 14:04:32.182: INFO: Created: latency-svc-4vhj6
Feb  5 14:04:32.198: INFO: Got endpoints: latency-svc-4vhj6 [1.234501482s]
Feb  5 14:04:32.294: INFO: Created: latency-svc-82t79
Feb  5 14:04:32.294: INFO: Got endpoints: latency-svc-82t79 [1.277508134s]
Feb  5 14:04:32.336: INFO: Created: latency-svc-jv2d8
Feb  5 14:04:32.497: INFO: Got endpoints: latency-svc-jv2d8 [1.293977009s]
Feb  5 14:04:32.519: INFO: Created: latency-svc-4f8v6
Feb  5 14:04:32.525: INFO: Got endpoints: latency-svc-4f8v6 [1.256038801s]
Feb  5 14:04:32.578: INFO: Created: latency-svc-nfczv
Feb  5 14:04:32.693: INFO: Got endpoints: latency-svc-nfczv [1.322726863s]
Feb  5 14:04:32.703: INFO: Created: latency-svc-h2znw
Feb  5 14:04:32.714: INFO: Got endpoints: latency-svc-h2znw [1.298313748s]
Feb  5 14:04:32.752: INFO: Created: latency-svc-jc4ph
Feb  5 14:04:32.767: INFO: Got endpoints: latency-svc-jc4ph [1.241322749s]
Feb  5 14:04:32.899: INFO: Created: latency-svc-xrgr7
Feb  5 14:04:32.910: INFO: Got endpoints: latency-svc-xrgr7 [1.333261747s]
Feb  5 14:04:32.957: INFO: Created: latency-svc-sxfgr
Feb  5 14:04:32.964: INFO: Got endpoints: latency-svc-sxfgr [1.377642449s]
Feb  5 14:04:33.054: INFO: Created: latency-svc-vsl8z
Feb  5 14:04:33.063: INFO: Got endpoints: latency-svc-vsl8z [1.338524916s]
Feb  5 14:04:33.094: INFO: Created: latency-svc-spqfk
Feb  5 14:04:33.098: INFO: Got endpoints: latency-svc-spqfk [1.339595633s]
Feb  5 14:04:33.233: INFO: Created: latency-svc-kbcp6
Feb  5 14:04:33.233: INFO: Got endpoints: latency-svc-kbcp6 [1.419180504s]
Feb  5 14:04:33.288: INFO: Created: latency-svc-nvdxl
Feb  5 14:04:33.305: INFO: Got endpoints: latency-svc-nvdxl [1.348988024s]
Feb  5 14:04:33.407: INFO: Created: latency-svc-77nc9
Feb  5 14:04:33.412: INFO: Got endpoints: latency-svc-77nc9 [1.402096655s]
Feb  5 14:04:33.481: INFO: Created: latency-svc-sqxgt
Feb  5 14:04:33.497: INFO: Got endpoints: latency-svc-sqxgt [1.354167627s]
Feb  5 14:04:33.570: INFO: Created: latency-svc-t27sg
Feb  5 14:04:33.575: INFO: Got endpoints: latency-svc-t27sg [1.376917785s]
Feb  5 14:04:33.623: INFO: Created: latency-svc-vzcjj
Feb  5 14:04:33.630: INFO: Got endpoints: latency-svc-vzcjj [1.335882065s]
Feb  5 14:04:33.669: INFO: Created: latency-svc-xrrvs
Feb  5 14:04:33.777: INFO: Got endpoints: latency-svc-xrrvs [1.279978361s]
Feb  5 14:04:33.823: INFO: Created: latency-svc-hkzgc
Feb  5 14:04:33.828: INFO: Got endpoints: latency-svc-hkzgc [1.303151164s]
Feb  5 14:04:34.001: INFO: Created: latency-svc-87zdh
Feb  5 14:04:34.014: INFO: Got endpoints: latency-svc-87zdh [1.3201638s]
Feb  5 14:04:34.212: INFO: Created: latency-svc-jrs2g
Feb  5 14:04:34.217: INFO: Got endpoints: latency-svc-jrs2g [1.502803714s]
Feb  5 14:04:34.269: INFO: Created: latency-svc-vt2pq
Feb  5 14:04:34.304: INFO: Got endpoints: latency-svc-vt2pq [1.536512889s]
Feb  5 14:04:34.309: INFO: Created: latency-svc-k2znq
Feb  5 14:04:34.507: INFO: Got endpoints: latency-svc-k2znq [1.597430387s]
Feb  5 14:04:34.510: INFO: Created: latency-svc-fl9w9
Feb  5 14:04:34.528: INFO: Got endpoints: latency-svc-fl9w9 [1.564060481s]
Feb  5 14:04:34.626: INFO: Created: latency-svc-m9lzg
Feb  5 14:04:34.675: INFO: Got endpoints: latency-svc-m9lzg [1.612289693s]
Feb  5 14:04:34.721: INFO: Created: latency-svc-t6rxx
Feb  5 14:04:34.791: INFO: Got endpoints: latency-svc-t6rxx [262.480904ms]
Feb  5 14:04:34.803: INFO: Created: latency-svc-5b6dj
Feb  5 14:04:34.818: INFO: Got endpoints: latency-svc-5b6dj [1.719759978s]
Feb  5 14:04:34.838: INFO: Created: latency-svc-vrkqr
Feb  5 14:04:34.854: INFO: Got endpoints: latency-svc-vrkqr [1.620372744s]
Feb  5 14:04:34.989: INFO: Created: latency-svc-ws852
Feb  5 14:04:34.994: INFO: Got endpoints: latency-svc-ws852 [1.688387055s]
Feb  5 14:04:35.024: INFO: Created: latency-svc-t2f7k
Feb  5 14:04:35.035: INFO: Got endpoints: latency-svc-t2f7k [1.622531684s]
Feb  5 14:04:35.075: INFO: Created: latency-svc-8lskw
Feb  5 14:04:35.075: INFO: Got endpoints: latency-svc-8lskw [1.577692403s]
Feb  5 14:04:35.250: INFO: Created: latency-svc-mqq5h
Feb  5 14:04:35.305: INFO: Got endpoints: latency-svc-mqq5h [1.729743018s]
Feb  5 14:04:35.365: INFO: Created: latency-svc-kq4rf
Feb  5 14:04:35.382: INFO: Got endpoints: latency-svc-kq4rf [1.751691634s]
Feb  5 14:04:35.449: INFO: Created: latency-svc-cglpm
Feb  5 14:04:35.457: INFO: Got endpoints: latency-svc-cglpm [1.67952766s]
Feb  5 14:04:35.531: INFO: Created: latency-svc-cszvd
Feb  5 14:04:35.622: INFO: Got endpoints: latency-svc-cszvd [1.793105285s]
Feb  5 14:04:35.630: INFO: Created: latency-svc-v5drs
Feb  5 14:04:35.645: INFO: Got endpoints: latency-svc-v5drs [1.630780441s]
Feb  5 14:04:35.687: INFO: Created: latency-svc-lsmf9
Feb  5 14:04:35.708: INFO: Got endpoints: latency-svc-lsmf9 [1.491461068s]
Feb  5 14:04:35.802: INFO: Created: latency-svc-5mq9k
Feb  5 14:04:35.817: INFO: Got endpoints: latency-svc-5mq9k [1.513306211s]
Feb  5 14:04:35.902: INFO: Created: latency-svc-l8mvv
Feb  5 14:04:35.962: INFO: Got endpoints: latency-svc-l8mvv [1.454216844s]
Feb  5 14:04:35.985: INFO: Created: latency-svc-bd9sm
Feb  5 14:04:35.999: INFO: Got endpoints: latency-svc-bd9sm [1.323347668s]
Feb  5 14:04:36.028: INFO: Created: latency-svc-8kcvn
Feb  5 14:04:36.128: INFO: Got endpoints: latency-svc-8kcvn [1.33695157s]
Feb  5 14:04:36.137: INFO: Created: latency-svc-mxxvq
Feb  5 14:04:36.142: INFO: Got endpoints: latency-svc-mxxvq [1.32455147s]
Feb  5 14:04:36.184: INFO: Created: latency-svc-tz9kz
Feb  5 14:04:36.196: INFO: Got endpoints: latency-svc-tz9kz [1.341689684s]
Feb  5 14:04:36.236: INFO: Created: latency-svc-ppx5d
Feb  5 14:04:36.351: INFO: Got endpoints: latency-svc-ppx5d [1.357214672s]
Feb  5 14:04:36.362: INFO: Created: latency-svc-bdgjt
Feb  5 14:04:36.378: INFO: Got endpoints: latency-svc-bdgjt [1.343189869s]
Feb  5 14:04:36.413: INFO: Created: latency-svc-mqg9z
Feb  5 14:04:36.452: INFO: Got endpoints: latency-svc-mqg9z [1.37671347s]
Feb  5 14:04:36.458: INFO: Created: latency-svc-57dpm
Feb  5 14:04:36.590: INFO: Got endpoints: latency-svc-57dpm [1.285420128s]
Feb  5 14:04:36.611: INFO: Created: latency-svc-vnmjv
Feb  5 14:04:36.611: INFO: Got endpoints: latency-svc-vnmjv [1.228651711s]
Feb  5 14:04:36.683: INFO: Created: latency-svc-h7t6b
Feb  5 14:04:36.814: INFO: Got endpoints: latency-svc-h7t6b [1.356286823s]
Feb  5 14:04:36.822: INFO: Created: latency-svc-2s2sm
Feb  5 14:04:36.828: INFO: Got endpoints: latency-svc-2s2sm [1.206245922s]
Feb  5 14:04:36.878: INFO: Created: latency-svc-rzxp2
Feb  5 14:04:36.893: INFO: Got endpoints: latency-svc-rzxp2 [1.24785372s]
Feb  5 14:04:37.001: INFO: Created: latency-svc-d4q7j
Feb  5 14:04:37.025: INFO: Got endpoints: latency-svc-d4q7j [1.316697581s]
Feb  5 14:04:37.031: INFO: Created: latency-svc-hv2rc
Feb  5 14:04:37.044: INFO: Got endpoints: latency-svc-hv2rc [1.227020601s]
Feb  5 14:04:37.076: INFO: Created: latency-svc-wjzpb
Feb  5 14:04:37.083: INFO: Got endpoints: latency-svc-wjzpb [1.120972094s]
Feb  5 14:04:37.204: INFO: Created: latency-svc-vsfd5
Feb  5 14:04:37.214: INFO: Got endpoints: latency-svc-vsfd5 [1.214412692s]
Feb  5 14:04:37.270: INFO: Created: latency-svc-kmz2l
Feb  5 14:04:37.286: INFO: Got endpoints: latency-svc-kmz2l [1.157659478s]
Feb  5 14:04:37.409: INFO: Created: latency-svc-xtlkg
Feb  5 14:04:37.420: INFO: Got endpoints: latency-svc-xtlkg [1.277228871s]
Feb  5 14:04:37.446: INFO: Created: latency-svc-w52sz
Feb  5 14:04:37.456: INFO: Got endpoints: latency-svc-w52sz [1.260170033s]
Feb  5 14:04:37.506: INFO: Created: latency-svc-8p8ln
Feb  5 14:04:37.642: INFO: Got endpoints: latency-svc-8p8ln [1.291199559s]
Feb  5 14:04:37.662: INFO: Created: latency-svc-n2bbg
Feb  5 14:04:37.665: INFO: Got endpoints: latency-svc-n2bbg [1.286149971s]
Feb  5 14:04:37.688: INFO: Created: latency-svc-x9ctm
Feb  5 14:04:37.693: INFO: Got endpoints: latency-svc-x9ctm [1.241381284s]
Feb  5 14:04:37.747: INFO: Created: latency-svc-7hmqb
Feb  5 14:04:37.870: INFO: Got endpoints: latency-svc-7hmqb [1.279951304s]
Feb  5 14:04:37.900: INFO: Created: latency-svc-w4qq6
Feb  5 14:04:37.912: INFO: Got endpoints: latency-svc-w4qq6 [1.300824253s]
Feb  5 14:04:38.052: INFO: Created: latency-svc-psfqz
Feb  5 14:04:38.057: INFO: Got endpoints: latency-svc-psfqz [1.243353456s]
Feb  5 14:04:38.120: INFO: Created: latency-svc-9cj6f
Feb  5 14:04:38.129: INFO: Got endpoints: latency-svc-9cj6f [1.300443516s]
Feb  5 14:04:38.275: INFO: Created: latency-svc-b4vwl
Feb  5 14:04:38.302: INFO: Got endpoints: latency-svc-b4vwl [1.408627851s]
Feb  5 14:04:38.358: INFO: Created: latency-svc-65bth
Feb  5 14:04:38.366: INFO: Got endpoints: latency-svc-65bth [1.34117814s]
Feb  5 14:04:38.472: INFO: Created: latency-svc-sblfs
Feb  5 14:04:38.508: INFO: Created: latency-svc-s4zcm
Feb  5 14:04:38.509: INFO: Got endpoints: latency-svc-sblfs [1.464217981s]
Feb  5 14:04:38.525: INFO: Got endpoints: latency-svc-s4zcm [1.441751868s]
Feb  5 14:04:38.756: INFO: Created: latency-svc-qd9jj
Feb  5 14:04:38.756: INFO: Got endpoints: latency-svc-qd9jj [1.542058182s]
Feb  5 14:04:38.796: INFO: Created: latency-svc-cnv2l
Feb  5 14:04:38.953: INFO: Created: latency-svc-nn76c
Feb  5 14:04:38.954: INFO: Got endpoints: latency-svc-cnv2l [1.667533881s]
Feb  5 14:04:38.971: INFO: Got endpoints: latency-svc-nn76c [1.551304568s]
Feb  5 14:04:39.007: INFO: Created: latency-svc-z4mgc
Feb  5 14:04:39.041: INFO: Got endpoints: latency-svc-z4mgc [1.584957675s]
Feb  5 14:04:39.043: INFO: Created: latency-svc-gsnmt
Feb  5 14:04:39.050: INFO: Got endpoints: latency-svc-gsnmt [1.407182554s]
Feb  5 14:04:39.252: INFO: Created: latency-svc-bjs5j
Feb  5 14:04:39.319: INFO: Got endpoints: latency-svc-bjs5j [1.653886689s]
Feb  5 14:04:39.339: INFO: Created: latency-svc-j9z28
Feb  5 14:04:39.444: INFO: Got endpoints: latency-svc-j9z28 [1.750674152s]
Feb  5 14:04:39.464: INFO: Created: latency-svc-sktjj
Feb  5 14:04:39.476: INFO: Got endpoints: latency-svc-sktjj [1.604845936s]
Feb  5 14:04:39.515: INFO: Created: latency-svc-6cxv5
Feb  5 14:04:39.525: INFO: Got endpoints: latency-svc-6cxv5 [1.612562224s]
Feb  5 14:04:39.673: INFO: Created: latency-svc-bglxf
Feb  5 14:04:39.685: INFO: Got endpoints: latency-svc-bglxf [1.627514783s]
Feb  5 14:04:39.719: INFO: Created: latency-svc-cg99j
Feb  5 14:04:39.726: INFO: Got endpoints: latency-svc-cg99j [1.596878745s]
Feb  5 14:04:39.764: INFO: Created: latency-svc-dmbsw
Feb  5 14:04:39.775: INFO: Got endpoints: latency-svc-dmbsw [1.472531757s]
Feb  5 14:04:39.883: INFO: Created: latency-svc-z22jm
Feb  5 14:04:39.900: INFO: Got endpoints: latency-svc-z22jm [1.533063408s]
Feb  5 14:04:39.937: INFO: Created: latency-svc-zrgb4
Feb  5 14:04:39.951: INFO: Got endpoints: latency-svc-zrgb4 [1.441365551s]
Feb  5 14:04:40.098: INFO: Created: latency-svc-2kgns
Feb  5 14:04:40.142: INFO: Got endpoints: latency-svc-2kgns [1.617231531s]
Feb  5 14:04:40.158: INFO: Created: latency-svc-gfnt9
Feb  5 14:04:40.160: INFO: Got endpoints: latency-svc-gfnt9 [1.403464557s]
Feb  5 14:04:40.188: INFO: Created: latency-svc-zq6dl
Feb  5 14:04:40.289: INFO: Got endpoints: latency-svc-zq6dl [1.334857905s]
Feb  5 14:04:40.319: INFO: Created: latency-svc-8gcb8
Feb  5 14:04:40.332: INFO: Got endpoints: latency-svc-8gcb8 [1.360427792s]
Feb  5 14:04:40.376: INFO: Created: latency-svc-znlpf
Feb  5 14:04:40.388: INFO: Got endpoints: latency-svc-znlpf [1.347034991s]
Feb  5 14:04:40.633: INFO: Created: latency-svc-p5gtr
Feb  5 14:04:40.678: INFO: Got endpoints: latency-svc-p5gtr [1.628702061s]
Feb  5 14:04:40.814: INFO: Created: latency-svc-fbvbq
Feb  5 14:04:40.839: INFO: Got endpoints: latency-svc-fbvbq [1.519763505s]
Feb  5 14:04:40.977: INFO: Created: latency-svc-776gs
Feb  5 14:04:41.001: INFO: Got endpoints: latency-svc-776gs [1.556005661s]
Feb  5 14:04:41.063: INFO: Created: latency-svc-bb6hf
Feb  5 14:04:41.144: INFO: Got endpoints: latency-svc-bb6hf [1.668302475s]
Feb  5 14:04:41.235: INFO: Created: latency-svc-lsnfn
Feb  5 14:04:41.367: INFO: Got endpoints: latency-svc-lsnfn [1.842712256s]
Feb  5 14:04:41.380: INFO: Created: latency-svc-gwt8j
Feb  5 14:04:41.401: INFO: Got endpoints: latency-svc-gwt8j [1.716018092s]
Feb  5 14:04:41.551: INFO: Created: latency-svc-jw254
Feb  5 14:04:41.559: INFO: Got endpoints: latency-svc-jw254 [1.833425302s]
Feb  5 14:04:41.736: INFO: Created: latency-svc-j2qfp
Feb  5 14:04:41.779: INFO: Created: latency-svc-djtmw
Feb  5 14:04:41.787: INFO: Got endpoints: latency-svc-j2qfp [2.011808135s]
Feb  5 14:04:41.791: INFO: Got endpoints: latency-svc-djtmw [1.891213826s]
Feb  5 14:04:41.839: INFO: Created: latency-svc-6phd8
Feb  5 14:04:41.959: INFO: Got endpoints: latency-svc-6phd8 [2.007949054s]
Feb  5 14:04:41.994: INFO: Created: latency-svc-gx7vz
Feb  5 14:04:41.997: INFO: Got endpoints: latency-svc-gx7vz [1.854380758s]
Feb  5 14:04:42.048: INFO: Created: latency-svc-rxb7x
Feb  5 14:04:42.051: INFO: Got endpoints: latency-svc-rxb7x [1.891432566s]
Feb  5 14:04:42.228: INFO: Created: latency-svc-99l4g
Feb  5 14:04:42.241: INFO: Got endpoints: latency-svc-99l4g [1.952071499s]
Feb  5 14:04:42.287: INFO: Created: latency-svc-sjvtc
Feb  5 14:04:42.293: INFO: Got endpoints: latency-svc-sjvtc [1.960633459s]
Feb  5 14:04:42.456: INFO: Created: latency-svc-2dddl
Feb  5 14:04:42.459: INFO: Got endpoints: latency-svc-2dddl [2.070588635s]
Feb  5 14:04:42.518: INFO: Created: latency-svc-zcqmx
Feb  5 14:04:42.665: INFO: Got endpoints: latency-svc-zcqmx [1.986169827s]
Feb  5 14:04:42.674: INFO: Created: latency-svc-8ds87
Feb  5 14:04:42.677: INFO: Got endpoints: latency-svc-8ds87 [1.83751195s]
Feb  5 14:04:42.738: INFO: Created: latency-svc-92z54
Feb  5 14:04:42.866: INFO: Got endpoints: latency-svc-92z54 [1.865324626s]
Feb  5 14:04:42.875: INFO: Created: latency-svc-jl6zv
Feb  5 14:04:42.887: INFO: Got endpoints: latency-svc-jl6zv [1.742224404s]
Feb  5 14:04:42.953: INFO: Created: latency-svc-fbgtb
Feb  5 14:04:43.030: INFO: Got endpoints: latency-svc-fbgtb [1.662333091s]
Feb  5 14:04:43.050: INFO: Created: latency-svc-m42fw
Feb  5 14:04:43.053: INFO: Got endpoints: latency-svc-m42fw [1.651936209s]
Feb  5 14:04:43.094: INFO: Created: latency-svc-8688c
Feb  5 14:04:43.115: INFO: Got endpoints: latency-svc-8688c [1.555365074s]
Feb  5 14:04:43.229: INFO: Created: latency-svc-zd9dx
Feb  5 14:04:43.237: INFO: Got endpoints: latency-svc-zd9dx [1.445434436s]
Feb  5 14:04:43.267: INFO: Created: latency-svc-7cj2q
Feb  5 14:04:43.280: INFO: Got endpoints: latency-svc-7cj2q [1.493822928s]
Feb  5 14:04:43.312: INFO: Created: latency-svc-2rpts
Feb  5 14:04:43.323: INFO: Got endpoints: latency-svc-2rpts [1.363916887s]
Feb  5 14:04:43.454: INFO: Created: latency-svc-gckz5
Feb  5 14:04:43.465: INFO: Got endpoints: latency-svc-gckz5 [1.467214543s]
Feb  5 14:04:43.529: INFO: Created: latency-svc-c9b5w
Feb  5 14:04:43.616: INFO: Got endpoints: latency-svc-c9b5w [1.563970379s]
Feb  5 14:04:43.627: INFO: Created: latency-svc-bhr5v
Feb  5 14:04:43.630: INFO: Got endpoints: latency-svc-bhr5v [1.389061299s]
Feb  5 14:04:43.682: INFO: Created: latency-svc-kvfxw
Feb  5 14:04:43.799: INFO: Got endpoints: latency-svc-kvfxw [1.505919434s]
Feb  5 14:04:43.799: INFO: Created: latency-svc-zggm4
Feb  5 14:04:43.815: INFO: Got endpoints: latency-svc-zggm4 [1.355894839s]
Feb  5 14:04:43.892: INFO: Created: latency-svc-95ksw
Feb  5 14:04:43.894: INFO: Got endpoints: latency-svc-95ksw [1.228966655s]
Feb  5 14:04:44.023: INFO: Created: latency-svc-xt8zz
Feb  5 14:04:44.062: INFO: Got endpoints: latency-svc-xt8zz [1.385261052s]
Feb  5 14:04:44.072: INFO: Created: latency-svc-5qltl
Feb  5 14:04:44.092: INFO: Got endpoints: latency-svc-5qltl [1.225179836s]
Feb  5 14:04:44.198: INFO: Created: latency-svc-2rpt8
Feb  5 14:04:44.232: INFO: Got endpoints: latency-svc-2rpt8 [1.345273901s]
Feb  5 14:04:44.241: INFO: Created: latency-svc-zdpb7
Feb  5 14:04:44.266: INFO: Got endpoints: latency-svc-zdpb7 [1.235830956s]
Feb  5 14:04:44.276: INFO: Created: latency-svc-752pz
Feb  5 14:04:44.362: INFO: Got endpoints: latency-svc-752pz [1.308291217s]
Feb  5 14:04:44.382: INFO: Created: latency-svc-9h54h
Feb  5 14:04:44.387: INFO: Got endpoints: latency-svc-9h54h [1.272436169s]
Feb  5 14:04:44.424: INFO: Created: latency-svc-ncfk2
Feb  5 14:04:44.429: INFO: Got endpoints: latency-svc-ncfk2 [1.191478509s]
Feb  5 14:04:44.466: INFO: Created: latency-svc-7vnhv
Feb  5 14:04:44.559: INFO: Got endpoints: latency-svc-7vnhv [1.278741131s]
Feb  5 14:04:44.575: INFO: Created: latency-svc-shrv6
Feb  5 14:04:44.635: INFO: Got endpoints: latency-svc-shrv6 [1.312089931s]
Feb  5 14:04:44.726: INFO: Created: latency-svc-8xls2
Feb  5 14:04:44.736: INFO: Got endpoints: latency-svc-8xls2 [1.271135563s]
Feb  5 14:04:44.779: INFO: Created: latency-svc-vghlf
Feb  5 14:04:44.787: INFO: Got endpoints: latency-svc-vghlf [1.171352924s]
Feb  5 14:04:44.820: INFO: Created: latency-svc-86p5f
Feb  5 14:04:44.914: INFO: Got endpoints: latency-svc-86p5f [1.284092974s]
Feb  5 14:04:44.920: INFO: Created: latency-svc-rmjvx
Feb  5 14:04:44.938: INFO: Got endpoints: latency-svc-rmjvx [1.138770205s]
Feb  5 14:04:44.983: INFO: Created: latency-svc-t5c5w
Feb  5 14:04:45.071: INFO: Got endpoints: latency-svc-t5c5w [1.255909262s]
Feb  5 14:04:45.073: INFO: Created: latency-svc-7d5j9
Feb  5 14:04:45.084: INFO: Got endpoints: latency-svc-7d5j9 [1.18958324s]
Feb  5 14:04:45.117: INFO: Created: latency-svc-sjg8l
Feb  5 14:04:45.134: INFO: Got endpoints: latency-svc-sjg8l [1.07201198s]
Feb  5 14:04:45.278: INFO: Created: latency-svc-k2fxj
Feb  5 14:04:45.287: INFO: Got endpoints: latency-svc-k2fxj [1.195068726s]
Feb  5 14:04:45.344: INFO: Created: latency-svc-h5bmf
Feb  5 14:04:45.349: INFO: Got endpoints: latency-svc-h5bmf [1.115747507s]
Feb  5 14:04:45.511: INFO: Created: latency-svc-4ztdn
Feb  5 14:04:45.532: INFO: Got endpoints: latency-svc-4ztdn [1.265340844s]
Feb  5 14:04:45.578: INFO: Created: latency-svc-5t6zb
Feb  5 14:04:45.593: INFO: Got endpoints: latency-svc-5t6zb [1.231371854s]
Feb  5 14:04:45.738: INFO: Created: latency-svc-j4n99
Feb  5 14:04:45.749: INFO: Got endpoints: latency-svc-j4n99 [1.361575251s]
Feb  5 14:04:45.952: INFO: Created: latency-svc-wdvzz
Feb  5 14:04:45.964: INFO: Got endpoints: latency-svc-wdvzz [1.534989509s]
Feb  5 14:04:46.014: INFO: Created: latency-svc-stw9p
Feb  5 14:04:46.014: INFO: Got endpoints: latency-svc-stw9p [1.454676555s]
Feb  5 14:04:46.118: INFO: Created: latency-svc-c8l4v
Feb  5 14:04:46.122: INFO: Got endpoints: latency-svc-c8l4v [1.48591199s]
Feb  5 14:04:46.166: INFO: Created: latency-svc-849zq
Feb  5 14:04:46.175: INFO: Got endpoints: latency-svc-849zq [1.438718557s]
Feb  5 14:04:46.202: INFO: Created: latency-svc-crmsb
Feb  5 14:04:46.205: INFO: Got endpoints: latency-svc-crmsb [1.417422583s]
Feb  5 14:04:46.325: INFO: Created: latency-svc-kzb5h
Feb  5 14:04:46.331: INFO: Got endpoints: latency-svc-kzb5h [1.416070527s]
Feb  5 14:04:46.380: INFO: Created: latency-svc-v5fbd
Feb  5 14:04:46.385: INFO: Got endpoints: latency-svc-v5fbd [1.446511024s]
Feb  5 14:04:46.484: INFO: Created: latency-svc-2n2gb
Feb  5 14:04:46.511: INFO: Created: latency-svc-fthpn
Feb  5 14:04:46.513: INFO: Got endpoints: latency-svc-2n2gb [1.442092108s]
Feb  5 14:04:46.536: INFO: Got endpoints: latency-svc-fthpn [1.451309182s]
Feb  5 14:04:46.625: INFO: Created: latency-svc-kzqx6
Feb  5 14:04:46.630: INFO: Got endpoints: latency-svc-kzqx6 [1.495802748s]
Feb  5 14:04:46.672: INFO: Created: latency-svc-c7qnm
Feb  5 14:04:46.688: INFO: Got endpoints: latency-svc-c7qnm [1.401088653s]
Feb  5 14:04:46.801: INFO: Created: latency-svc-tv2xv
Feb  5 14:04:46.804: INFO: Got endpoints: latency-svc-tv2xv [1.455186356s]
Feb  5 14:04:46.858: INFO: Created: latency-svc-sbjnc
Feb  5 14:04:46.865: INFO: Got endpoints: latency-svc-sbjnc [1.333668269s]
Feb  5 14:04:46.891: INFO: Created: latency-svc-n2p4g
Feb  5 14:04:46.971: INFO: Got endpoints: latency-svc-n2p4g [1.377764281s]
Feb  5 14:04:46.997: INFO: Created: latency-svc-6c9p4
Feb  5 14:04:46.997: INFO: Got endpoints: latency-svc-6c9p4 [1.247797138s]
Feb  5 14:04:47.023: INFO: Created: latency-svc-bdwzh
Feb  5 14:04:47.031: INFO: Got endpoints: latency-svc-bdwzh [1.067034945s]
Feb  5 14:04:47.129: INFO: Created: latency-svc-ksql6
Feb  5 14:04:47.136: INFO: Got endpoints: latency-svc-ksql6 [1.121339839s]
Feb  5 14:04:47.184: INFO: Created: latency-svc-6nwtz
Feb  5 14:04:47.197: INFO: Got endpoints: latency-svc-6nwtz [1.075473554s]
Feb  5 14:04:47.395: INFO: Created: latency-svc-frzqx
Feb  5 14:04:47.407: INFO: Got endpoints: latency-svc-frzqx [1.231982175s]
Feb  5 14:04:47.454: INFO: Created: latency-svc-7b4bq
Feb  5 14:04:47.462: INFO: Got endpoints: latency-svc-7b4bq [1.257509365s]
Feb  5 14:04:47.463: INFO: Latencies: [154.373411ms 166.827856ms 214.511489ms 262.480904ms 379.504292ms 402.689553ms 472.126728ms 667.570933ms 713.656746ms 843.030811ms 908.005377ms 939.056208ms 1.067034945s 1.07201198s 1.075473554s 1.115747507s 1.120972094s 1.121339839s 1.138770205s 1.150084175s 1.157659478s 1.171352924s 1.18958324s 1.191478509s 1.195068726s 1.206245922s 1.214412692s 1.218735313s 1.223164697s 1.225179836s 1.227020601s 1.228651711s 1.228966655s 1.231371854s 1.231982175s 1.234501482s 1.235830956s 1.241322749s 1.241381284s 1.243353456s 1.24730723s 1.247797138s 1.24785372s 1.255909262s 1.256038801s 1.257509365s 1.260170033s 1.265340844s 1.271135563s 1.272436169s 1.277228871s 1.277508134s 1.278741131s 1.279951304s 1.279978361s 1.284092974s 1.285420128s 1.286149971s 1.291199559s 1.293977009s 1.298313748s 1.300443516s 1.300824253s 1.303151164s 1.308291217s 1.312089931s 1.312274473s 1.314316941s 1.316697581s 1.3201638s 1.322726863s 1.323347668s 1.32455147s 1.325071123s 1.332539456s 1.333261747s 1.333668269s 1.334857905s 1.335882065s 1.33695157s 1.338068157s 1.338524916s 1.339595633s 1.34117814s 1.341689684s 1.343189869s 1.345273901s 1.347034991s 1.348988024s 1.354167627s 1.355894839s 1.356286823s 1.357214672s 1.360427792s 1.361575251s 1.363916887s 1.37671347s 1.376917785s 1.377642449s 1.377764281s 1.385261052s 1.389061299s 1.401088653s 1.402096655s 1.403464557s 1.407182554s 1.408627851s 1.409258701s 1.416070527s 1.417422583s 1.419180504s 1.438718557s 1.441365551s 1.441751868s 1.442092108s 1.445434436s 1.446511024s 1.451309182s 1.454216844s 1.454676555s 1.455186356s 1.464217981s 1.467214543s 1.472531757s 1.48591199s 1.491461068s 1.493822928s 1.495802748s 1.502803714s 1.505919434s 1.513306211s 1.519763505s 1.533063408s 1.534989509s 1.536512889s 1.539267151s 1.542058182s 1.551304568s 1.555365074s 1.556005661s 1.563970379s 1.564060481s 1.576074781s 1.577692403s 1.584957675s 1.590492117s 1.596878745s 1.597430387s 1.604845936s 1.612289693s 1.612562224s 
1.617231531s 1.620372744s 1.622531684s 1.623006736s 1.627514783s 1.628702061s 1.630780441s 1.637484353s 1.651936209s 1.653886689s 1.662333091s 1.667533881s 1.668302475s 1.67952766s 1.688387055s 1.716018092s 1.719759978s 1.729743018s 1.742224404s 1.750674152s 1.751691634s 1.778920123s 1.782057974s 1.793105285s 1.821278084s 1.833425302s 1.83751195s 1.842712256s 1.854380758s 1.859401416s 1.864044474s 1.865324626s 1.866120891s 1.891213826s 1.891432566s 1.897955397s 1.921817898s 1.926927586s 1.942904356s 1.952071499s 1.960066989s 1.960633459s 1.986169827s 1.98729442s 1.98793572s 2.007949054s 2.011808135s 2.02330385s 2.070588635s]
Feb  5 14:04:47.463: INFO: 50 %ile: 1.385261052s
Feb  5 14:04:47.463: INFO: 90 %ile: 1.859401416s
Feb  5 14:04:47.463: INFO: 99 %ile: 2.02330385s
Feb  5 14:04:47.463: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:04:47.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-4134" for this suite.
Feb  5 14:05:23.520: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:05:23.700: INFO: namespace svc-latency-4134 deletion completed in 36.220207744s

• [SLOW TEST:65.149 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
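The 50/90/99 %ile figures reported above are drawn from the sorted list of 200 latency samples. The selection can be sketched as nearest-rank indexing into the sorted list (an assumption; the e2e framework's exact rounding may differ):

```python
def percentile(sorted_samples, p):
    # nearest-rank selection into an already-sorted sample list;
    # the exact index rounding the e2e framework uses is an assumption here
    idx = int(len(sorted_samples) * p / 100)
    return sorted_samples[min(idx, len(sorted_samples) - 1)]

# with 200 samples, the 50 %ile is roughly the element at index 100
```

With this indexing, the 50 %ile of the run's 200 samples lands near the middle of the sorted list, matching the reported 1.385261052s sitting mid-array.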
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:05:23.700: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  5 14:05:23.888: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c4167f7a-1ccd-4a75-a507-b83711dd34f5" in namespace "downward-api-5280" to be "success or failure"
Feb  5 14:05:23.936: INFO: Pod "downwardapi-volume-c4167f7a-1ccd-4a75-a507-b83711dd34f5": Phase="Pending", Reason="", readiness=false. Elapsed: 48.344106ms
Feb  5 14:05:25.950: INFO: Pod "downwardapi-volume-c4167f7a-1ccd-4a75-a507-b83711dd34f5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061827778s
Feb  5 14:05:27.958: INFO: Pod "downwardapi-volume-c4167f7a-1ccd-4a75-a507-b83711dd34f5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07001465s
Feb  5 14:05:30.859: INFO: Pod "downwardapi-volume-c4167f7a-1ccd-4a75-a507-b83711dd34f5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.971111175s
Feb  5 14:05:33.353: INFO: Pod "downwardapi-volume-c4167f7a-1ccd-4a75-a507-b83711dd34f5": Phase="Pending", Reason="", readiness=false. Elapsed: 9.464633198s
Feb  5 14:05:35.361: INFO: Pod "downwardapi-volume-c4167f7a-1ccd-4a75-a507-b83711dd34f5": Phase="Pending", Reason="", readiness=false. Elapsed: 11.472966771s
Feb  5 14:05:37.374: INFO: Pod "downwardapi-volume-c4167f7a-1ccd-4a75-a507-b83711dd34f5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.486166889s
STEP: Saw pod success
Feb  5 14:05:37.374: INFO: Pod "downwardapi-volume-c4167f7a-1ccd-4a75-a507-b83711dd34f5" satisfied condition "success or failure"
Feb  5 14:05:37.379: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-c4167f7a-1ccd-4a75-a507-b83711dd34f5 container client-container: 
STEP: delete the pod
Feb  5 14:05:37.573: INFO: Waiting for pod downwardapi-volume-c4167f7a-1ccd-4a75-a507-b83711dd34f5 to disappear
Feb  5 14:05:37.597: INFO: Pod downwardapi-volume-c4167f7a-1ccd-4a75-a507-b83711dd34f5 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:05:37.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5280" for this suite.
Feb  5 14:05:43.643: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:05:43.801: INFO: namespace downward-api-5280 deletion completed in 6.196246657s

• [SLOW TEST:20.101 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
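The case above verifies that when a container declares no memory limit, the downward API volume exposes the node's allocatable memory as the default limit. The fallback rule can be sketched as (a hypothetical helper, not framework code):

```python
def effective_memory_limit(container_limit_bytes, node_allocatable_bytes):
    # limits.memory exposed via the downward API falls back to the
    # node's allocatable memory when the container sets no limit
    if container_limit_bytes is not None:
        return container_limit_bytes
    return node_allocatable_bytes
```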
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:05:43.802: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210
STEP: creating the pod
Feb  5 14:05:43.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4305'
Feb  5 14:05:44.233: INFO: stderr: ""
Feb  5 14:05:44.234: INFO: stdout: "pod/pause created\n"
Feb  5 14:05:44.234: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Feb  5 14:05:44.234: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-4305" to be "running and ready"
Feb  5 14:05:44.247: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 12.802893ms
Feb  5 14:05:46.257: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023207804s
Feb  5 14:05:48.268: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034084245s
Feb  5 14:05:50.275: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041444292s
Feb  5 14:05:52.281: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 8.047461265s
Feb  5 14:05:52.281: INFO: Pod "pause" satisfied condition "running and ready"
Feb  5 14:05:52.281: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: adding the label testing-label with value testing-label-value to a pod
Feb  5 14:05:52.281: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-4305'
Feb  5 14:05:52.403: INFO: stderr: ""
Feb  5 14:05:52.403: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Feb  5 14:05:52.403: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-4305'
Feb  5 14:05:52.504: INFO: stderr: ""
Feb  5 14:05:52.504: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          8s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Feb  5 14:05:52.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-4305'
Feb  5 14:05:52.672: INFO: stderr: ""
Feb  5 14:05:52.673: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Feb  5 14:05:52.673: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-4305'
Feb  5 14:05:52.745: INFO: stderr: ""
Feb  5 14:05:52.745: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          8s    \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217
STEP: using delete to clean up resources
Feb  5 14:05:52.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4305'
Feb  5 14:05:52.850: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  5 14:05:52.850: INFO: stdout: "pod \"pause\" force deleted\n"
Feb  5 14:05:52.851: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-4305'
Feb  5 14:05:52.984: INFO: stderr: "No resources found.\n"
Feb  5 14:05:52.984: INFO: stdout: ""
Feb  5 14:05:52.985: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-4305 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb  5 14:05:53.115: INFO: stderr: ""
Feb  5 14:05:53.115: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:05:53.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4305" for this suite.
Feb  5 14:05:59.148: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:05:59.257: INFO: namespace kubectl-4305 deletion completed in 6.130405458s

• [SLOW TEST:15.456 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
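The `kubectl label` invocations in the run above use `key=value` to set a label and a trailing `-` (as in `testing-label-`) to remove one. Their effect on a pod's label map can be sketched as (an illustrative helper, not kubectl source):

```python
def apply_label_ops(labels, ops):
    # mimic `kubectl label` argument semantics: "key=value" sets a
    # label, a trailing "-" (e.g. "testing-label-") removes it
    labels = dict(labels)
    for op in ops:
        if op.endswith("-"):
            labels.pop(op[:-1], None)
        else:
            key, _, value = op.partition("=")
            labels[key] = value
    return labels
```

The spec's two verification steps correspond to the label being present after the set operation and absent after the remove operation.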
------------------------------
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:05:59.258: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Feb  5 14:05:59.373: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:06:16.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-413" for this suite.
Feb  5 14:06:22.566: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:06:22.686: INFO: namespace pods-413 deletion completed in 6.147479892s

• [SLOW TEST:23.428 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:06:22.686: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-9491
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Feb  5 14:06:22.878: INFO: Found 0 stateful pods, waiting for 3
Feb  5 14:06:32.890: INFO: Found 2 stateful pods, waiting for 3
Feb  5 14:06:42.896: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  5 14:06:42.896: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  5 14:06:42.896: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb  5 14:06:52.889: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  5 14:06:52.889: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  5 14:06:52.889: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Feb  5 14:06:52.920: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Feb  5 14:07:02.985: INFO: Updating stateful set ss2
Feb  5 14:07:03.039: INFO: Waiting for Pod statefulset-9491/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Feb  5 14:07:13.242: INFO: Found 2 stateful pods, waiting for 3
Feb  5 14:07:23.250: INFO: Found 2 stateful pods, waiting for 3
Feb  5 14:07:33.257: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  5 14:07:33.257: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  5 14:07:33.257: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Feb  5 14:07:33.288: INFO: Updating stateful set ss2
Feb  5 14:07:33.297: INFO: Waiting for Pod statefulset-9491/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  5 14:07:43.596: INFO: Updating stateful set ss2
Feb  5 14:07:43.853: INFO: Waiting for StatefulSet statefulset-9491/ss2 to complete update
Feb  5 14:07:43.854: INFO: Waiting for Pod statefulset-9491/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  5 14:07:53.880: INFO: Waiting for StatefulSet statefulset-9491/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Feb  5 14:08:03.878: INFO: Deleting all statefulset in ns statefulset-9491
Feb  5 14:08:03.891: INFO: Scaling statefulset ss2 to 0
Feb  5 14:08:33.951: INFO: Waiting for statefulset status.replicas updated to 0
Feb  5 14:08:33.959: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:08:33.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9491" for this suite.
Feb  5 14:08:42.024: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:08:42.145: INFO: namespace statefulset-9491 deletion completed in 8.153260866s

• [SLOW TEST:139.459 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
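The canary and phased phases above hinge on the RollingUpdate partition: only pods whose ordinal is at or above the partition move to the update revision (`ss2-7c9b54fd4c` in this run), while lower ordinals stay on the current revision (`ss2-6c5cd755cd`). A sketch of that decision rule:

```python
def target_revision(ordinal, partition, current, update):
    # StatefulSet RollingUpdate with a partition: pods with
    # ordinal >= partition get the update revision (canary pattern),
    # the rest remain on the current revision
    return update if ordinal >= partition else current
```

Lowering the partition step by step is what turns the canary into the phased rolling update the spec performs.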
------------------------------
SS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:08:42.145: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  5 14:09:08.334: INFO: Container started at 2020-02-05 14:08:48 +0000 UTC, pod became ready at 2020-02-05 14:09:06 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:09:08.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5314" for this suite.
Feb  5 14:09:30.363: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:09:30.474: INFO: namespace container-probe-5314 deletion completed in 22.134883719s

• [SLOW TEST:48.329 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
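The assertion above compares container start (14:08:48) with the observed ready time (14:09:06): the pod must not report Ready before the readiness probe's initialDelaySeconds has elapsed. The constraint can be sketched as (the delay value below is hypothetical, not taken from this run):

```python
def may_be_ready(now_s, started_at_s, initial_delay_s):
    # probing begins only initialDelaySeconds after container start,
    # so readiness cannot be observed before that point
    return now_s >= started_at_s + initial_delay_s
```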
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:09:30.474: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-f4374e4d-0227-4f4b-85e3-d7fc689759fe
STEP: Creating a pod to test consume configMaps
Feb  5 14:09:30.621: INFO: Waiting up to 5m0s for pod "pod-configmaps-4a8072db-b4b0-458f-80fc-a54eac816897" in namespace "configmap-4280" to be "success or failure"
Feb  5 14:09:30.630: INFO: Pod "pod-configmaps-4a8072db-b4b0-458f-80fc-a54eac816897": Phase="Pending", Reason="", readiness=false. Elapsed: 8.277751ms
Feb  5 14:09:32.640: INFO: Pod "pod-configmaps-4a8072db-b4b0-458f-80fc-a54eac816897": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018649341s
Feb  5 14:09:34.645: INFO: Pod "pod-configmaps-4a8072db-b4b0-458f-80fc-a54eac816897": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023471937s
Feb  5 14:09:36.654: INFO: Pod "pod-configmaps-4a8072db-b4b0-458f-80fc-a54eac816897": Phase="Pending", Reason="", readiness=false. Elapsed: 6.032826631s
Feb  5 14:09:38.667: INFO: Pod "pod-configmaps-4a8072db-b4b0-458f-80fc-a54eac816897": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.045270271s
STEP: Saw pod success
Feb  5 14:09:38.667: INFO: Pod "pod-configmaps-4a8072db-b4b0-458f-80fc-a54eac816897" satisfied condition "success or failure"
Feb  5 14:09:38.671: INFO: Trying to get logs from node iruya-node pod pod-configmaps-4a8072db-b4b0-458f-80fc-a54eac816897 container configmap-volume-test: 
STEP: delete the pod
Feb  5 14:09:38.735: INFO: Waiting for pod pod-configmaps-4a8072db-b4b0-458f-80fc-a54eac816897 to disappear
Feb  5 14:09:38.739: INFO: Pod pod-configmaps-4a8072db-b4b0-458f-80fc-a54eac816897 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:09:38.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4280" for this suite.
Feb  5 14:09:44.760: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:09:44.899: INFO: namespace configmap-4280 deletion completed in 6.156364212s

• [SLOW TEST:14.425 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:09:44.900: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-47d9b726-86d4-45c1-9b12-a60541069692
STEP: Creating a pod to test consume secrets
Feb  5 14:09:44.996: INFO: Waiting up to 5m0s for pod "pod-secrets-2767640a-8720-4985-964c-2955d6ee4f92" in namespace "secrets-2853" to be "success or failure"
Feb  5 14:09:45.103: INFO: Pod "pod-secrets-2767640a-8720-4985-964c-2955d6ee4f92": Phase="Pending", Reason="", readiness=false. Elapsed: 106.206121ms
Feb  5 14:09:47.111: INFO: Pod "pod-secrets-2767640a-8720-4985-964c-2955d6ee4f92": Phase="Pending", Reason="", readiness=false. Elapsed: 2.114830685s
Feb  5 14:09:49.124: INFO: Pod "pod-secrets-2767640a-8720-4985-964c-2955d6ee4f92": Phase="Pending", Reason="", readiness=false. Elapsed: 4.127826151s
Feb  5 14:09:51.137: INFO: Pod "pod-secrets-2767640a-8720-4985-964c-2955d6ee4f92": Phase="Pending", Reason="", readiness=false. Elapsed: 6.140407742s
Feb  5 14:09:53.144: INFO: Pod "pod-secrets-2767640a-8720-4985-964c-2955d6ee4f92": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.147913792s
STEP: Saw pod success
Feb  5 14:09:53.145: INFO: Pod "pod-secrets-2767640a-8720-4985-964c-2955d6ee4f92" satisfied condition "success or failure"
Feb  5 14:09:53.151: INFO: Trying to get logs from node iruya-node pod pod-secrets-2767640a-8720-4985-964c-2955d6ee4f92 container secret-volume-test: 
STEP: delete the pod
Feb  5 14:09:53.257: INFO: Waiting for pod pod-secrets-2767640a-8720-4985-964c-2955d6ee4f92 to disappear
Feb  5 14:09:53.271: INFO: Pod pod-secrets-2767640a-8720-4985-964c-2955d6ee4f92 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:09:53.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2853" for this suite.
Feb  5 14:09:59.330: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:09:59.448: INFO: namespace secrets-2853 deletion completed in 6.17065809s

• [SLOW TEST:14.548 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:09:59.449: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Feb  5 14:09:59.512: INFO: Waiting up to 5m0s for pod "pod-d010ec0d-6417-4a21-b70d-c4c9ffa15a3a" in namespace "emptydir-7401" to be "success or failure"
Feb  5 14:09:59.569: INFO: Pod "pod-d010ec0d-6417-4a21-b70d-c4c9ffa15a3a": Phase="Pending", Reason="", readiness=false. Elapsed: 56.920157ms
Feb  5 14:10:01.577: INFO: Pod "pod-d010ec0d-6417-4a21-b70d-c4c9ffa15a3a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065337709s
Feb  5 14:10:03.587: INFO: Pod "pod-d010ec0d-6417-4a21-b70d-c4c9ffa15a3a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074669628s
Feb  5 14:10:05.871: INFO: Pod "pod-d010ec0d-6417-4a21-b70d-c4c9ffa15a3a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.359055252s
Feb  5 14:10:07.882: INFO: Pod "pod-d010ec0d-6417-4a21-b70d-c4c9ffa15a3a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.370277696s
Feb  5 14:10:09.892: INFO: Pod "pod-d010ec0d-6417-4a21-b70d-c4c9ffa15a3a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.380181104s
STEP: Saw pod success
Feb  5 14:10:09.892: INFO: Pod "pod-d010ec0d-6417-4a21-b70d-c4c9ffa15a3a" satisfied condition "success or failure"
Feb  5 14:10:09.898: INFO: Trying to get logs from node iruya-node pod pod-d010ec0d-6417-4a21-b70d-c4c9ffa15a3a container test-container: 
STEP: delete the pod
Feb  5 14:10:10.062: INFO: Waiting for pod pod-d010ec0d-6417-4a21-b70d-c4c9ffa15a3a to disappear
Feb  5 14:10:10.066: INFO: Pod pod-d010ec0d-6417-4a21-b70d-c4c9ffa15a3a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:10:10.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7401" for this suite.
Feb  5 14:10:16.090: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:10:16.190: INFO: namespace emptydir-7401 deletion completed in 6.118938524s

• [SLOW TEST:16.741 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:10:16.190: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  5 14:10:16.635: INFO: (0) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 12.725509ms)
Feb  5 14:10:16.640: INFO: (1) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.252226ms)
Feb  5 14:10:16.644: INFO: (2) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.790026ms)
Feb  5 14:10:16.650: INFO: (3) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.271541ms)
Feb  5 14:10:16.757: INFO: (4) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 106.814117ms)
Feb  5 14:10:16.768: INFO: (5) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 11.052268ms)
Feb  5 14:10:16.779: INFO: (6) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 11.031158ms)
Feb  5 14:10:16.791: INFO: (7) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 12.508938ms)
Feb  5 14:10:16.801: INFO: (8) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.287985ms)
Feb  5 14:10:16.808: INFO: (9) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.493857ms)
Feb  5 14:10:16.816: INFO: (10) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.769523ms)
Feb  5 14:10:16.825: INFO: (11) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.907929ms)
Feb  5 14:10:16.835: INFO: (12) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.485131ms)
Feb  5 14:10:16.841: INFO: (13) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.759106ms)
Feb  5 14:10:16.849: INFO: (14) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.567437ms)
Feb  5 14:10:16.860: INFO: (15) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.78016ms)
Feb  5 14:10:16.870: INFO: (16) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.862312ms)
Feb  5 14:10:16.879: INFO: (17) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.900966ms)
Feb  5 14:10:16.884: INFO: (18) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.853176ms)
Feb  5 14:10:16.893: INFO: (19) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.282117ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:10:16.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-9923" for this suite.
Feb  5 14:10:22.934: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:10:23.058: INFO: namespace proxy-9923 deletion completed in 6.156921411s

• [SLOW TEST:6.868 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
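Each of the twenty numbered requests above hits the node's `logs` proxy subresource through the apiserver. A sketch of how that request path is composed (the real test uses the Go client; this hypothetical helper only builds the URL path):

```python
def node_log_proxy_path(node: str, log_file: str = "") -> str:
    """Build the apiserver proxy-subresource path for a node's logs,
    e.g. /api/v1/nodes/iruya-node/proxy/logs/. Appending a file name
    requests a single log file instead of the directory listing."""
    return f"/api/v1/nodes/{node}/proxy/logs/{log_file}"
```

The directory listing returned by the bare path is what the log truncates to `alternatives.l... (200; ...)` above, with the trailing latency measured per request.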
SSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:10:23.058: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-e0480c13-6d5c-4d3e-b581-a41942037ee5
STEP: Creating a pod to test consume configMaps
Feb  5 14:10:23.200: INFO: Waiting up to 5m0s for pod "pod-configmaps-f8d2122e-f15d-4392-80ca-329d606e7091" in namespace "configmap-3027" to be "success or failure"
Feb  5 14:10:23.233: INFO: Pod "pod-configmaps-f8d2122e-f15d-4392-80ca-329d606e7091": Phase="Pending", Reason="", readiness=false. Elapsed: 32.887193ms
Feb  5 14:10:25.466: INFO: Pod "pod-configmaps-f8d2122e-f15d-4392-80ca-329d606e7091": Phase="Pending", Reason="", readiness=false. Elapsed: 2.265506662s
Feb  5 14:10:27.472: INFO: Pod "pod-configmaps-f8d2122e-f15d-4392-80ca-329d606e7091": Phase="Pending", Reason="", readiness=false. Elapsed: 4.271693145s
Feb  5 14:10:29.484: INFO: Pod "pod-configmaps-f8d2122e-f15d-4392-80ca-329d606e7091": Phase="Pending", Reason="", readiness=false. Elapsed: 6.284093453s
Feb  5 14:10:31.492: INFO: Pod "pod-configmaps-f8d2122e-f15d-4392-80ca-329d606e7091": Phase="Pending", Reason="", readiness=false. Elapsed: 8.291822852s
Feb  5 14:10:33.499: INFO: Pod "pod-configmaps-f8d2122e-f15d-4392-80ca-329d606e7091": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.299283467s
STEP: Saw pod success
Feb  5 14:10:33.499: INFO: Pod "pod-configmaps-f8d2122e-f15d-4392-80ca-329d606e7091" satisfied condition "success or failure"
Feb  5 14:10:33.504: INFO: Trying to get logs from node iruya-node pod pod-configmaps-f8d2122e-f15d-4392-80ca-329d606e7091 container configmap-volume-test: 
STEP: delete the pod
Feb  5 14:10:33.559: INFO: Waiting for pod pod-configmaps-f8d2122e-f15d-4392-80ca-329d606e7091 to disappear
Feb  5 14:10:33.567: INFO: Pod pod-configmaps-f8d2122e-f15d-4392-80ca-329d606e7091 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:10:33.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3027" for this suite.
Feb  5 14:10:39.599: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:10:39.719: INFO: namespace configmap-3027 deletion completed in 6.146519034s

• [SLOW TEST:16.661 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:10:39.719: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Feb  5 14:10:39.804: INFO: Waiting up to 5m0s for pod "pod-ec26ce8d-9d0f-4128-adca-95006a28601b" in namespace "emptydir-4406" to be "success or failure"
Feb  5 14:10:39.856: INFO: Pod "pod-ec26ce8d-9d0f-4128-adca-95006a28601b": Phase="Pending", Reason="", readiness=false. Elapsed: 51.241862ms
Feb  5 14:10:41.878: INFO: Pod "pod-ec26ce8d-9d0f-4128-adca-95006a28601b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073031157s
Feb  5 14:10:43.911: INFO: Pod "pod-ec26ce8d-9d0f-4128-adca-95006a28601b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.106229345s
Feb  5 14:10:45.918: INFO: Pod "pod-ec26ce8d-9d0f-4128-adca-95006a28601b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.113569531s
Feb  5 14:10:47.926: INFO: Pod "pod-ec26ce8d-9d0f-4128-adca-95006a28601b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.121206769s
STEP: Saw pod success
Feb  5 14:10:47.926: INFO: Pod "pod-ec26ce8d-9d0f-4128-adca-95006a28601b" satisfied condition "success or failure"
Feb  5 14:10:47.930: INFO: Trying to get logs from node iruya-node pod pod-ec26ce8d-9d0f-4128-adca-95006a28601b container test-container: 
STEP: delete the pod
Feb  5 14:10:48.035: INFO: Waiting for pod pod-ec26ce8d-9d0f-4128-adca-95006a28601b to disappear
Feb  5 14:10:48.050: INFO: Pod pod-ec26ce8d-9d0f-4128-adca-95006a28601b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:10:48.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4406" for this suite.
Feb  5 14:10:54.077: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:10:54.178: INFO: namespace emptydir-4406 deletion completed in 6.120736378s

• [SLOW TEST:14.458 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:10:54.178: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  5 14:10:54.260: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-891'
Feb  5 14:10:56.244: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb  5 14:10:56.244: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Feb  5 14:10:56.263: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Feb  5 14:10:56.317: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Feb  5 14:10:56.335: INFO: scanned /root for discovery docs: 
Feb  5 14:10:56.335: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-891'
Feb  5 14:11:17.499: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Feb  5 14:11:17.499: INFO: stdout: "Created e2e-test-nginx-rc-d2088c540f6e9b7557598d0968e1b2a1\nScaling up e2e-test-nginx-rc-d2088c540f6e9b7557598d0968e1b2a1 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-d2088c540f6e9b7557598d0968e1b2a1 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-d2088c540f6e9b7557598d0968e1b2a1 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Feb  5 14:11:17.500: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-891'
Feb  5 14:11:17.649: INFO: stderr: ""
Feb  5 14:11:17.649: INFO: stdout: "e2e-test-nginx-rc-d2088c540f6e9b7557598d0968e1b2a1-vxktg "
Feb  5 14:11:17.649: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-d2088c540f6e9b7557598d0968e1b2a1-vxktg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-891'
Feb  5 14:11:17.755: INFO: stderr: ""
Feb  5 14:11:17.755: INFO: stdout: "true"
Feb  5 14:11:17.755: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-d2088c540f6e9b7557598d0968e1b2a1-vxktg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-891'
Feb  5 14:11:17.829: INFO: stderr: ""
Feb  5 14:11:17.829: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Feb  5 14:11:17.829: INFO: e2e-test-nginx-rc-d2088c540f6e9b7557598d0968e1b2a1-vxktg is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522
Feb  5 14:11:17.829: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-891'
Feb  5 14:11:17.987: INFO: stderr: ""
Feb  5 14:11:17.987: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:11:17.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-891" for this suite.
Feb  5 14:11:24.121: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:11:24.222: INFO: namespace kubectl-891 deletion completed in 6.211319556s

• [SLOW TEST:30.044 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
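The `rolling-update` stdout above spells out the constraint the (now-removed) command enforced: "keep 1 pods available, don't exceed 2 pods" while scaling the new RC from 0 to 1 and the old one from 1 to 0. An illustrative sketch of that scaling schedule (not kubectl's implementation):

```python
def rolling_update_steps(desired=1, min_available=1, max_total=2):
    """Return the sequence of (new_rc, old_rc) replica counts for a
    surge-first rolling update: scale the new RC up whenever the total
    stays within max_total, otherwise scale the old RC down while at
    least min_available pods remain."""
    new, old = 0, desired
    steps = [(new, old)]
    while new < desired or old > 0:
        if new < desired and new + old < max_total:
            new += 1   # surge: bring up a new-RC pod
        else:
            old -= 1   # availability allows retiring an old-RC pod
        steps.append((new, old))
    return steps
```

For the single-replica case in the log this produces `(0, 1) -> (1, 1) -> (1, 0)`: the new pod comes up first, the total briefly reaches 2, and only then is the old pod scaled away.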
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:11:24.222: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  5 14:11:24.321: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-694'
Feb  5 14:11:24.466: INFO: stderr: ""
Feb  5 14:11:24.466: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690
Feb  5 14:11:24.487: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-694'
Feb  5 14:11:36.532: INFO: stderr: ""
Feb  5 14:11:36.532: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:11:36.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-694" for this suite.
Feb  5 14:11:42.604: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:11:42.710: INFO: namespace kubectl-694 deletion completed in 6.169443045s

• [SLOW TEST:18.487 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:11:42.711: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-3328/configmap-test-a0faf47c-b3cc-40b4-ba81-db6f4a92c573
STEP: Creating a pod to test consume configMaps
Feb  5 14:11:42.784: INFO: Waiting up to 5m0s for pod "pod-configmaps-6f94a1f8-d543-4b10-b2fc-baacb5dd2f20" in namespace "configmap-3328" to be "success or failure"
Feb  5 14:11:42.794: INFO: Pod "pod-configmaps-6f94a1f8-d543-4b10-b2fc-baacb5dd2f20": Phase="Pending", Reason="", readiness=false. Elapsed: 9.324643ms
Feb  5 14:11:44.803: INFO: Pod "pod-configmaps-6f94a1f8-d543-4b10-b2fc-baacb5dd2f20": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018956663s
Feb  5 14:11:46.815: INFO: Pod "pod-configmaps-6f94a1f8-d543-4b10-b2fc-baacb5dd2f20": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030369595s
Feb  5 14:11:48.826: INFO: Pod "pod-configmaps-6f94a1f8-d543-4b10-b2fc-baacb5dd2f20": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04165511s
Feb  5 14:11:50.842: INFO: Pod "pod-configmaps-6f94a1f8-d543-4b10-b2fc-baacb5dd2f20": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.057335857s
STEP: Saw pod success
Feb  5 14:11:50.842: INFO: Pod "pod-configmaps-6f94a1f8-d543-4b10-b2fc-baacb5dd2f20" satisfied condition "success or failure"
Feb  5 14:11:50.850: INFO: Trying to get logs from node iruya-node pod pod-configmaps-6f94a1f8-d543-4b10-b2fc-baacb5dd2f20 container env-test: 
STEP: delete the pod
Feb  5 14:11:50.972: INFO: Waiting for pod pod-configmaps-6f94a1f8-d543-4b10-b2fc-baacb5dd2f20 to disappear
Feb  5 14:11:51.024: INFO: Pod pod-configmaps-6f94a1f8-d543-4b10-b2fc-baacb5dd2f20 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:11:51.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3328" for this suite.
Feb  5 14:11:57.101: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:11:57.201: INFO: namespace configmap-3328 deletion completed in 6.12719437s

• [SLOW TEST:14.490 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
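The ConfigMap spec above consumes `configmap-test-a0faf47c-...` through the pod's environment rather than a volume. A sketch of the mapping that consumption implies, turning ConfigMap `data` into the env-var entries a container spec would carry (an illustration of the shape, not client-go):

```python
def configmap_to_env(data, prefix=""):
    """Map ConfigMap data keys to container env-var entries, as a pod
    using envFrom (with an optional prefix) would see them. Sorted for
    deterministic output."""
    return [{"name": prefix + key, "value": value}
            for key, value in sorted(data.items())]
```

For example, `configmap_to_env({"data-1": "value-1"})` yields `[{"name": "data-1", "value": "value-1"}]`, which the test's `env-test` container would then echo for verification.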
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:11:57.201: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-6aef706e-f1fa-4aaf-be5b-cc24e8a7417b
STEP: Creating a pod to test consume secrets
Feb  5 14:11:57.326: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-027660af-4e86-4f5b-9a8c-c0fdc7d52509" in namespace "projected-4043" to be "success or failure"
Feb  5 14:11:57.357: INFO: Pod "pod-projected-secrets-027660af-4e86-4f5b-9a8c-c0fdc7d52509": Phase="Pending", Reason="", readiness=false. Elapsed: 31.54507ms
Feb  5 14:11:59.366: INFO: Pod "pod-projected-secrets-027660af-4e86-4f5b-9a8c-c0fdc7d52509": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040581319s
Feb  5 14:12:01.372: INFO: Pod "pod-projected-secrets-027660af-4e86-4f5b-9a8c-c0fdc7d52509": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046251421s
Feb  5 14:12:03.389: INFO: Pod "pod-projected-secrets-027660af-4e86-4f5b-9a8c-c0fdc7d52509": Phase="Pending", Reason="", readiness=false. Elapsed: 6.062775039s
Feb  5 14:12:05.584: INFO: Pod "pod-projected-secrets-027660af-4e86-4f5b-9a8c-c0fdc7d52509": Phase="Pending", Reason="", readiness=false. Elapsed: 8.258448703s
Feb  5 14:12:07.598: INFO: Pod "pod-projected-secrets-027660af-4e86-4f5b-9a8c-c0fdc7d52509": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.272307018s
STEP: Saw pod success
Feb  5 14:12:07.598: INFO: Pod "pod-projected-secrets-027660af-4e86-4f5b-9a8c-c0fdc7d52509" satisfied condition "success or failure"
Feb  5 14:12:07.605: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-027660af-4e86-4f5b-9a8c-c0fdc7d52509 container projected-secret-volume-test: 
STEP: delete the pod
Feb  5 14:12:07.679: INFO: Waiting for pod pod-projected-secrets-027660af-4e86-4f5b-9a8c-c0fdc7d52509 to disappear
Feb  5 14:12:07.689: INFO: Pod pod-projected-secrets-027660af-4e86-4f5b-9a8c-c0fdc7d52509 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:12:07.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4043" for this suite.
Feb  5 14:12:13.719: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:12:13.902: INFO: namespace projected-4043 deletion completed in 6.205043178s

• [SLOW TEST:16.701 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:12:13.903: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-08918d0d-8e27-4d78-b971-0360e2c850c8
STEP: Creating a pod to test consume secrets
Feb  5 14:12:14.195: INFO: Waiting up to 5m0s for pod "pod-secrets-b08049bf-09e8-46c6-a677-492154128c95" in namespace "secrets-27" to be "success or failure"
Feb  5 14:12:14.200: INFO: Pod "pod-secrets-b08049bf-09e8-46c6-a677-492154128c95": Phase="Pending", Reason="", readiness=false. Elapsed: 4.52945ms
Feb  5 14:12:16.212: INFO: Pod "pod-secrets-b08049bf-09e8-46c6-a677-492154128c95": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017190584s
Feb  5 14:12:18.218: INFO: Pod "pod-secrets-b08049bf-09e8-46c6-a677-492154128c95": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022845707s
Feb  5 14:12:20.225: INFO: Pod "pod-secrets-b08049bf-09e8-46c6-a677-492154128c95": Phase="Pending", Reason="", readiness=false. Elapsed: 6.029654618s
Feb  5 14:12:22.239: INFO: Pod "pod-secrets-b08049bf-09e8-46c6-a677-492154128c95": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.044084474s
STEP: Saw pod success
Feb  5 14:12:22.239: INFO: Pod "pod-secrets-b08049bf-09e8-46c6-a677-492154128c95" satisfied condition "success or failure"
Feb  5 14:12:22.248: INFO: Trying to get logs from node iruya-node pod pod-secrets-b08049bf-09e8-46c6-a677-492154128c95 container secret-volume-test: 
STEP: delete the pod
Feb  5 14:12:22.365: INFO: Waiting for pod pod-secrets-b08049bf-09e8-46c6-a677-492154128c95 to disappear
Feb  5 14:12:22.369: INFO: Pod pod-secrets-b08049bf-09e8-46c6-a677-492154128c95 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:12:22.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-27" for this suite.
Feb  5 14:12:28.404: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:12:28.570: INFO: namespace secrets-27 deletion completed in 6.194155736s
STEP: Destroying namespace "secret-namespace-4460" for this suite.
Feb  5 14:12:34.606: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:12:34.770: INFO: namespace secret-namespace-4460 deletion completed in 6.199853331s

• [SLOW TEST:20.867 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
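The repeated `Waiting up to 5m0s for pod … to be "success or failure"` lines above come from the framework's poll-until-terminal-phase wait: it re-reads the pod's phase roughly every 2 seconds, logging the elapsed time, until the pod reaches `Succeeded` or `Failed` or the timeout expires. A minimal Python sketch of that pattern, under the assumption that `get_phase` is a hypothetical stand-in for a real pod-status lookup (the actual framework is Go and queries the API server):

```python
import time

def wait_for_pod_phase(get_phase, want=("Succeeded", "Failed"),
                       timeout=300.0, interval=2.0):
    """Poll get_phase() until it returns a phase in `want` or timeout expires.

    Mirrors the log's "success or failure" wait: poll every `interval`
    seconds and report the phase plus elapsed time on each attempt.
    `get_phase` is a hypothetical callable standing in for a real
    pod-status lookup against the API server.
    """
    start = time.monotonic()
    while True:
        phase = get_phase()
        elapsed = time.monotonic() - start
        print(f'Pod phase={phase!r} elapsed={elapsed:.3f}s')
        if phase in want:
            return phase
        if elapsed > timeout:
            raise TimeoutError(f"pod still {phase!r} after {timeout}s")
        time.sleep(interval)
```

The same loop shape also explains the later `Waiting for pod … to disappear` and `Selector matched 1 pods … Found 0 / 1` sequences: only the predicate being polled changes.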
SS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:12:34.771: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-projected-all-test-volume-ca0287a1-d297-4374-9c3a-7c816e23377b
STEP: Creating secret with name secret-projected-all-test-volume-c430dd13-f081-428c-8664-473fb68a5eeb
STEP: Creating a pod to test Check all projections for projected volume plugin
Feb  5 14:12:34.939: INFO: Waiting up to 5m0s for pod "projected-volume-ce7e3ec6-638e-48c9-aad5-908cfcef5ae7" in namespace "projected-3806" to be "success or failure"
Feb  5 14:12:34.950: INFO: Pod "projected-volume-ce7e3ec6-638e-48c9-aad5-908cfcef5ae7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.640941ms
Feb  5 14:12:36.998: INFO: Pod "projected-volume-ce7e3ec6-638e-48c9-aad5-908cfcef5ae7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058760356s
Feb  5 14:12:39.071: INFO: Pod "projected-volume-ce7e3ec6-638e-48c9-aad5-908cfcef5ae7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.132035757s
Feb  5 14:12:41.078: INFO: Pod "projected-volume-ce7e3ec6-638e-48c9-aad5-908cfcef5ae7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.139033067s
Feb  5 14:12:43.085: INFO: Pod "projected-volume-ce7e3ec6-638e-48c9-aad5-908cfcef5ae7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.145361961s
STEP: Saw pod success
Feb  5 14:12:43.085: INFO: Pod "projected-volume-ce7e3ec6-638e-48c9-aad5-908cfcef5ae7" satisfied condition "success or failure"
Feb  5 14:12:43.092: INFO: Trying to get logs from node iruya-node pod projected-volume-ce7e3ec6-638e-48c9-aad5-908cfcef5ae7 container projected-all-volume-test: 
STEP: delete the pod
Feb  5 14:12:43.194: INFO: Waiting for pod projected-volume-ce7e3ec6-638e-48c9-aad5-908cfcef5ae7 to disappear
Feb  5 14:12:43.200: INFO: Pod projected-volume-ce7e3ec6-638e-48c9-aad5-908cfcef5ae7 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:12:43.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3806" for this suite.
Feb  5 14:12:49.320: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:12:49.461: INFO: namespace projected-3806 deletion completed in 6.255121518s

• [SLOW TEST:14.690 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:12:49.463: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Feb  5 14:12:49.593: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Registering the sample API server.
Feb  5 14:12:50.179: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Feb  5 14:12:52.497: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716508770, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716508770, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716508770, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716508770, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  5 14:12:54.516: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716508770, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716508770, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716508770, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716508770, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  5 14:12:56.512: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716508770, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716508770, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716508770, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716508770, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  5 14:12:58.514: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716508770, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716508770, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716508770, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716508770, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  5 14:13:00.509: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716508770, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716508770, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716508770, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716508770, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  5 14:13:06.184: INFO: Waited 3.65665143s for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:13:06.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-8378" for this suite.
Feb  5 14:13:12.893: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:13:12.984: INFO: namespace aggregator-8378 deletion completed in 6.169387208s

• [SLOW TEST:23.522 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:13:12.985: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Feb  5 14:13:13.262: INFO: namespace kubectl-3821
Feb  5 14:13:13.262: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3821'
Feb  5 14:13:13.994: INFO: stderr: ""
Feb  5 14:13:13.994: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Feb  5 14:13:15.002: INFO: Selector matched 1 pods for map[app:redis]
Feb  5 14:13:15.002: INFO: Found 0 / 1
Feb  5 14:13:16.021: INFO: Selector matched 1 pods for map[app:redis]
Feb  5 14:13:16.021: INFO: Found 0 / 1
Feb  5 14:13:17.009: INFO: Selector matched 1 pods for map[app:redis]
Feb  5 14:13:17.010: INFO: Found 0 / 1
Feb  5 14:13:18.028: INFO: Selector matched 1 pods for map[app:redis]
Feb  5 14:13:18.028: INFO: Found 0 / 1
Feb  5 14:13:19.006: INFO: Selector matched 1 pods for map[app:redis]
Feb  5 14:13:19.006: INFO: Found 0 / 1
Feb  5 14:13:20.003: INFO: Selector matched 1 pods for map[app:redis]
Feb  5 14:13:20.003: INFO: Found 0 / 1
Feb  5 14:13:21.023: INFO: Selector matched 1 pods for map[app:redis]
Feb  5 14:13:21.023: INFO: Found 0 / 1
Feb  5 14:13:22.029: INFO: Selector matched 1 pods for map[app:redis]
Feb  5 14:13:22.029: INFO: Found 0 / 1
Feb  5 14:13:23.004: INFO: Selector matched 1 pods for map[app:redis]
Feb  5 14:13:23.004: INFO: Found 1 / 1
Feb  5 14:13:23.004: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Feb  5 14:13:23.011: INFO: Selector matched 1 pods for map[app:redis]
Feb  5 14:13:23.011: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Feb  5 14:13:23.011: INFO: wait on redis-master startup in kubectl-3821 
Feb  5 14:13:23.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-ptcnq redis-master --namespace=kubectl-3821'
Feb  5 14:13:23.215: INFO: stderr: ""
Feb  5 14:13:23.216: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 05 Feb 14:13:20.828 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 05 Feb 14:13:20.828 # Server started, Redis version 3.2.12\n1:M 05 Feb 14:13:20.829 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 05 Feb 14:13:20.829 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Feb  5 14:13:23.216: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-3821'
Feb  5 14:13:23.469: INFO: stderr: ""
Feb  5 14:13:23.469: INFO: stdout: "service/rm2 exposed\n"
Feb  5 14:13:23.474: INFO: Service rm2 in namespace kubectl-3821 found.
STEP: exposing service
Feb  5 14:13:25.488: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-3821'
Feb  5 14:13:25.767: INFO: stderr: ""
Feb  5 14:13:25.767: INFO: stdout: "service/rm3 exposed\n"
Feb  5 14:13:25.775: INFO: Service rm3 in namespace kubectl-3821 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:13:27.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3821" for this suite.
Feb  5 14:13:51.833: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:13:52.064: INFO: namespace kubectl-3821 deletion completed in 24.262877549s

• [SLOW TEST:39.079 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
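For reference, the `kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379` step in the test above generates a Service roughly equivalent to the following manifest, assuming the replication controller's selector is `app: redis` as the `Selector matched 1 pods for map[app:redis]` lines indicate (field values taken from the logged command; this is a sketch, not the exact object the server stored):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: rm2
  namespace: kubectl-3821
spec:
  selector:
    app: redis        # copied from the RC's selector
  ports:
  - protocol: TCP
    port: 1234        # service port from --port
    targetPort: 6379  # container port from --target-port
```

The subsequent `expose service rm2 --name=rm3 --port=2345 --target-port=6379` works the same way, copying rm2's selector into a second Service listening on port 2345.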
SSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:13:52.064: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179
[It] should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:13:52.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-963" for this suite.
Feb  5 14:14:16.309: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:14:16.442: INFO: namespace pods-963 deletion completed in 24.160944663s

• [SLOW TEST:24.378 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:14:16.442: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  5 14:14:16.514: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:14:27.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6651" for this suite.
Feb  5 14:15:08.092: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:15:08.221: INFO: namespace pods-6651 deletion completed in 40.978107519s

• [SLOW TEST:51.779 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:15:08.223: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Feb  5 14:18:07.523: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  5 14:18:07.543: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  5 14:18:09.544: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  5 14:18:09.553: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  5 14:18:11.544: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  5 14:18:11.551: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  5 14:18:13.544: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  5 14:18:13.551: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  5 14:18:15.544: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  5 14:18:15.551: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  5 14:18:17.544: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  5 14:18:17.558: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  5 14:18:19.544: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  5 14:18:19.553: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  5 14:18:21.544: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  5 14:18:21.553: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  5 14:18:23.544: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  5 14:18:23.555: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  5 14:18:25.544: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  5 14:18:25.555: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  5 14:18:27.544: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  5 14:18:27.561: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  5 14:18:29.544: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  5 14:18:29.553: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  5 14:18:31.544: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  5 14:18:31.559: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  5 14:18:33.544: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  5 14:18:33.552: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  5 14:18:35.544: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  5 14:18:35.552: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  5 14:18:37.544: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  5 14:18:37.556: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:18:37.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-8236" for this suite.
Feb  5 14:18:59.616: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:18:59.715: INFO: namespace container-lifecycle-hook-8236 deletion completed in 22.15034121s

• [SLOW TEST:231.492 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:18:59.715: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Feb  5 14:18:59.789: INFO: Waiting up to 5m0s for pod "pod-456395ce-51ab-448a-a9ad-48d51b37eb0b" in namespace "emptydir-3535" to be "success or failure"
Feb  5 14:18:59.797: INFO: Pod "pod-456395ce-51ab-448a-a9ad-48d51b37eb0b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.42062ms
Feb  5 14:19:01.812: INFO: Pod "pod-456395ce-51ab-448a-a9ad-48d51b37eb0b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023498465s
Feb  5 14:19:03.824: INFO: Pod "pod-456395ce-51ab-448a-a9ad-48d51b37eb0b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035302767s
Feb  5 14:19:05.838: INFO: Pod "pod-456395ce-51ab-448a-a9ad-48d51b37eb0b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049596757s
Feb  5 14:19:07.856: INFO: Pod "pod-456395ce-51ab-448a-a9ad-48d51b37eb0b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.066895316s
STEP: Saw pod success
Feb  5 14:19:07.856: INFO: Pod "pod-456395ce-51ab-448a-a9ad-48d51b37eb0b" satisfied condition "success or failure"
Feb  5 14:19:07.884: INFO: Trying to get logs from node iruya-node pod pod-456395ce-51ab-448a-a9ad-48d51b37eb0b container test-container: 
STEP: delete the pod
Feb  5 14:19:07.965: INFO: Waiting for pod pod-456395ce-51ab-448a-a9ad-48d51b37eb0b to disappear
Feb  5 14:19:07.972: INFO: Pod pod-456395ce-51ab-448a-a9ad-48d51b37eb0b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:19:07.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3535" for this suite.
Feb  5 14:19:14.001: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:19:14.068: INFO: namespace emptydir-3535 deletion completed in 6.09008222s

• [SLOW TEST:14.353 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:19:14.068: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-8dfa41a9-373b-4fe6-858a-8ff8c0a3f0e4
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-8dfa41a9-373b-4fe6-858a-8ff8c0a3f0e4
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:20:54.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4404" for this suite.
Feb  5 14:21:16.084: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:21:16.204: INFO: namespace configmap-4404 deletion completed in 22.146614537s

• [SLOW TEST:122.136 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:21:16.205: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb  5 14:21:16.296: INFO: Waiting up to 5m0s for pod "downward-api-6029dda6-f41e-473b-af26-84790aed55d5" in namespace "downward-api-7917" to be "success or failure"
Feb  5 14:21:16.313: INFO: Pod "downward-api-6029dda6-f41e-473b-af26-84790aed55d5": Phase="Pending", Reason="", readiness=false. Elapsed: 16.301284ms
Feb  5 14:21:18.568: INFO: Pod "downward-api-6029dda6-f41e-473b-af26-84790aed55d5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.272103082s
Feb  5 14:21:20.580: INFO: Pod "downward-api-6029dda6-f41e-473b-af26-84790aed55d5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.28407408s
Feb  5 14:21:22.592: INFO: Pod "downward-api-6029dda6-f41e-473b-af26-84790aed55d5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.296004841s
Feb  5 14:21:24.602: INFO: Pod "downward-api-6029dda6-f41e-473b-af26-84790aed55d5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.306060377s
Feb  5 14:21:26.617: INFO: Pod "downward-api-6029dda6-f41e-473b-af26-84790aed55d5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.320315954s
Feb  5 14:21:28.634: INFO: Pod "downward-api-6029dda6-f41e-473b-af26-84790aed55d5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.337492063s
STEP: Saw pod success
Feb  5 14:21:28.634: INFO: Pod "downward-api-6029dda6-f41e-473b-af26-84790aed55d5" satisfied condition "success or failure"
Feb  5 14:21:28.640: INFO: Trying to get logs from node iruya-node pod downward-api-6029dda6-f41e-473b-af26-84790aed55d5 container dapi-container: 
STEP: delete the pod
Feb  5 14:21:28.794: INFO: Waiting for pod downward-api-6029dda6-f41e-473b-af26-84790aed55d5 to disappear
Feb  5 14:21:28.800: INFO: Pod downward-api-6029dda6-f41e-473b-af26-84790aed55d5 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:21:28.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7917" for this suite.
Feb  5 14:21:34.845: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:21:34.946: INFO: namespace downward-api-7917 deletion completed in 6.140264229s

• [SLOW TEST:18.742 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:21:34.947: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Feb  5 14:21:35.130: INFO: Waiting up to 5m0s for pod "pod-4dc70ed3-5e73-44e2-9054-1836b84dd53f" in namespace "emptydir-8909" to be "success or failure"
Feb  5 14:21:35.144: INFO: Pod "pod-4dc70ed3-5e73-44e2-9054-1836b84dd53f": Phase="Pending", Reason="", readiness=false. Elapsed: 14.481633ms
Feb  5 14:21:37.159: INFO: Pod "pod-4dc70ed3-5e73-44e2-9054-1836b84dd53f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028626215s
Feb  5 14:21:39.167: INFO: Pod "pod-4dc70ed3-5e73-44e2-9054-1836b84dd53f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037205294s
Feb  5 14:21:41.175: INFO: Pod "pod-4dc70ed3-5e73-44e2-9054-1836b84dd53f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044549429s
Feb  5 14:21:43.181: INFO: Pod "pod-4dc70ed3-5e73-44e2-9054-1836b84dd53f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.050641475s
STEP: Saw pod success
Feb  5 14:21:43.181: INFO: Pod "pod-4dc70ed3-5e73-44e2-9054-1836b84dd53f" satisfied condition "success or failure"
Feb  5 14:21:43.184: INFO: Trying to get logs from node iruya-node pod pod-4dc70ed3-5e73-44e2-9054-1836b84dd53f container test-container: 
STEP: delete the pod
Feb  5 14:21:43.230: INFO: Waiting for pod pod-4dc70ed3-5e73-44e2-9054-1836b84dd53f to disappear
Feb  5 14:21:43.250: INFO: Pod pod-4dc70ed3-5e73-44e2-9054-1836b84dd53f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:21:43.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8909" for this suite.
Feb  5 14:21:49.424: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:21:49.536: INFO: namespace emptydir-8909 deletion completed in 6.278995632s

• [SLOW TEST:14.589 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:21:49.537: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Feb  5 14:21:57.647: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-f05ab8b4-6b08-4d7e-a928-5788bef3f0a5,GenerateName:,Namespace:events-7054,SelfLink:/api/v1/namespaces/events-7054/pods/send-events-f05ab8b4-6b08-4d7e-a928-5788bef3f0a5,UID:f2816b28-a99d-4fa8-b532-0060f16f0f82,ResourceVersion:23203174,Generation:0,CreationTimestamp:2020-02-05 14:21:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 605517136,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-brpdj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-brpdj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-brpdj true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001cdf7e0} {node.kubernetes.io/unreachable Exists  NoExecute 
0xc001cdf800}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 14:21:49 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 14:21:57 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 14:21:57 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 14:21:49 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-02-05 14:21:49 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-02-05 14:21:56 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://4b90f8b487d18d236e273378599cd9dc1f1608f5955bff55d0158ad60fd20ec2}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Feb  5 14:21:59.659: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Feb  5 14:22:01.668: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:22:01.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-7054" for this suite.
Feb  5 14:22:41.785: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:22:41.932: INFO: namespace events-7054 deletion completed in 40.239369606s

• [SLOW TEST:52.396 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:22:41.933: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-1540b9e4-a5b1-49d2-9e70-a535f1754af5
STEP: Creating a pod to test consume secrets
Feb  5 14:22:42.082: INFO: Waiting up to 5m0s for pod "pod-secrets-38c8b890-1cad-44bb-bbed-32bd3c000548" in namespace "secrets-1604" to be "success or failure"
Feb  5 14:22:42.092: INFO: Pod "pod-secrets-38c8b890-1cad-44bb-bbed-32bd3c000548": Phase="Pending", Reason="", readiness=false. Elapsed: 10.29565ms
Feb  5 14:22:44.107: INFO: Pod "pod-secrets-38c8b890-1cad-44bb-bbed-32bd3c000548": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024564091s
Feb  5 14:22:46.114: INFO: Pod "pod-secrets-38c8b890-1cad-44bb-bbed-32bd3c000548": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032152106s
Feb  5 14:22:48.121: INFO: Pod "pod-secrets-38c8b890-1cad-44bb-bbed-32bd3c000548": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038703448s
Feb  5 14:22:50.127: INFO: Pod "pod-secrets-38c8b890-1cad-44bb-bbed-32bd3c000548": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.044528239s
STEP: Saw pod success
Feb  5 14:22:50.127: INFO: Pod "pod-secrets-38c8b890-1cad-44bb-bbed-32bd3c000548" satisfied condition "success or failure"
Feb  5 14:22:50.130: INFO: Trying to get logs from node iruya-node pod pod-secrets-38c8b890-1cad-44bb-bbed-32bd3c000548 container secret-volume-test: 
STEP: delete the pod
Feb  5 14:22:50.185: INFO: Waiting for pod pod-secrets-38c8b890-1cad-44bb-bbed-32bd3c000548 to disappear
Feb  5 14:22:50.193: INFO: Pod pod-secrets-38c8b890-1cad-44bb-bbed-32bd3c000548 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:22:50.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1604" for this suite.
Feb  5 14:22:56.275: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:22:56.405: INFO: namespace secrets-1604 deletion completed in 6.20618989s

• [SLOW TEST:14.473 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:22:56.406: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  5 14:22:56.481: INFO: Creating ReplicaSet my-hostname-basic-14900072-3a7d-41c3-829d-e92cad190aa6
Feb  5 14:22:56.528: INFO: Pod name my-hostname-basic-14900072-3a7d-41c3-829d-e92cad190aa6: Found 0 pods out of 1
Feb  5 14:23:01.539: INFO: Pod name my-hostname-basic-14900072-3a7d-41c3-829d-e92cad190aa6: Found 1 pods out of 1
Feb  5 14:23:01.539: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-14900072-3a7d-41c3-829d-e92cad190aa6" is running
Feb  5 14:23:05.551: INFO: Pod "my-hostname-basic-14900072-3a7d-41c3-829d-e92cad190aa6-mg8rb" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-05 14:22:56 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-05 14:22:56 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-14900072-3a7d-41c3-829d-e92cad190aa6]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-05 14:22:56 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-14900072-3a7d-41c3-829d-e92cad190aa6]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-05 14:22:56 +0000 UTC Reason: Message:}])
Feb  5 14:23:05.551: INFO: Trying to dial the pod
Feb  5 14:23:10.606: INFO: Controller my-hostname-basic-14900072-3a7d-41c3-829d-e92cad190aa6: Got expected result from replica 1 [my-hostname-basic-14900072-3a7d-41c3-829d-e92cad190aa6-mg8rb]: "my-hostname-basic-14900072-3a7d-41c3-829d-e92cad190aa6-mg8rb", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:23:10.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-1230" for this suite.
Feb  5 14:23:16.643: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:23:16.723: INFO: namespace replicaset-1230 deletion completed in 6.101250435s

• [SLOW TEST:20.317 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:23:16.723: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  5 14:23:16.980: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7a8365a9-fee5-4c0c-9836-1c41cdbfbc37" in namespace "downward-api-3550" to be "success or failure"
Feb  5 14:23:17.030: INFO: Pod "downwardapi-volume-7a8365a9-fee5-4c0c-9836-1c41cdbfbc37": Phase="Pending", Reason="", readiness=false. Elapsed: 50.068433ms
Feb  5 14:23:19.063: INFO: Pod "downwardapi-volume-7a8365a9-fee5-4c0c-9836-1c41cdbfbc37": Phase="Pending", Reason="", readiness=false. Elapsed: 2.082770461s
Feb  5 14:23:21.074: INFO: Pod "downwardapi-volume-7a8365a9-fee5-4c0c-9836-1c41cdbfbc37": Phase="Pending", Reason="", readiness=false. Elapsed: 4.093667139s
Feb  5 14:23:23.083: INFO: Pod "downwardapi-volume-7a8365a9-fee5-4c0c-9836-1c41cdbfbc37": Phase="Pending", Reason="", readiness=false. Elapsed: 6.103488097s
Feb  5 14:23:25.129: INFO: Pod "downwardapi-volume-7a8365a9-fee5-4c0c-9836-1c41cdbfbc37": Phase="Pending", Reason="", readiness=false. Elapsed: 8.148584717s
Feb  5 14:23:27.140: INFO: Pod "downwardapi-volume-7a8365a9-fee5-4c0c-9836-1c41cdbfbc37": Phase="Pending", Reason="", readiness=false. Elapsed: 10.159691993s
Feb  5 14:23:29.182: INFO: Pod "downwardapi-volume-7a8365a9-fee5-4c0c-9836-1c41cdbfbc37": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.202251986s
STEP: Saw pod success
Feb  5 14:23:29.182: INFO: Pod "downwardapi-volume-7a8365a9-fee5-4c0c-9836-1c41cdbfbc37" satisfied condition "success or failure"
Feb  5 14:23:29.194: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-7a8365a9-fee5-4c0c-9836-1c41cdbfbc37 container client-container: 
STEP: delete the pod
Feb  5 14:23:29.602: INFO: Waiting for pod downwardapi-volume-7a8365a9-fee5-4c0c-9836-1c41cdbfbc37 to disappear
Feb  5 14:23:29.609: INFO: Pod downwardapi-volume-7a8365a9-fee5-4c0c-9836-1c41cdbfbc37 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:23:29.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3550" for this suite.
Feb  5 14:23:35.722: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:23:35.900: INFO: namespace downward-api-3550 deletion completed in 6.283329762s

• [SLOW TEST:19.177 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:23:35.901: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-fb050040-b08e-4892-865b-de1cbebb4178
STEP: Creating a pod to test consume configMaps
Feb  5 14:23:36.134: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b344e6a3-693f-43de-9089-ec0a9e1ab704" in namespace "projected-3392" to be "success or failure"
Feb  5 14:23:36.179: INFO: Pod "pod-projected-configmaps-b344e6a3-693f-43de-9089-ec0a9e1ab704": Phase="Pending", Reason="", readiness=false. Elapsed: 45.032374ms
Feb  5 14:23:38.190: INFO: Pod "pod-projected-configmaps-b344e6a3-693f-43de-9089-ec0a9e1ab704": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056260737s
Feb  5 14:23:40.203: INFO: Pod "pod-projected-configmaps-b344e6a3-693f-43de-9089-ec0a9e1ab704": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0686819s
Feb  5 14:23:42.209: INFO: Pod "pod-projected-configmaps-b344e6a3-693f-43de-9089-ec0a9e1ab704": Phase="Pending", Reason="", readiness=false. Elapsed: 6.074997218s
Feb  5 14:23:44.237: INFO: Pod "pod-projected-configmaps-b344e6a3-693f-43de-9089-ec0a9e1ab704": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.102452185s
STEP: Saw pod success
Feb  5 14:23:44.237: INFO: Pod "pod-projected-configmaps-b344e6a3-693f-43de-9089-ec0a9e1ab704" satisfied condition "success or failure"
Feb  5 14:23:44.244: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-b344e6a3-693f-43de-9089-ec0a9e1ab704 container projected-configmap-volume-test: 
STEP: delete the pod
Feb  5 14:23:44.362: INFO: Waiting for pod pod-projected-configmaps-b344e6a3-693f-43de-9089-ec0a9e1ab704 to disappear
Feb  5 14:23:44.368: INFO: Pod pod-projected-configmaps-b344e6a3-693f-43de-9089-ec0a9e1ab704 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:23:44.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3392" for this suite.
Feb  5 14:23:50.402: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:23:50.517: INFO: namespace projected-3392 deletion completed in 6.13985487s

• [SLOW TEST:14.617 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:23:50.519: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb  5 14:23:58.734: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:23:58.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5211" for this suite.
Feb  5 14:24:04.942: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:24:05.042: INFO: namespace container-runtime-5211 deletion completed in 6.134213146s

• [SLOW TEST:14.523 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:24:05.042: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override arguments
Feb  5 14:24:05.119: INFO: Waiting up to 5m0s for pod "client-containers-c963df3d-97a9-44f7-8c79-f83ca423d0c2" in namespace "containers-3972" to be "success or failure"
Feb  5 14:24:05.164: INFO: Pod "client-containers-c963df3d-97a9-44f7-8c79-f83ca423d0c2": Phase="Pending", Reason="", readiness=false. Elapsed: 44.074945ms
Feb  5 14:24:07.175: INFO: Pod "client-containers-c963df3d-97a9-44f7-8c79-f83ca423d0c2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055035844s
Feb  5 14:24:09.181: INFO: Pod "client-containers-c963df3d-97a9-44f7-8c79-f83ca423d0c2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061796057s
Feb  5 14:24:11.192: INFO: Pod "client-containers-c963df3d-97a9-44f7-8c79-f83ca423d0c2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.072833457s
Feb  5 14:24:13.200: INFO: Pod "client-containers-c963df3d-97a9-44f7-8c79-f83ca423d0c2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.080360117s
STEP: Saw pod success
Feb  5 14:24:13.200: INFO: Pod "client-containers-c963df3d-97a9-44f7-8c79-f83ca423d0c2" satisfied condition "success or failure"
Feb  5 14:24:13.204: INFO: Trying to get logs from node iruya-node pod client-containers-c963df3d-97a9-44f7-8c79-f83ca423d0c2 container test-container: 
STEP: delete the pod
Feb  5 14:24:13.354: INFO: Waiting for pod client-containers-c963df3d-97a9-44f7-8c79-f83ca423d0c2 to disappear
Feb  5 14:24:13.369: INFO: Pod client-containers-c963df3d-97a9-44f7-8c79-f83ca423d0c2 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:24:13.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-3972" for this suite.
Feb  5 14:24:19.440: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:24:19.527: INFO: namespace containers-3972 deletion completed in 6.111065394s

• [SLOW TEST:14.485 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:24:19.527: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  5 14:24:19.605: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fc3ef6c7-768e-4633-90d6-8e7d0d32f404" in namespace "downward-api-8394" to be "success or failure"
Feb  5 14:24:19.615: INFO: Pod "downwardapi-volume-fc3ef6c7-768e-4633-90d6-8e7d0d32f404": Phase="Pending", Reason="", readiness=false. Elapsed: 10.034209ms
Feb  5 14:24:21.624: INFO: Pod "downwardapi-volume-fc3ef6c7-768e-4633-90d6-8e7d0d32f404": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019101663s
Feb  5 14:24:23.638: INFO: Pod "downwardapi-volume-fc3ef6c7-768e-4633-90d6-8e7d0d32f404": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033202337s
Feb  5 14:24:25.649: INFO: Pod "downwardapi-volume-fc3ef6c7-768e-4633-90d6-8e7d0d32f404": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043658184s
Feb  5 14:24:27.662: INFO: Pod "downwardapi-volume-fc3ef6c7-768e-4633-90d6-8e7d0d32f404": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.056939379s
STEP: Saw pod success
Feb  5 14:24:27.662: INFO: Pod "downwardapi-volume-fc3ef6c7-768e-4633-90d6-8e7d0d32f404" satisfied condition "success or failure"
Feb  5 14:24:27.667: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-fc3ef6c7-768e-4633-90d6-8e7d0d32f404 container client-container: 
STEP: delete the pod
Feb  5 14:24:27.815: INFO: Waiting for pod downwardapi-volume-fc3ef6c7-768e-4633-90d6-8e7d0d32f404 to disappear
Feb  5 14:24:27.831: INFO: Pod downwardapi-volume-fc3ef6c7-768e-4633-90d6-8e7d0d32f404 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:24:27.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8394" for this suite.
Feb  5 14:24:34.044: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:24:34.242: INFO: namespace downward-api-8394 deletion completed in 6.394778743s

• [SLOW TEST:14.715 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
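Editor's note on the "success or failure" wait loop above: the framework polls the pod's phase until it reaches a terminal state. A minimal sketch of that decision, with `pod_condition` as a hypothetical stand-in for the framework's internal check (the real implementation lives in the e2e framework's Go code):

```shell
# Hypothetical sketch of the per-poll decision in the "success or failure"
# wait seen in the Downward API test above: a terminal Succeeded phase ends
# the wait as success, Failed ends it as failure, anything else keeps polling.
pod_condition() {
  case "$1" in
    Succeeded) echo "success" ;;
    Failed)    echo "failure" ;;
    *)         echo "pending" ;;   # Pending/Running/Unknown: keep waiting
  esac
}

pod_condition Pending     # prints "pending" (poll again, as at 14:24:19-14:24:25)
pod_condition Succeeded   # prints "success" (the 14:24:27 poll above)
```

This mirrors why the log shows several Phase="Pending" lines before the single Phase="Succeeded" line ends the wait.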
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:24:34.244: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5486.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5486.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
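Editor's note on the `podARec` expression in the probe commands above: the awk transform turns the pod's IP into its in-cluster A-record name, `<a>-<b>-<c>-<d>.<namespace>.pod.cluster.local`. A standalone illustration (the IP 10.32.0.4 and namespace dns-5486 are taken from this run's log; the `$$` in the probe commands is template escaping for `$`):

```shell
# Derive a pod A-record name from its IP, as the probe's
# `hostname -i | awk ...` pipeline does (namespace dns-5486 from this run).
ip="10.32.0.4"
podARec=$(echo "$ip" | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-5486.pod.cluster.local"}')
echo "$podARec"   # prints 10-32-0-4.dns-5486.pod.cluster.local
```

Each probe then checks that `dig` returns a non-empty answer for that name over both UDP and TCP before writing its OK marker.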

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb  5 14:24:46.475: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod dns-5486/dns-test-be722625-0fde-46a9-a367-82b7a62128b6: the server could not find the requested resource (get pods dns-test-be722625-0fde-46a9-a367-82b7a62128b6)
Feb  5 14:24:46.499: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-5486/dns-test-be722625-0fde-46a9-a367-82b7a62128b6: the server could not find the requested resource (get pods dns-test-be722625-0fde-46a9-a367-82b7a62128b6)
Feb  5 14:24:46.507: INFO: Unable to read wheezy_udp@PodARecord from pod dns-5486/dns-test-be722625-0fde-46a9-a367-82b7a62128b6: the server could not find the requested resource (get pods dns-test-be722625-0fde-46a9-a367-82b7a62128b6)
Feb  5 14:24:46.519: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-5486/dns-test-be722625-0fde-46a9-a367-82b7a62128b6: the server could not find the requested resource (get pods dns-test-be722625-0fde-46a9-a367-82b7a62128b6)
Feb  5 14:24:46.525: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod dns-5486/dns-test-be722625-0fde-46a9-a367-82b7a62128b6: the server could not find the requested resource (get pods dns-test-be722625-0fde-46a9-a367-82b7a62128b6)
Feb  5 14:24:46.531: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod dns-5486/dns-test-be722625-0fde-46a9-a367-82b7a62128b6: the server could not find the requested resource (get pods dns-test-be722625-0fde-46a9-a367-82b7a62128b6)
Feb  5 14:24:46.535: INFO: Unable to read jessie_udp@PodARecord from pod dns-5486/dns-test-be722625-0fde-46a9-a367-82b7a62128b6: the server could not find the requested resource (get pods dns-test-be722625-0fde-46a9-a367-82b7a62128b6)
Feb  5 14:24:46.540: INFO: Unable to read jessie_tcp@PodARecord from pod dns-5486/dns-test-be722625-0fde-46a9-a367-82b7a62128b6: the server could not find the requested resource (get pods dns-test-be722625-0fde-46a9-a367-82b7a62128b6)
Feb  5 14:24:46.540: INFO: Lookups using dns-5486/dns-test-be722625-0fde-46a9-a367-82b7a62128b6 failed for: [wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord]

Feb  5 14:24:51.611: INFO: DNS probes using dns-5486/dns-test-be722625-0fde-46a9-a367-82b7a62128b6 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:24:51.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5486" for this suite.
Feb  5 14:24:57.770: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:24:57.950: INFO: namespace dns-5486 deletion completed in 6.223525901s

• [SLOW TEST:23.707 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:24:57.951: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  5 14:24:58.091: INFO: Pod name rollover-pod: Found 0 pods out of 1
Feb  5 14:25:03.099: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb  5 14:25:05.117: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Feb  5 14:25:07.123: INFO: Creating deployment "test-rollover-deployment"
Feb  5 14:25:07.142: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Feb  5 14:25:09.154: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Feb  5 14:25:09.176: INFO: Ensure that both replica sets have 1 created replica
Feb  5 14:25:09.190: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Feb  5 14:25:09.204: INFO: Updating deployment test-rollover-deployment
Feb  5 14:25:09.204: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Feb  5 14:25:11.439: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Feb  5 14:25:11.484: INFO: Make sure deployment "test-rollover-deployment" is complete
Feb  5 14:25:11.494: INFO: all replica sets need to contain the pod-template-hash label
Feb  5 14:25:11.494: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716509507, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716509507, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716509509, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716509507, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  5 14:25:13.505: INFO: all replica sets need to contain the pod-template-hash label
Feb  5 14:25:13.505: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716509507, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716509507, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716509509, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716509507, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  5 14:25:16.593: INFO: all replica sets need to contain the pod-template-hash label
Feb  5 14:25:16.593: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716509507, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716509507, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716509509, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716509507, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  5 14:25:17.505: INFO: all replica sets need to contain the pod-template-hash label
Feb  5 14:25:17.505: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716509507, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716509507, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716509509, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716509507, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  5 14:25:19.501: INFO: all replica sets need to contain the pod-template-hash label
Feb  5 14:25:19.501: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716509507, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716509507, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716509509, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716509507, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  5 14:25:22.584: INFO: all replica sets need to contain the pod-template-hash label
Feb  5 14:25:22.585: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716509507, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716509507, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716509519, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716509507, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  5 14:25:23.517: INFO: all replica sets need to contain the pod-template-hash label
Feb  5 14:25:23.517: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716509507, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716509507, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716509519, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716509507, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  5 14:25:25.515: INFO: all replica sets need to contain the pod-template-hash label
Feb  5 14:25:25.515: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716509507, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716509507, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716509519, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716509507, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  5 14:25:27.512: INFO: all replica sets need to contain the pod-template-hash label
Feb  5 14:25:27.512: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716509507, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716509507, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716509519, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716509507, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  5 14:25:29.510: INFO: all replica sets need to contain the pod-template-hash label
Feb  5 14:25:29.511: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716509507, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716509507, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716509519, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716509507, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  5 14:25:31.506: INFO: 
Feb  5 14:25:31.506: INFO: Ensure that both old replica sets have no replicas
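Editor's note on the poll loop above (14:25:11 through 14:25:29): the test repeats its check because the Deployment is not yet complete — the status dumps show UpdatedReplicas:1 against Replicas:2 with UnavailableReplicas:1. A hedged sketch of the completeness condition, with `deployment_complete` as a hypothetical helper (the real check is Go code in test/e2e/apps; the field values below are read from this run's status dumps):

```shell
# Sketch of the Deployment completeness condition the rollover test waits on:
# the controller has observed the latest generation, and every replica is
# both updated to the new template and available.
deployment_complete() {
  local observed=$1 generation=$2 replicas=$3 updated=$4 available=$5
  [ "$observed" -ge "$generation" ] &&
  [ "$updated" -eq "$replicas" ] &&
  [ "$available" -eq "$replicas" ]
}

# Mid-rollover status from the 14:25:11 dump: 2 replicas, only 1 updated/available.
deployment_complete 2 2 2 1 1 && echo complete || echo progressing   # prints "progressing"

# Final status from the Deployment dump below: 1 replica, updated and available.
deployment_complete 2 2 1 1 1 && echo complete || echo progressing   # prints "complete"
```

Once this condition holds, the test additionally verifies both old ReplicaSets have been scaled to zero, which is what the 14:25:31 line records.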
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Feb  5 14:25:31.528: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-5484,SelfLink:/apis/apps/v1/namespaces/deployment-5484/deployments/test-rollover-deployment,UID:45b26438-124f-4ae6-a362-319801bb2963,ResourceVersion:23203750,Generation:2,CreationTimestamp:2020-02-05 14:25:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-02-05 14:25:07 +0000 UTC 2020-02-05 14:25:07 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-02-05 14:25:30 +0000 UTC 2020-02-05 14:25:07 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Feb  5 14:25:31.573: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-5484,SelfLink:/apis/apps/v1/namespaces/deployment-5484/replicasets/test-rollover-deployment-854595fc44,UID:4844f945-ec73-41b1-a2a1-8899ac1d2702,ResourceVersion:23203739,Generation:2,CreationTimestamp:2020-02-05 14:25:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 45b26438-124f-4ae6-a362-319801bb2963 0xc002e4f517 0xc002e4f518}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Feb  5 14:25:31.573: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Feb  5 14:25:31.574: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-5484,SelfLink:/apis/apps/v1/namespaces/deployment-5484/replicasets/test-rollover-controller,UID:3d0b7c61-16ea-445c-be5b-16dbf5219fc0,ResourceVersion:23203748,Generation:2,CreationTimestamp:2020-02-05 14:24:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 45b26438-124f-4ae6-a362-319801bb2963 0xc002e4f42f 0xc002e4f440}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb  5 14:25:31.574: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-5484,SelfLink:/apis/apps/v1/namespaces/deployment-5484/replicasets/test-rollover-deployment-9b8b997cf,UID:3e953f8e-1608-49d4-8fa1-657382970072,ResourceVersion:23203704,Generation:2,CreationTimestamp:2020-02-05 14:25:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 45b26438-124f-4ae6-a362-319801bb2963 0xc002e4f5e0 0xc002e4f5e1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb  5 14:25:31.583: INFO: Pod "test-rollover-deployment-854595fc44-5dlrn" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-5dlrn,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-5484,SelfLink:/api/v1/namespaces/deployment-5484/pods/test-rollover-deployment-854595fc44-5dlrn,UID:2b7ec149-cb12-4641-a87b-4b09e43050f1,ResourceVersion:23203723,Generation:0,CreationTimestamp:2020-02-05 14:25:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 4844f945-ec73-41b1-a2a1-8899ac1d2702 0xc002f7e457 0xc002f7e458}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bhmd8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bhmd8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-bhmd8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002f7e540} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002f7e680}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 14:25:09 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 14:25:19 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 14:25:19 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 14:25:09 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.4,StartTime:2020-02-05 14:25:09 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-02-05 14:25:18 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://f4f5084107c1aa022f2874d01e5f157fdd92f4ed6e2a6c6620472f979dc5abdd}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:25:31.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-5484" for this suite.
Feb  5 14:25:39.627: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:25:39.749: INFO: namespace deployment-5484 deletion completed in 8.159276334s

• [SLOW TEST:41.798 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:25:39.750: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Feb  5 14:25:40.467: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:25:53.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-6018" for this suite.
Feb  5 14:26:00.019: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:26:00.141: INFO: namespace init-container-6018 deletion completed in 6.206908452s

• [SLOW TEST:20.391 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:26:00.141: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
Feb  5 14:26:00.771: INFO: created pod pod-service-account-defaultsa
Feb  5 14:26:00.771: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Feb  5 14:26:00.832: INFO: created pod pod-service-account-mountsa
Feb  5 14:26:00.832: INFO: pod pod-service-account-mountsa service account token volume mount: true
Feb  5 14:26:00.864: INFO: created pod pod-service-account-nomountsa
Feb  5 14:26:00.864: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Feb  5 14:26:00.890: INFO: created pod pod-service-account-defaultsa-mountspec
Feb  5 14:26:00.890: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Feb  5 14:26:01.346: INFO: created pod pod-service-account-mountsa-mountspec
Feb  5 14:26:01.346: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Feb  5 14:26:01.365: INFO: created pod pod-service-account-nomountsa-mountspec
Feb  5 14:26:01.365: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Feb  5 14:26:01.833: INFO: created pod pod-service-account-defaultsa-nomountspec
Feb  5 14:26:01.834: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Feb  5 14:26:02.401: INFO: created pod pod-service-account-mountsa-nomountspec
Feb  5 14:26:02.402: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Feb  5 14:26:02.490: INFO: created pod pod-service-account-nomountsa-nomountspec
Feb  5 14:26:02.490: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:26:02.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-6617" for this suite.
Feb  5 14:26:39.903: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:26:40.008: INFO: namespace svcaccounts-6617 deletion completed in 37.379133522s

• [SLOW TEST:39.867 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:26:40.008: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb  5 14:26:40.103: INFO: Waiting up to 5m0s for pod "downward-api-ec490e97-7072-4d20-b9a0-99b52165b476" in namespace "downward-api-4393" to be "success or failure"
Feb  5 14:26:40.111: INFO: Pod "downward-api-ec490e97-7072-4d20-b9a0-99b52165b476": Phase="Pending", Reason="", readiness=false. Elapsed: 8.838008ms
Feb  5 14:26:42.123: INFO: Pod "downward-api-ec490e97-7072-4d20-b9a0-99b52165b476": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020075407s
Feb  5 14:26:44.133: INFO: Pod "downward-api-ec490e97-7072-4d20-b9a0-99b52165b476": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030254225s
Feb  5 14:26:46.139: INFO: Pod "downward-api-ec490e97-7072-4d20-b9a0-99b52165b476": Phase="Pending", Reason="", readiness=false. Elapsed: 6.035994141s
Feb  5 14:26:48.160: INFO: Pod "downward-api-ec490e97-7072-4d20-b9a0-99b52165b476": Phase="Pending", Reason="", readiness=false. Elapsed: 8.057313203s
Feb  5 14:26:50.168: INFO: Pod "downward-api-ec490e97-7072-4d20-b9a0-99b52165b476": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.06506916s
STEP: Saw pod success
Feb  5 14:26:50.168: INFO: Pod "downward-api-ec490e97-7072-4d20-b9a0-99b52165b476" satisfied condition "success or failure"
Feb  5 14:26:50.170: INFO: Trying to get logs from node iruya-node pod downward-api-ec490e97-7072-4d20-b9a0-99b52165b476 container dapi-container: 
STEP: delete the pod
Feb  5 14:26:50.241: INFO: Waiting for pod downward-api-ec490e97-7072-4d20-b9a0-99b52165b476 to disappear
Feb  5 14:26:50.296: INFO: Pod downward-api-ec490e97-7072-4d20-b9a0-99b52165b476 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:26:50.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4393" for this suite.
Feb  5 14:26:56.341: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:26:56.503: INFO: namespace downward-api-4393 deletion completed in 6.198913213s

• [SLOW TEST:16.495 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:26:56.504: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Feb  5 14:27:05.221: INFO: Successfully updated pod "pod-update-activedeadlineseconds-19045ea5-955b-43df-b651-2305b3539642"
Feb  5 14:27:05.221: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-19045ea5-955b-43df-b651-2305b3539642" in namespace "pods-2050" to be "terminated due to deadline exceeded"
Feb  5 14:27:05.247: INFO: Pod "pod-update-activedeadlineseconds-19045ea5-955b-43df-b651-2305b3539642": Phase="Running", Reason="", readiness=true. Elapsed: 25.980111ms
Feb  5 14:27:07.255: INFO: Pod "pod-update-activedeadlineseconds-19045ea5-955b-43df-b651-2305b3539642": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.034117463s
Feb  5 14:27:07.256: INFO: Pod "pod-update-activedeadlineseconds-19045ea5-955b-43df-b651-2305b3539642" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:27:07.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2050" for this suite.
Feb  5 14:27:13.288: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:27:13.435: INFO: namespace pods-2050 deletion completed in 6.169111996s

• [SLOW TEST:16.931 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:27:13.436: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Feb  5 14:27:13.571: INFO: Waiting up to 5m0s for pod "pod-4d18f9fc-7ba6-444d-be44-96555768d741" in namespace "emptydir-5943" to be "success or failure"
Feb  5 14:27:13.629: INFO: Pod "pod-4d18f9fc-7ba6-444d-be44-96555768d741": Phase="Pending", Reason="", readiness=false. Elapsed: 57.973362ms
Feb  5 14:27:15.637: INFO: Pod "pod-4d18f9fc-7ba6-444d-be44-96555768d741": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066438956s
Feb  5 14:27:17.651: INFO: Pod "pod-4d18f9fc-7ba6-444d-be44-96555768d741": Phase="Pending", Reason="", readiness=false. Elapsed: 4.080334106s
Feb  5 14:27:19.658: INFO: Pod "pod-4d18f9fc-7ba6-444d-be44-96555768d741": Phase="Pending", Reason="", readiness=false. Elapsed: 6.086893106s
Feb  5 14:27:21.667: INFO: Pod "pod-4d18f9fc-7ba6-444d-be44-96555768d741": Phase="Pending", Reason="", readiness=false. Elapsed: 8.096072347s
Feb  5 14:27:23.677: INFO: Pod "pod-4d18f9fc-7ba6-444d-be44-96555768d741": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.10625637s
STEP: Saw pod success
Feb  5 14:27:23.677: INFO: Pod "pod-4d18f9fc-7ba6-444d-be44-96555768d741" satisfied condition "success or failure"
Feb  5 14:27:23.683: INFO: Trying to get logs from node iruya-node pod pod-4d18f9fc-7ba6-444d-be44-96555768d741 container test-container: 
STEP: delete the pod
Feb  5 14:27:23.839: INFO: Waiting for pod pod-4d18f9fc-7ba6-444d-be44-96555768d741 to disappear
Feb  5 14:27:23.844: INFO: Pod pod-4d18f9fc-7ba6-444d-be44-96555768d741 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:27:23.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5943" for this suite.
Feb  5 14:27:29.912: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:27:30.052: INFO: namespace emptydir-5943 deletion completed in 6.190043236s

• [SLOW TEST:16.616 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:27:30.053: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb  5 14:27:30.340: INFO: Waiting up to 5m0s for pod "downward-api-0c41363f-828f-478c-ad15-ab28c1cd482e" in namespace "downward-api-687" to be "success or failure"
Feb  5 14:27:30.357: INFO: Pod "downward-api-0c41363f-828f-478c-ad15-ab28c1cd482e": Phase="Pending", Reason="", readiness=false. Elapsed: 16.432787ms
Feb  5 14:27:32.366: INFO: Pod "downward-api-0c41363f-828f-478c-ad15-ab28c1cd482e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02546025s
Feb  5 14:27:34.373: INFO: Pod "downward-api-0c41363f-828f-478c-ad15-ab28c1cd482e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03296303s
Feb  5 14:27:36.393: INFO: Pod "downward-api-0c41363f-828f-478c-ad15-ab28c1cd482e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05201641s
Feb  5 14:27:38.399: INFO: Pod "downward-api-0c41363f-828f-478c-ad15-ab28c1cd482e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.058616202s
STEP: Saw pod success
Feb  5 14:27:38.399: INFO: Pod "downward-api-0c41363f-828f-478c-ad15-ab28c1cd482e" satisfied condition "success or failure"
Feb  5 14:27:38.403: INFO: Trying to get logs from node iruya-node pod downward-api-0c41363f-828f-478c-ad15-ab28c1cd482e container dapi-container: 
STEP: delete the pod
Feb  5 14:27:38.468: INFO: Waiting for pod downward-api-0c41363f-828f-478c-ad15-ab28c1cd482e to disappear
Feb  5 14:27:38.473: INFO: Pod downward-api-0c41363f-828f-478c-ad15-ab28c1cd482e no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:27:38.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-687" for this suite.
Feb  5 14:27:44.511: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:27:44.606: INFO: namespace downward-api-687 deletion completed in 6.126173755s

• [SLOW TEST:14.553 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:27:44.607: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Feb  5 14:27:44.785: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-6301,SelfLink:/api/v1/namespaces/watch-6301/configmaps/e2e-watch-test-label-changed,UID:43ca58ea-cd51-4a6b-b313-39b745b614d5,ResourceVersion:23204201,Generation:0,CreationTimestamp:2020-02-05 14:27:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb  5 14:27:44.786: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-6301,SelfLink:/api/v1/namespaces/watch-6301/configmaps/e2e-watch-test-label-changed,UID:43ca58ea-cd51-4a6b-b313-39b745b614d5,ResourceVersion:23204203,Generation:0,CreationTimestamp:2020-02-05 14:27:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Feb  5 14:27:44.786: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-6301,SelfLink:/api/v1/namespaces/watch-6301/configmaps/e2e-watch-test-label-changed,UID:43ca58ea-cd51-4a6b-b313-39b745b614d5,ResourceVersion:23204204,Generation:0,CreationTimestamp:2020-02-05 14:27:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Feb  5 14:27:54.876: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-6301,SelfLink:/api/v1/namespaces/watch-6301/configmaps/e2e-watch-test-label-changed,UID:43ca58ea-cd51-4a6b-b313-39b745b614d5,ResourceVersion:23204219,Generation:0,CreationTimestamp:2020-02-05 14:27:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb  5 14:27:54.876: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-6301,SelfLink:/api/v1/namespaces/watch-6301/configmaps/e2e-watch-test-label-changed,UID:43ca58ea-cd51-4a6b-b313-39b745b614d5,ResourceVersion:23204220,Generation:0,CreationTimestamp:2020-02-05 14:27:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Feb  5 14:27:54.876: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-6301,SelfLink:/api/v1/namespaces/watch-6301/configmaps/e2e-watch-test-label-changed,UID:43ca58ea-cd51-4a6b-b313-39b745b614d5,ResourceVersion:23204221,Generation:0,CreationTimestamp:2020-02-05 14:27:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:27:54.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-6301" for this suite.
Feb  5 14:28:00.937: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:28:01.039: INFO: namespace watch-6301 deletion completed in 6.154902242s

• [SLOW TEST:16.433 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:28:01.039: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-5n8c
STEP: Creating a pod to test atomic-volume-subpath
Feb  5 14:28:01.239: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-5n8c" in namespace "subpath-2163" to be "success or failure"
Feb  5 14:28:01.297: INFO: Pod "pod-subpath-test-configmap-5n8c": Phase="Pending", Reason="", readiness=false. Elapsed: 57.449584ms
Feb  5 14:28:03.315: INFO: Pod "pod-subpath-test-configmap-5n8c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075472271s
Feb  5 14:28:05.328: INFO: Pod "pod-subpath-test-configmap-5n8c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.088121555s
Feb  5 14:28:07.337: INFO: Pod "pod-subpath-test-configmap-5n8c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.097359117s
Feb  5 14:28:09.358: INFO: Pod "pod-subpath-test-configmap-5n8c": Phase="Running", Reason="", readiness=true. Elapsed: 8.11850853s
Feb  5 14:28:11.366: INFO: Pod "pod-subpath-test-configmap-5n8c": Phase="Running", Reason="", readiness=true. Elapsed: 10.12678417s
Feb  5 14:28:13.395: INFO: Pod "pod-subpath-test-configmap-5n8c": Phase="Running", Reason="", readiness=true. Elapsed: 12.155710737s
Feb  5 14:28:15.403: INFO: Pod "pod-subpath-test-configmap-5n8c": Phase="Running", Reason="", readiness=true. Elapsed: 14.163845757s
Feb  5 14:28:17.410: INFO: Pod "pod-subpath-test-configmap-5n8c": Phase="Running", Reason="", readiness=true. Elapsed: 16.170903983s
Feb  5 14:28:19.424: INFO: Pod "pod-subpath-test-configmap-5n8c": Phase="Running", Reason="", readiness=true. Elapsed: 18.184801669s
Feb  5 14:28:21.434: INFO: Pod "pod-subpath-test-configmap-5n8c": Phase="Running", Reason="", readiness=true. Elapsed: 20.194961002s
Feb  5 14:28:23.803: INFO: Pod "pod-subpath-test-configmap-5n8c": Phase="Running", Reason="", readiness=true. Elapsed: 22.56327094s
Feb  5 14:28:25.813: INFO: Pod "pod-subpath-test-configmap-5n8c": Phase="Running", Reason="", readiness=true. Elapsed: 24.573870982s
Feb  5 14:28:28.736: INFO: Pod "pod-subpath-test-configmap-5n8c": Phase="Running", Reason="", readiness=true. Elapsed: 27.49698483s
Feb  5 14:28:30.745: INFO: Pod "pod-subpath-test-configmap-5n8c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 29.505062246s
STEP: Saw pod success
Feb  5 14:28:30.745: INFO: Pod "pod-subpath-test-configmap-5n8c" satisfied condition "success or failure"
Feb  5 14:28:30.747: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-5n8c container test-container-subpath-configmap-5n8c: 
STEP: delete the pod
Feb  5 14:28:30.830: INFO: Waiting for pod pod-subpath-test-configmap-5n8c to disappear
Feb  5 14:28:30.834: INFO: Pod pod-subpath-test-configmap-5n8c no longer exists
STEP: Deleting pod pod-subpath-test-configmap-5n8c
Feb  5 14:28:30.834: INFO: Deleting pod "pod-subpath-test-configmap-5n8c" in namespace "subpath-2163"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:28:30.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2163" for this suite.
Feb  5 14:28:36.908: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:28:37.146: INFO: namespace subpath-2163 deletion completed in 6.303102841s

• [SLOW TEST:36.106 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:28:37.146: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service endpoint-test2 in namespace services-8778
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8778 to expose endpoints map[]
Feb  5 14:28:37.300: INFO: Get endpoints failed (9.203613ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Feb  5 14:28:38.317: INFO: successfully validated that service endpoint-test2 in namespace services-8778 exposes endpoints map[] (1.025970746s elapsed)
STEP: Creating pod pod1 in namespace services-8778
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8778 to expose endpoints map[pod1:[80]]
Feb  5 14:28:42.428: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.094113926s elapsed, will retry)
Feb  5 14:28:45.488: INFO: successfully validated that service endpoint-test2 in namespace services-8778 exposes endpoints map[pod1:[80]] (7.15403483s elapsed)
STEP: Creating pod pod2 in namespace services-8778
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8778 to expose endpoints map[pod1:[80] pod2:[80]]
Feb  5 14:28:49.725: INFO: Unexpected endpoints: found map[4ada0797-77f0-43d7-8f7b-defac40b0738:[80]], expected map[pod1:[80] pod2:[80]] (4.225695296s elapsed, will retry)
Feb  5 14:28:52.843: INFO: successfully validated that service endpoint-test2 in namespace services-8778 exposes endpoints map[pod1:[80] pod2:[80]] (7.344126959s elapsed)
STEP: Deleting pod pod1 in namespace services-8778
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8778 to expose endpoints map[pod2:[80]]
Feb  5 14:28:53.980: INFO: successfully validated that service endpoint-test2 in namespace services-8778 exposes endpoints map[pod2:[80]] (1.100784605s elapsed)
STEP: Deleting pod pod2 in namespace services-8778
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8778 to expose endpoints map[]
Feb  5 14:28:55.035: INFO: successfully validated that service endpoint-test2 in namespace services-8778 exposes endpoints map[] (1.047818788s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:28:55.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8778" for this suite.
Feb  5 14:29:17.639: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:29:17.702: INFO: namespace services-8778 deletion completed in 22.089288745s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:40.556 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
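The endpoint test above creates a selector-based Service, then adds and removes pods and waits for the Endpoints object to track them. A sketch of the two objects involved (the label key is an assumption; the e2e test wires the selector to its pods in the same way):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: endpoint-test2
spec:
  selector:
    name: endpoint-test2       # assumed label; pods carrying it become endpoints
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  labels:
    name: endpoint-test2       # matching label puts pod1:[80] into the Endpoints
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
    ports:
    - containerPort: 80
```

Deleting `pod1` removes its address from the Endpoints, which is exactly the transition the log validates (`map[pod1:[80] pod2:[80]]` → `map[pod2:[80]]` → `map[]`).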
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:29:17.703: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  5 14:29:17.891: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3669e981-aed6-418c-8583-14b69d510d50" in namespace "downward-api-2430" to be "success or failure"
Feb  5 14:29:17.900: INFO: Pod "downwardapi-volume-3669e981-aed6-418c-8583-14b69d510d50": Phase="Pending", Reason="", readiness=false. Elapsed: 8.507836ms
Feb  5 14:29:19.909: INFO: Pod "downwardapi-volume-3669e981-aed6-418c-8583-14b69d510d50": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017903877s
Feb  5 14:29:21.917: INFO: Pod "downwardapi-volume-3669e981-aed6-418c-8583-14b69d510d50": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025513822s
Feb  5 14:29:23.927: INFO: Pod "downwardapi-volume-3669e981-aed6-418c-8583-14b69d510d50": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036407342s
Feb  5 14:29:25.937: INFO: Pod "downwardapi-volume-3669e981-aed6-418c-8583-14b69d510d50": Phase="Pending", Reason="", readiness=false. Elapsed: 8.045756443s
Feb  5 14:29:27.945: INFO: Pod "downwardapi-volume-3669e981-aed6-418c-8583-14b69d510d50": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.053554729s
STEP: Saw pod success
Feb  5 14:29:27.945: INFO: Pod "downwardapi-volume-3669e981-aed6-418c-8583-14b69d510d50" satisfied condition "success or failure"
Feb  5 14:29:27.949: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-3669e981-aed6-418c-8583-14b69d510d50 container client-container: 
STEP: delete the pod
Feb  5 14:29:28.028: INFO: Waiting for pod downwardapi-volume-3669e981-aed6-418c-8583-14b69d510d50 to disappear
Feb  5 14:29:28.050: INFO: Pod downwardapi-volume-3669e981-aed6-418c-8583-14b69d510d50 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:29:28.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2430" for this suite.
Feb  5 14:29:34.238: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:29:34.423: INFO: namespace downward-api-2430 deletion completed in 6.367892075s

• [SLOW TEST:16.721 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
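The Downward API test above projects the container's CPU limit into a file via a `downwardAPI` volume. A minimal sketch of the pod it exercises (names and mount path are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-pod     # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "1"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
          divisor: 1m              # report the limit in millicores
```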
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:29:34.424: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-716e2c35-8b48-446a-b5e3-d61aadc7302c
STEP: Creating secret with name s-test-opt-upd-9b5739e6-8e34-4642-9909-d8886ebd34ec
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-716e2c35-8b48-446a-b5e3-d61aadc7302c
STEP: Updating secret s-test-opt-upd-9b5739e6-8e34-4642-9909-d8886ebd34ec
STEP: Creating secret with name s-test-opt-create-4024b147-14a3-475e-95d2-18a9ceb56a48
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:31:07.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4385" for this suite.
Feb  5 14:31:29.308: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:31:29.488: INFO: namespace secrets-4385 deletion completed in 22.204594476s

• [SLOW TEST:115.064 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
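The optional-secrets test above relies on `optional: true` on a secret volume source, which lets the pod start (and later observe updates) even when the referenced Secret does not exist yet. A sketch, assuming an illustrative pod and secret name:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-volume          # illustrative name
spec:
  containers:
  - name: creates-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: s-test-opt-create  # may be absent at pod creation time
      optional: true                 # pod starts even if the Secret is missing
```

Once the Secret is created or updated, the kubelet eventually syncs the volume contents, which is the "waiting to observe update in volume" step in the log.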
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:31:29.489: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: executing a command with run --rm and attach with stdin
Feb  5 14:31:29.681: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8859 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Feb  5 14:31:42.610: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0205 14:31:40.859838    2950 log.go:172] (0xc00013cf20) (0xc0005c0460) Create stream\nI0205 14:31:40.860305    2950 log.go:172] (0xc00013cf20) (0xc0005c0460) Stream added, broadcasting: 1\nI0205 14:31:40.949876    2950 log.go:172] (0xc00013cf20) Reply frame received for 1\nI0205 14:31:40.950074    2950 log.go:172] (0xc00013cf20) (0xc0005299a0) Create stream\nI0205 14:31:40.950099    2950 log.go:172] (0xc00013cf20) (0xc0005299a0) Stream added, broadcasting: 3\nI0205 14:31:40.955833    2950 log.go:172] (0xc00013cf20) Reply frame received for 3\nI0205 14:31:40.956142    2950 log.go:172] (0xc00013cf20) (0xc0007040a0) Create stream\nI0205 14:31:40.956190    2950 log.go:172] (0xc00013cf20) (0xc0007040a0) Stream added, broadcasting: 5\nI0205 14:31:40.959962    2950 log.go:172] (0xc00013cf20) Reply frame received for 5\nI0205 14:31:40.960091    2950 log.go:172] (0xc00013cf20) (0xc0005c0140) Create stream\nI0205 14:31:40.960125    2950 log.go:172] (0xc00013cf20) (0xc0005c0140) Stream added, broadcasting: 7\nI0205 14:31:40.964504    2950 log.go:172] (0xc00013cf20) Reply frame received for 7\nI0205 14:31:40.965203    2950 log.go:172] (0xc0005299a0) (3) Writing data frame\nI0205 14:31:40.965650    2950 log.go:172] (0xc0005299a0) (3) Writing data frame\nI0205 14:31:40.994692    2950 log.go:172] (0xc00013cf20) Data frame received for 5\nI0205 14:31:40.994774    2950 log.go:172] (0xc0007040a0) (5) Data frame handling\nI0205 14:31:40.994797    2950 log.go:172] (0xc0007040a0) (5) Data frame sent\nI0205 14:31:41.003832    2950 log.go:172] (0xc00013cf20) Data frame received for 5\nI0205 14:31:41.003892    2950 log.go:172] (0xc0007040a0) (5) Data frame handling\nI0205 14:31:41.003912    2950 log.go:172] (0xc0007040a0) (5) Data frame sent\nI0205 14:31:42.557449    2950 log.go:172] (0xc00013cf20) (0xc0005299a0) Stream removed, broadcasting: 3\nI0205 14:31:42.557590    2950 log.go:172] (0xc00013cf20) Data frame received for 1\nI0205 14:31:42.557628    2950 log.go:172] (0xc0005c0460) (1) Data frame handling\nI0205 14:31:42.557654    2950 log.go:172] (0xc0005c0460) (1) Data frame sent\nI0205 14:31:42.557688    2950 log.go:172] (0xc00013cf20) (0xc0005c0460) Stream removed, broadcasting: 1\nI0205 14:31:42.557776    2950 log.go:172] (0xc00013cf20) (0xc0007040a0) Stream removed, broadcasting: 5\nI0205 14:31:42.557911    2950 log.go:172] (0xc00013cf20) (0xc0005c0140) Stream removed, broadcasting: 7\nI0205 14:31:42.558025    2950 log.go:172] (0xc00013cf20) (0xc0005c0460) Stream removed, broadcasting: 1\nI0205 14:31:42.558045    2950 log.go:172] (0xc00013cf20) (0xc0005299a0) Stream removed, broadcasting: 3\nI0205 14:31:42.558067    2950 log.go:172] (0xc00013cf20) (0xc0007040a0) Stream removed, broadcasting: 5\nI0205 14:31:42.558087    2950 log.go:172] (0xc00013cf20) (0xc0005c0140) Stream removed, broadcasting: 7\nI0205 14:31:42.558290    2950 log.go:172] (0xc00013cf20) Go away received\n"
Feb  5 14:31:42.610: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:31:44.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8859" for this suite.
Feb  5 14:31:50.654: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:31:50.777: INFO: namespace kubectl-8859 deletion completed in 6.142759039s

• [SLOW TEST:21.288 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
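The stderr above notes that `kubectl run --generator=job/v1` is deprecated. The same Job the command generates can be written declaratively; a sketch (the `--rm` attach-and-delete behavior is a kubectl feature, so with a manifest you delete the Job yourself afterwards):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-rm-busybox-job
spec:
  template:
    spec:
      restartPolicy: OnFailure
      containers:
      - name: e2e-test-rm-busybox-job
        image: docker.io/library/busybox:1.29
        command: ["sh", "-c", "cat && echo 'stdin closed'"]
        stdin: true                 # container reads from an attached stdin
```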
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:31:50.779: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Feb  5 14:31:50.860: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb  5 14:31:50.897: INFO: Waiting for terminating namespaces to be deleted...
Feb  5 14:31:50.900: INFO: 
Logging pods the kubelet thinks are on node iruya-node before test
Feb  5 14:31:50.913: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded)
Feb  5 14:31:50.913: INFO: 	Container kube-proxy ready: true, restart count 0
Feb  5 14:31:50.913: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Feb  5 14:31:50.913: INFO: 	Container weave ready: true, restart count 0
Feb  5 14:31:50.913: INFO: 	Container weave-npc ready: true, restart count 0
Feb  5 14:31:50.913: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Feb  5 14:31:50.927: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded)
Feb  5 14:31:50.927: INFO: 	Container kube-apiserver ready: true, restart count 0
Feb  5 14:31:50.927: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded)
Feb  5 14:31:50.927: INFO: 	Container kube-scheduler ready: true, restart count 13
Feb  5 14:31:50.927: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Feb  5 14:31:50.927: INFO: 	Container coredns ready: true, restart count 0
Feb  5 14:31:50.927: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Feb  5 14:31:50.927: INFO: 	Container coredns ready: true, restart count 0
Feb  5 14:31:50.927: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded)
Feb  5 14:31:50.927: INFO: 	Container etcd ready: true, restart count 0
Feb  5 14:31:50.927: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Feb  5 14:31:50.927: INFO: 	Container weave ready: true, restart count 0
Feb  5 14:31:50.927: INFO: 	Container weave-npc ready: true, restart count 0
Feb  5 14:31:50.927: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded)
Feb  5 14:31:50.927: INFO: 	Container kube-controller-manager ready: true, restart count 20
Feb  5 14:31:50.927: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded)
Feb  5 14:31:50.927: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-05556137-c575-4209-bf39-e973f76b4e35 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-05556137-c575-4209-bf39-e973f76b4e35 off the node iruya-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-05556137-c575-4209-bf39-e973f76b4e35
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:32:09.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-303" for this suite.
Feb  5 14:32:27.304: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:32:27.384: INFO: namespace sched-pred-303 deletion completed in 18.139903767s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:36.605 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
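The NodeSelector test above labels a node (here `kubernetes.io/e2e-05556137-c575-4209-bf39-e973f76b4e35=42`, from the log) and relaunches the pod with a matching `nodeSelector`. A sketch of the relaunched pod (pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: with-labels                # illustrative name
spec:
  nodeSelector:
    kubernetes.io/e2e-05556137-c575-4209-bf39-e973f76b4e35: "42"
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
```

The scheduler only places the pod on a node whose labels include every key/value pair in `nodeSelector`, which is the predicate being validated.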
SSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:32:27.384: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Feb  5 14:32:34.583: INFO: 10 pods remaining
Feb  5 14:32:34.583: INFO: 0 pods have nil DeletionTimestamp
Feb  5 14:32:34.583: INFO: 
STEP: Gathering metrics
W0205 14:32:35.436781       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  5 14:32:35.437: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:32:35.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5724" for this suite.
Feb  5 14:32:45.723: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:32:45.825: INFO: namespace gc-5724 deletion completed in 10.38191934s

• [SLOW TEST:18.441 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
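The garbage-collector test above deletes the ReplicationController with foreground cascading deletion, which keeps the RC (with a deletionTimestamp set) until all of its pods are gone — hence the "10 pods remaining" line while the RC still exists. A sketch of the DeleteOptions body sent with the DELETE request (serialization assumed):

```yaml
# Body for DELETE /api/v1/namespaces/<ns>/replicationcontrollers/<name>
apiVersion: v1
kind: DeleteOptions
propagationPolicy: Foreground   # owner waits for dependents; Background deletes
                                # the owner immediately, Orphan leaves pods behind
```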
SSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:32:45.826: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-498t4 in namespace proxy-5407
I0205 14:32:45.963846       8 runners.go:180] Created replication controller with name: proxy-service-498t4, namespace: proxy-5407, replica count: 1
I0205 14:32:47.014529       8 runners.go:180] proxy-service-498t4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0205 14:32:48.014901       8 runners.go:180] proxy-service-498t4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0205 14:32:49.015377       8 runners.go:180] proxy-service-498t4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0205 14:32:50.015980       8 runners.go:180] proxy-service-498t4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0205 14:32:51.016329       8 runners.go:180] proxy-service-498t4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0205 14:32:52.016601       8 runners.go:180] proxy-service-498t4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0205 14:32:53.016860       8 runners.go:180] proxy-service-498t4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0205 14:32:54.017336       8 runners.go:180] proxy-service-498t4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0205 14:32:55.017695       8 runners.go:180] proxy-service-498t4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0205 14:32:56.018345       8 runners.go:180] proxy-service-498t4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0205 14:32:57.018872       8 runners.go:180] proxy-service-498t4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0205 14:32:58.019170       8 runners.go:180] proxy-service-498t4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0205 14:32:59.019665       8 runners.go:180] proxy-service-498t4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0205 14:33:00.020181       8 runners.go:180] proxy-service-498t4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0205 14:33:01.020812       8 runners.go:180] proxy-service-498t4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0205 14:33:02.021143       8 runners.go:180] proxy-service-498t4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0205 14:33:03.021430       8 runners.go:180] proxy-service-498t4 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb  5 14:33:03.029: INFO: setup took 17.120338195s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Feb  5 14:33:03.063: INFO: (0) /api/v1/namespaces/proxy-5407/pods/proxy-service-498t4-8xqdv/proxy/: test (200; 34.229936ms)
Feb  5 14:33:03.064: INFO: (0) /api/v1/namespaces/proxy-5407/pods/http:proxy-service-498t4-8xqdv:1080/proxy/: ... (200; 34.41116ms)
Feb  5 14:33:03.063: INFO: (0) /api/v1/namespaces/proxy-5407/pods/proxy-service-498t4-8xqdv:1080/proxy/: test<... (200; 33.971848ms)
Feb  5 14:33:03.064: INFO: (0) /api/v1/namespaces/proxy-5407/pods/http:proxy-service-498t4-8xqdv:162/proxy/: bar (200; 34.975045ms)
Feb  5 14:33:03.065: INFO: (0) /api/v1/namespaces/proxy-5407/services/http:proxy-service-498t4:portname2/proxy/: bar (200; 35.479734ms)
Feb  5 14:33:03.066: INFO: (0) /api/v1/namespaces/proxy-5407/pods/proxy-service-498t4-8xqdv:162/proxy/: bar (200; 36.258269ms)
Feb  5 14:33:03.071: INFO: (0) /api/v1/namespaces/proxy-5407/services/proxy-service-498t4:portname1/proxy/: foo (200; 42.217281ms)
Feb  5 14:33:03.072: INFO: (0) /api/v1/namespaces/proxy-5407/services/proxy-service-498t4:portname2/proxy/: bar (200; 42.643544ms)
Feb  5 14:33:03.073: INFO: (0) /api/v1/namespaces/proxy-5407/pods/proxy-service-498t4-8xqdv:160/proxy/: foo (200; 43.277669ms)
Feb  5 14:33:03.073: INFO: (0) /api/v1/namespaces/proxy-5407/pods/http:proxy-service-498t4-8xqdv:160/proxy/: foo (200; 43.386186ms)
Feb  5 14:33:03.073: INFO: (0) /api/v1/namespaces/proxy-5407/services/http:proxy-service-498t4:portname1/proxy/: foo (200; 43.466447ms)
Feb  5 14:33:03.076: INFO: (0) /api/v1/namespaces/proxy-5407/services/https:proxy-service-498t4:tlsportname1/proxy/: tls baz (200; 46.939504ms)
Feb  5 14:33:03.078: INFO: (0) /api/v1/namespaces/proxy-5407/pods/https:proxy-service-498t4-8xqdv:443/proxy/: test (200; 23.081897ms)
Feb  5 14:33:03.104: INFO: (1) /api/v1/namespaces/proxy-5407/services/http:proxy-service-498t4:portname2/proxy/: bar (200; 23.839278ms)
Feb  5 14:33:03.105: INFO: (1) /api/v1/namespaces/proxy-5407/services/https:proxy-service-498t4:tlsportname2/proxy/: tls qux (200; 24.487756ms)
Feb  5 14:33:03.105: INFO: (1) /api/v1/namespaces/proxy-5407/pods/proxy-service-498t4-8xqdv:160/proxy/: foo (200; 25.173811ms)
Feb  5 14:33:03.105: INFO: (1) /api/v1/namespaces/proxy-5407/pods/http:proxy-service-498t4-8xqdv:162/proxy/: bar (200; 24.909451ms)
Feb  5 14:33:03.106: INFO: (1) /api/v1/namespaces/proxy-5407/pods/https:proxy-service-498t4-8xqdv:460/proxy/: tls baz (200; 25.685835ms)
Feb  5 14:33:03.106: INFO: (1) /api/v1/namespaces/proxy-5407/pods/proxy-service-498t4-8xqdv:1080/proxy/: test<... (200; 25.647358ms)
Feb  5 14:33:03.106: INFO: (1) /api/v1/namespaces/proxy-5407/pods/http:proxy-service-498t4-8xqdv:1080/proxy/: ... (200; 26.114755ms)
Feb  5 14:33:03.109: INFO: (1) /api/v1/namespaces/proxy-5407/pods/https:proxy-service-498t4-8xqdv:462/proxy/: tls qux (200; 29.091169ms)
Feb  5 14:33:03.110: INFO: (1) /api/v1/namespaces/proxy-5407/pods/https:proxy-service-498t4-8xqdv:443/proxy/: test (200; 16.133317ms)
Feb  5 14:33:03.128: INFO: (2) /api/v1/namespaces/proxy-5407/pods/proxy-service-498t4-8xqdv:160/proxy/: foo (200; 16.381556ms)
Feb  5 14:33:03.129: INFO: (2) /api/v1/namespaces/proxy-5407/pods/http:proxy-service-498t4-8xqdv:1080/proxy/: ... (200; 16.643311ms)
Feb  5 14:33:03.129: INFO: (2) /api/v1/namespaces/proxy-5407/pods/https:proxy-service-498t4-8xqdv:443/proxy/: test<... (200; 19.907955ms)
Feb  5 14:33:03.132: INFO: (2) /api/v1/namespaces/proxy-5407/services/http:proxy-service-498t4:portname2/proxy/: bar (200; 20.807894ms)
Feb  5 14:33:03.134: INFO: (2) /api/v1/namespaces/proxy-5407/services/http:proxy-service-498t4:portname1/proxy/: foo (200; 21.709685ms)
Feb  5 14:33:03.134: INFO: (2) /api/v1/namespaces/proxy-5407/services/https:proxy-service-498t4:tlsportname2/proxy/: tls qux (200; 21.709914ms)
Feb  5 14:33:03.137: INFO: (2) /api/v1/namespaces/proxy-5407/services/proxy-service-498t4:portname1/proxy/: foo (200; 25.551053ms)
Feb  5 14:33:03.138: INFO: (2) /api/v1/namespaces/proxy-5407/services/https:proxy-service-498t4:tlsportname1/proxy/: tls baz (200; 25.715761ms)
Feb  5 14:33:03.139: INFO: (2) /api/v1/namespaces/proxy-5407/services/proxy-service-498t4:portname2/proxy/: bar (200; 27.031736ms)
Feb  5 14:33:03.158: INFO: (3) /api/v1/namespaces/proxy-5407/pods/proxy-service-498t4-8xqdv:1080/proxy/: test<... (200; 17.55461ms)
Feb  5 14:33:03.158: INFO: (3) /api/v1/namespaces/proxy-5407/pods/https:proxy-service-498t4-8xqdv:443/proxy/: test (200; 18.152832ms)
Feb  5 14:33:03.158: INFO: (3) /api/v1/namespaces/proxy-5407/pods/http:proxy-service-498t4-8xqdv:160/proxy/: foo (200; 17.904842ms)
Feb  5 14:33:03.158: INFO: (3) /api/v1/namespaces/proxy-5407/pods/http:proxy-service-498t4-8xqdv:162/proxy/: bar (200; 17.953093ms)
Feb  5 14:33:03.159: INFO: (3) /api/v1/namespaces/proxy-5407/services/https:proxy-service-498t4:tlsportname2/proxy/: tls qux (200; 18.174632ms)
Feb  5 14:33:03.159: INFO: (3) /api/v1/namespaces/proxy-5407/services/https:proxy-service-498t4:tlsportname1/proxy/: tls baz (200; 18.364669ms)
Feb  5 14:33:03.160: INFO: (3) /api/v1/namespaces/proxy-5407/services/proxy-service-498t4:portname1/proxy/: foo (200; 20.399193ms)
Feb  5 14:33:03.160: INFO: (3) /api/v1/namespaces/proxy-5407/services/proxy-service-498t4:portname2/proxy/: bar (200; 20.439154ms)
Feb  5 14:33:03.160: INFO: (3) /api/v1/namespaces/proxy-5407/pods/http:proxy-service-498t4-8xqdv:1080/proxy/: ... (200; 21.333701ms)
Feb  5 14:33:03.161: INFO: (3) /api/v1/namespaces/proxy-5407/pods/proxy-service-498t4-8xqdv:160/proxy/: foo (200; 20.277555ms)
Feb  5 14:33:03.162: INFO: (3) /api/v1/namespaces/proxy-5407/pods/https:proxy-service-498t4-8xqdv:462/proxy/: tls qux (200; 20.847095ms)
Feb  5 14:33:03.163: INFO: (3) /api/v1/namespaces/proxy-5407/services/http:proxy-service-498t4:portname2/proxy/: bar (200; 21.879004ms)
Feb  5 14:33:03.181: INFO: (4) /api/v1/namespaces/proxy-5407/pods/proxy-service-498t4-8xqdv:160/proxy/: foo (200; 17.98148ms)
Feb  5 14:33:03.181: INFO: (4) /api/v1/namespaces/proxy-5407/pods/https:proxy-service-498t4-8xqdv:462/proxy/: tls qux (200; 18.223779ms)
Feb  5 14:33:03.182: INFO: (4) /api/v1/namespaces/proxy-5407/services/proxy-service-498t4:portname1/proxy/: foo (200; 19.292883ms)
Feb  5 14:33:03.182: INFO: (4) /api/v1/namespaces/proxy-5407/pods/https:proxy-service-498t4-8xqdv:443/proxy/: ... (200; 21.595615ms)
Feb  5 14:33:03.185: INFO: (4) /api/v1/namespaces/proxy-5407/pods/https:proxy-service-498t4-8xqdv:460/proxy/: tls baz (200; 21.874006ms)
Feb  5 14:33:03.185: INFO: (4) /api/v1/namespaces/proxy-5407/services/proxy-service-498t4:portname2/proxy/: bar (200; 21.957106ms)
Feb  5 14:33:03.185: INFO: (4) /api/v1/namespaces/proxy-5407/pods/http:proxy-service-498t4-8xqdv:160/proxy/: foo (200; 21.689866ms)
Feb  5 14:33:03.185: INFO: (4) /api/v1/namespaces/proxy-5407/pods/proxy-service-498t4-8xqdv/proxy/: test (200; 21.983896ms)
Feb  5 14:33:03.185: INFO: (4) /api/v1/namespaces/proxy-5407/pods/proxy-service-498t4-8xqdv:1080/proxy/: test<... (200; 21.557162ms)
Feb  5 14:33:03.185: INFO: (4) /api/v1/namespaces/proxy-5407/services/http:proxy-service-498t4:portname2/proxy/: bar (200; 22.170181ms)
Feb  5 14:33:03.186: INFO: (4) /api/v1/namespaces/proxy-5407/pods/proxy-service-498t4-8xqdv:162/proxy/: bar (200; 23.37819ms)
Feb  5 14:33:03.189: INFO: (4) /api/v1/namespaces/proxy-5407/services/http:proxy-service-498t4:portname1/proxy/: foo (200; 26.374537ms)
Feb  5 14:33:03.189: INFO: (4) /api/v1/namespaces/proxy-5407/services/https:proxy-service-498t4:tlsportname2/proxy/: tls qux (200; 26.043635ms)
Feb  5 14:33:03.200: INFO: (5) /api/v1/namespaces/proxy-5407/pods/http:proxy-service-498t4-8xqdv:160/proxy/: foo (200; 10.676341ms)
Feb  5 14:33:03.200: INFO: (5) /api/v1/namespaces/proxy-5407/pods/proxy-service-498t4-8xqdv:162/proxy/: bar (200; 10.620829ms)
Feb  5 14:33:03.200: INFO: (5) /api/v1/namespaces/proxy-5407/pods/proxy-service-498t4-8xqdv:1080/proxy/: test<... (200; 10.782072ms)
Feb  5 14:33:03.201: INFO: (5) /api/v1/namespaces/proxy-5407/pods/https:proxy-service-498t4-8xqdv:443/proxy/: test (200; 12.474285ms)
Feb  5 14:33:03.202: INFO: (5) /api/v1/namespaces/proxy-5407/pods/proxy-service-498t4-8xqdv:160/proxy/: foo (200; 12.664769ms)
Feb  5 14:33:03.202: INFO: (5) /api/v1/namespaces/proxy-5407/services/https:proxy-service-498t4:tlsportname2/proxy/: tls qux (200; 12.807705ms)
Feb  5 14:33:03.202: INFO: (5) /api/v1/namespaces/proxy-5407/services/proxy-service-498t4:portname1/proxy/: foo (200; 12.941593ms)
Feb  5 14:33:03.203: INFO: (5) /api/v1/namespaces/proxy-5407/pods/http:proxy-service-498t4-8xqdv:162/proxy/: bar (200; 13.442909ms)
Feb  5 14:33:03.203: INFO: (5) /api/v1/namespaces/proxy-5407/services/https:proxy-service-498t4:tlsportname1/proxy/: tls baz (200; 13.380276ms)
Feb  5 14:33:03.203: INFO: (5) /api/v1/namespaces/proxy-5407/services/http:proxy-service-498t4:portname2/proxy/: bar (200; 13.651568ms)
Feb  5 14:33:03.203: INFO: (5) /api/v1/namespaces/proxy-5407/pods/http:proxy-service-498t4-8xqdv:1080/proxy/: ... (200; 13.745225ms)
Feb  5 14:33:03.205: INFO: (5) /api/v1/namespaces/proxy-5407/services/http:proxy-service-498t4:portname1/proxy/: foo (200; 15.396733ms)
Feb  5 14:33:03.205: INFO: (5) /api/v1/namespaces/proxy-5407/services/proxy-service-498t4:portname2/proxy/: bar (200; 15.639809ms)
Feb  5 14:33:03.213: INFO: (6) /api/v1/namespaces/proxy-5407/pods/proxy-service-498t4-8xqdv:1080/proxy/: test<... (200; 8.147599ms)
Feb  5 14:33:03.217: INFO: (6) /api/v1/namespaces/proxy-5407/services/https:proxy-service-498t4:tlsportname1/proxy/: tls baz (200; 11.906061ms)
Feb  5 14:33:03.219: INFO: (6) /api/v1/namespaces/proxy-5407/services/proxy-service-498t4:portname1/proxy/: foo (200; 13.327609ms)
Feb  5 14:33:03.219: INFO: (6) /api/v1/namespaces/proxy-5407/pods/proxy-service-498t4-8xqdv/proxy/: test (200; 13.348641ms)
Feb  5 14:33:03.219: INFO: (6) /api/v1/namespaces/proxy-5407/pods/https:proxy-service-498t4-8xqdv:443/proxy/: ... (200; 13.793194ms)
Feb  5 14:33:03.219: INFO: (6) /api/v1/namespaces/proxy-5407/pods/http:proxy-service-498t4-8xqdv:160/proxy/: foo (200; 13.878068ms)
Feb  5 14:33:03.220: INFO: (6) /api/v1/namespaces/proxy-5407/pods/http:proxy-service-498t4-8xqdv:162/proxy/: bar (200; 14.743468ms)
Feb  5 14:33:03.221: INFO: (6) /api/v1/namespaces/proxy-5407/pods/https:proxy-service-498t4-8xqdv:462/proxy/: tls qux (200; 15.203188ms)
Feb  5 14:33:03.221: INFO: (6) /api/v1/namespaces/proxy-5407/services/http:proxy-service-498t4:portname2/proxy/: bar (200; 15.377405ms)
Feb  5 14:33:03.221: INFO: (6) /api/v1/namespaces/proxy-5407/pods/proxy-service-498t4-8xqdv:160/proxy/: foo (200; 15.289727ms)
Feb  5 14:33:03.221: INFO: (6) /api/v1/namespaces/proxy-5407/services/https:proxy-service-498t4:tlsportname2/proxy/: tls qux (200; 15.250743ms)
Feb  5 14:33:03.221: INFO: (6) /api/v1/namespaces/proxy-5407/pods/proxy-service-498t4-8xqdv:162/proxy/: bar (200; 15.717383ms)
Feb  5 14:33:03.221: INFO: (6) /api/v1/namespaces/proxy-5407/services/proxy-service-498t4:portname2/proxy/: bar (200; 15.791998ms)
Feb  5 14:33:03.221: INFO: (6) /api/v1/namespaces/proxy-5407/pods/https:proxy-service-498t4-8xqdv:460/proxy/: tls baz (200; 15.912052ms)
Feb  5 14:33:03.228: INFO: (6) /api/v1/namespaces/proxy-5407/services/http:proxy-service-498t4:portname1/proxy/: foo (200; 22.913865ms)
Feb  5 14:33:03.233: INFO: (7) /api/v1/namespaces/proxy-5407/pods/https:proxy-service-498t4-8xqdv:460/proxy/: tls baz (200; 4.585664ms)
Feb  5 14:33:03.233: INFO: (7) /api/v1/namespaces/proxy-5407/pods/http:proxy-service-498t4-8xqdv:160/proxy/: foo (200; 4.811628ms)
Feb  5 14:33:03.234: INFO: (7) /api/v1/namespaces/proxy-5407/pods/proxy-service-498t4-8xqdv/proxy/: test (200; 5.168149ms)
Feb  5 14:33:03.236: INFO: (7) /api/v1/namespaces/proxy-5407/pods/https:proxy-service-498t4-8xqdv:462/proxy/: tls qux (200; 7.620995ms)
Feb  5 14:33:03.239: INFO: (7) /api/v1/namespaces/proxy-5407/services/http:proxy-service-498t4:portname2/proxy/: bar (200; 10.021842ms)
Feb  5 14:33:03.239: INFO: (7) /api/v1/namespaces/proxy-5407/services/proxy-service-498t4:portname1/proxy/: foo (200; 10.123245ms)
Feb  5 14:33:03.239: INFO: (7) /api/v1/namespaces/proxy-5407/pods/http:proxy-service-498t4-8xqdv:162/proxy/: bar (200; 10.490213ms)
Feb  5 14:33:03.240: INFO: (7) /api/v1/namespaces/proxy-5407/pods/proxy-service-498t4-8xqdv:162/proxy/: bar (200; 11.398902ms)
Feb  5 14:33:03.240: INFO: (7) /api/v1/namespaces/proxy-5407/services/proxy-service-498t4:portname2/proxy/: bar (200; 11.768875ms)
Feb  5 14:33:03.240: INFO: (7) /api/v1/namespaces/proxy-5407/pods/proxy-service-498t4-8xqdv:160/proxy/: foo (200; 11.886348ms)
Feb  5 14:33:03.240: INFO: (7) /api/v1/namespaces/proxy-5407/services/http:proxy-service-498t4:portname1/proxy/: foo (200; 11.918212ms)
Feb  5 14:33:03.240: INFO: (7) /api/v1/namespaces/proxy-5407/services/https:proxy-service-498t4:tlsportname2/proxy/: tls qux (200; 11.863978ms)
Feb  5 14:33:03.240: INFO: (7) /api/v1/namespaces/proxy-5407/services/https:proxy-service-498t4:tlsportname1/proxy/: tls baz (200; 11.827326ms)
Feb  5 14:33:03.240: INFO: (7) /api/v1/namespaces/proxy-5407/pods/http:proxy-service-498t4-8xqdv:1080/proxy/: ... (200; 11.843711ms)
Feb  5 14:33:03.240: INFO: (7) /api/v1/namespaces/proxy-5407/pods/proxy-service-498t4-8xqdv:1080/proxy/: test<... (200; 11.886528ms)
Feb  5 14:33:03.241: INFO: (7) /api/v1/namespaces/proxy-5407/pods/https:proxy-service-498t4-8xqdv:443/proxy/: test (200; 15.196614ms)
Feb  5 14:33:03.257: INFO: (8) /api/v1/namespaces/proxy-5407/pods/https:proxy-service-498t4-8xqdv:462/proxy/: tls qux (200; 15.305458ms)
Feb  5 14:33:03.257: INFO: (8) /api/v1/namespaces/proxy-5407/pods/https:proxy-service-498t4-8xqdv:443/proxy/: test<... (200; 15.975859ms)
Feb  5 14:33:03.257: INFO: (8) /api/v1/namespaces/proxy-5407/pods/http:proxy-service-498t4-8xqdv:1080/proxy/: ... (200; 15.856221ms)
Feb  5 14:33:03.257: INFO: (8) /api/v1/namespaces/proxy-5407/pods/http:proxy-service-498t4-8xqdv:160/proxy/: foo (200; 15.931995ms)
Feb  5 14:33:03.258: INFO: (8) /api/v1/namespaces/proxy-5407/pods/proxy-service-498t4-8xqdv:160/proxy/: foo (200; 16.140889ms)
Feb  5 14:33:03.263: INFO: (9) /api/v1/namespaces/proxy-5407/pods/http:proxy-service-498t4-8xqdv:1080/proxy/: ... (200; 5.131746ms)
Feb  5 14:33:03.263: INFO: (9) /api/v1/namespaces/proxy-5407/pods/proxy-service-498t4-8xqdv:160/proxy/: foo (200; 5.114704ms)
Feb  5 14:33:03.264: INFO: (9) /api/v1/namespaces/proxy-5407/pods/https:proxy-service-498t4-8xqdv:443/proxy/: test<... (200; 7.467528ms)
Feb  5 14:33:03.266: INFO: (9) /api/v1/namespaces/proxy-5407/pods/https:proxy-service-498t4-8xqdv:462/proxy/: tls qux (200; 7.988445ms)
Feb  5 14:33:03.266: INFO: (9) /api/v1/namespaces/proxy-5407/pods/http:proxy-service-498t4-8xqdv:160/proxy/: foo (200; 8.111826ms)
Feb  5 14:33:03.266: INFO: (9) /api/v1/namespaces/proxy-5407/pods/proxy-service-498t4-8xqdv:162/proxy/: bar (200; 8.084001ms)
Feb  5 14:33:03.266: INFO: (9) /api/v1/namespaces/proxy-5407/services/proxy-service-498t4:portname2/proxy/: bar (200; 8.425083ms)
Feb  5 14:33:03.266: INFO: (9) /api/v1/namespaces/proxy-5407/pods/proxy-service-498t4-8xqdv/proxy/: test (200; 8.673317ms)
Feb  5 14:33:03.267: INFO: (9) /api/v1/namespaces/proxy-5407/pods/https:proxy-service-498t4-8xqdv:460/proxy/: tls baz (200; 9.102849ms)
Feb  5 14:33:03.268: INFO: (9) /api/v1/namespaces/proxy-5407/pods/http:proxy-service-498t4-8xqdv:162/proxy/: bar (200; 10.000047ms)
Feb  5 14:33:03.269: INFO: (9) /api/v1/namespaces/proxy-5407/services/https:proxy-service-498t4:tlsportname2/proxy/: tls qux (200; 11.299374ms)
Feb  5 14:33:03.270: INFO: (9) /api/v1/namespaces/proxy-5407/services/https:proxy-service-498t4:tlsportname1/proxy/: tls baz (200; 11.963711ms)
Feb  5 14:33:03.270: INFO: (9) /api/v1/namespaces/proxy-5407/services/http:proxy-service-498t4:portname1/proxy/: foo (200; 12.033813ms)
Feb  5 14:33:03.270: INFO: (9) /api/v1/namespaces/proxy-5407/services/proxy-service-498t4:portname1/proxy/: foo (200; 12.40221ms)
Feb  5 14:33:03.272: INFO: (9) /api/v1/namespaces/proxy-5407/services/http:proxy-service-498t4:portname2/proxy/: bar (200; 13.886157ms)
Feb  5 14:33:03.284: INFO: (10) /api/v1/namespaces/proxy-5407/pods/https:proxy-service-498t4-8xqdv:460/proxy/: tls baz (200; 12.083443ms)
Feb  5 14:33:03.285: INFO: (10) /api/v1/namespaces/proxy-5407/pods/http:proxy-service-498t4-8xqdv:1080/proxy/: ... (200; 12.797767ms)
Feb  5 14:33:03.285: INFO: (10) /api/v1/namespaces/proxy-5407/services/https:proxy-service-498t4:tlsportname2/proxy/: tls qux (200; 13.483569ms)
Feb  5 14:33:03.285: INFO: (10) /api/v1/namespaces/proxy-5407/pods/https:proxy-service-498t4-8xqdv:462/proxy/: tls qux (200; 13.462914ms)
Feb  5 14:33:03.285: INFO: (10) /api/v1/namespaces/proxy-5407/services/proxy-service-498t4:portname1/proxy/: foo (200; 13.47598ms)
Feb  5 14:33:03.286: INFO: (10) /api/v1/namespaces/proxy-5407/pods/http:proxy-service-498t4-8xqdv:162/proxy/: bar (200; 14.173914ms)
Feb  5 14:33:03.286: INFO: (10) /api/v1/namespaces/proxy-5407/pods/proxy-service-498t4-8xqdv:162/proxy/: bar (200; 14.439649ms)
Feb  5 14:33:03.286: INFO: (10) /api/v1/namespaces/proxy-5407/pods/proxy-service-498t4-8xqdv/proxy/: test (200; 14.670171ms)
Feb  5 14:33:03.286: INFO: (10) /api/v1/namespaces/proxy-5407/pods/proxy-service-498t4-8xqdv:160/proxy/: foo (200; 14.636047ms)
Feb  5 14:33:03.287: INFO: (10) /api/v1/namespaces/proxy-5407/pods/proxy-service-498t4-8xqdv:1080/proxy/: test<... (200; 14.667198ms)
Feb  5 14:33:03.287: INFO: (10) /api/v1/namespaces/proxy-5407/services/http:proxy-service-498t4:portname2/proxy/: bar (200; 14.890763ms)
Feb  5 14:33:03.287: INFO: (10) /api/v1/namespaces/proxy-5407/pods/http:proxy-service-498t4-8xqdv:160/proxy/: foo (200; 14.933139ms)
Feb  5 14:33:03.287: INFO: (10) /api/v1/namespaces/proxy-5407/pods/https:proxy-service-498t4-8xqdv:443/proxy/: test<... (200; 11.01945ms)
Feb  5 14:33:03.299: INFO: (11) /api/v1/namespaces/proxy-5407/pods/http:proxy-service-498t4-8xqdv:1080/proxy/: ... (200; 11.165817ms)
Feb  5 14:33:03.301: INFO: (11) /api/v1/namespaces/proxy-5407/services/http:proxy-service-498t4:portname2/proxy/: bar (200; 13.653903ms)
Feb  5 14:33:03.301: INFO: (11) /api/v1/namespaces/proxy-5407/pods/http:proxy-service-498t4-8xqdv:160/proxy/: foo (200; 13.53694ms)
Feb  5 14:33:03.301: INFO: (11) /api/v1/namespaces/proxy-5407/services/http:proxy-service-498t4:portname1/proxy/: foo (200; 13.718544ms)
Feb  5 14:33:03.301: INFO: (11) /api/v1/namespaces/proxy-5407/pods/proxy-service-498t4-8xqdv/proxy/: test (200; 13.696062ms)
Feb  5 14:33:03.301: INFO: (11) /api/v1/namespaces/proxy-5407/services/https:proxy-service-498t4:tlsportname1/proxy/: tls baz (200; 13.756462ms)
Feb  5 14:33:03.302: INFO: (11) /api/v1/namespaces/proxy-5407/pods/https:proxy-service-498t4-8xqdv:443/proxy/: test (200; 10.130175ms)
Feb  5 14:33:03.312: INFO: (12) /api/v1/namespaces/proxy-5407/pods/http:proxy-service-498t4-8xqdv:160/proxy/: foo (200; 10.137755ms)
Feb  5 14:33:03.312: INFO: (12) /api/v1/namespaces/proxy-5407/pods/http:proxy-service-498t4-8xqdv:162/proxy/: bar (200; 10.450404ms)
Feb  5 14:33:03.312: INFO: (12) /api/v1/namespaces/proxy-5407/services/https:proxy-service-498t4:tlsportname2/proxy/: tls qux (200; 10.615388ms)
Feb  5 14:33:03.313: INFO: (12) /api/v1/namespaces/proxy-5407/services/https:proxy-service-498t4:tlsportname1/proxy/: tls baz (200; 10.893125ms)
Feb  5 14:33:03.313: INFO: (12) /api/v1/namespaces/proxy-5407/pods/https:proxy-service-498t4-8xqdv:462/proxy/: tls qux (200; 10.902258ms)
Feb  5 14:33:03.313: INFO: (12) /api/v1/namespaces/proxy-5407/pods/http:proxy-service-498t4-8xqdv:1080/proxy/: ... (200; 11.439975ms)
Feb  5 14:33:03.313: INFO: (12) /api/v1/namespaces/proxy-5407/pods/proxy-service-498t4-8xqdv:1080/proxy/: test<... (200; 11.733835ms)
Feb  5 14:33:03.314: INFO: (12) /api/v1/namespaces/proxy-5407/pods/proxy-service-498t4-8xqdv:162/proxy/: bar (200; 12.648818ms)
Feb  5 14:33:03.314: INFO: (12) /api/v1/namespaces/proxy-5407/pods/https:proxy-service-498t4-8xqdv:443/proxy/: ... (200; 10.821927ms)
Feb  5 14:33:03.329: INFO: (13) /api/v1/namespaces/proxy-5407/pods/https:proxy-service-498t4-8xqdv:460/proxy/: tls baz (200; 11.81686ms)
Feb  5 14:33:03.329: INFO: (13) /api/v1/namespaces/proxy-5407/pods/proxy-service-498t4-8xqdv/proxy/: test (200; 11.847273ms)
Feb  5 14:33:03.329: INFO: (13) /api/v1/namespaces/proxy-5407/pods/http:proxy-service-498t4-8xqdv:160/proxy/: foo (200; 12.205018ms)
Feb  5 14:33:03.329: INFO: (13) /api/v1/namespaces/proxy-5407/pods/https:proxy-service-498t4-8xqdv:462/proxy/: tls qux (200; 12.312122ms)
Feb  5 14:33:03.329: INFO: (13) /api/v1/namespaces/proxy-5407/pods/proxy-service-498t4-8xqdv:1080/proxy/: test<... (200; 12.36352ms)
Feb  5 14:33:03.330: INFO: (13) /api/v1/namespaces/proxy-5407/pods/proxy-service-498t4-8xqdv:162/proxy/: bar (200; 12.52001ms)
Feb  5 14:33:03.330: INFO: (13) /api/v1/namespaces/proxy-5407/pods/http:proxy-service-498t4-8xqdv:162/proxy/: bar (200; 12.511568ms)
Feb  5 14:33:03.330: INFO: (13) /api/v1/namespaces/proxy-5407/pods/https:proxy-service-498t4-8xqdv:443/proxy/: test<... (200; 12.651112ms)
Feb  5 14:33:03.344: INFO: (14) /api/v1/namespaces/proxy-5407/services/proxy-service-498t4:portname2/proxy/: bar (200; 12.566575ms)
Feb  5 14:33:03.345: INFO: (14) /api/v1/namespaces/proxy-5407/pods/https:proxy-service-498t4-8xqdv:462/proxy/: tls qux (200; 13.476122ms)
Feb  5 14:33:03.346: INFO: (14) /api/v1/namespaces/proxy-5407/pods/http:proxy-service-498t4-8xqdv:160/proxy/: foo (200; 13.948539ms)
Feb  5 14:33:03.346: INFO: (14) /api/v1/namespaces/proxy-5407/services/https:proxy-service-498t4:tlsportname1/proxy/: tls baz (200; 14.118494ms)
Feb  5 14:33:03.346: INFO: (14) /api/v1/namespaces/proxy-5407/pods/proxy-service-498t4-8xqdv:160/proxy/: foo (200; 14.031604ms)
Feb  5 14:33:03.346: INFO: (14) /api/v1/namespaces/proxy-5407/pods/proxy-service-498t4-8xqdv/proxy/: test (200; 14.712415ms)
Feb  5 14:33:03.346: INFO: (14) /api/v1/namespaces/proxy-5407/pods/proxy-service-498t4-8xqdv:162/proxy/: bar (200; 14.835065ms)
Feb  5 14:33:03.346: INFO: (14) /api/v1/namespaces/proxy-5407/services/https:proxy-service-498t4:tlsportname2/proxy/: tls qux (200; 14.79189ms)
Feb  5 14:33:03.347: INFO: (14) /api/v1/namespaces/proxy-5407/pods/http:proxy-service-498t4-8xqdv:162/proxy/: bar (200; 15.128071ms)
Feb  5 14:33:03.348: INFO: (14) /api/v1/namespaces/proxy-5407/services/proxy-service-498t4:portname1/proxy/: foo (200; 15.958809ms)
Feb  5 14:33:03.348: INFO: (14) /api/v1/namespaces/proxy-5407/services/http:proxy-service-498t4:portname2/proxy/: bar (200; 16.127909ms)
Feb  5 14:33:03.348: INFO: (14) /api/v1/namespaces/proxy-5407/services/http:proxy-service-498t4:portname1/proxy/: foo (200; 16.127239ms)
Feb  5 14:33:03.348: INFO: (14) /api/v1/namespaces/proxy-5407/pods/http:proxy-service-498t4-8xqdv:1080/proxy/: ... (200; 16.659312ms)
Feb  5 14:33:03.351: INFO: (14) /api/v1/namespaces/proxy-5407/pods/https:proxy-service-498t4-8xqdv:460/proxy/: tls baz (200; 18.85862ms)
Feb  5 14:33:03.351: INFO: (14) /api/v1/namespaces/proxy-5407/pods/https:proxy-service-498t4-8xqdv:443/proxy/: test<... (200; 8.791251ms)
Feb  5 14:33:03.360: INFO: (15) /api/v1/namespaces/proxy-5407/services/https:proxy-service-498t4:tlsportname1/proxy/: tls baz (200; 8.917462ms)
Feb  5 14:33:03.360: INFO: (15) /api/v1/namespaces/proxy-5407/pods/https:proxy-service-498t4-8xqdv:460/proxy/: tls baz (200; 9.330616ms)
Feb  5 14:33:03.360: INFO: (15) /api/v1/namespaces/proxy-5407/pods/proxy-service-498t4-8xqdv/proxy/: test (200; 9.021404ms)
Feb  5 14:33:03.360: INFO: (15) /api/v1/namespaces/proxy-5407/services/http:proxy-service-498t4:portname1/proxy/: foo (200; 9.742225ms)
Feb  5 14:33:03.360: INFO: (15) /api/v1/namespaces/proxy-5407/services/https:proxy-service-498t4:tlsportname2/proxy/: tls qux (200; 9.430785ms)
Feb  5 14:33:03.361: INFO: (15) /api/v1/namespaces/proxy-5407/pods/https:proxy-service-498t4-8xqdv:443/proxy/: ... (200; 9.840793ms)
Feb  5 14:33:03.361: INFO: (15) /api/v1/namespaces/proxy-5407/services/proxy-service-498t4:portname2/proxy/: bar (200; 9.782603ms)
Feb  5 14:33:03.361: INFO: (15) /api/v1/namespaces/proxy-5407/pods/http:proxy-service-498t4-8xqdv:162/proxy/: bar (200; 9.781719ms)
Feb  5 14:33:03.361: INFO: (15) /api/v1/namespaces/proxy-5407/pods/https:proxy-service-498t4-8xqdv:462/proxy/: tls qux (200; 9.821879ms)
Feb  5 14:33:03.367: INFO: (16) /api/v1/namespaces/proxy-5407/pods/https:proxy-service-498t4-8xqdv:462/proxy/: tls qux (200; 6.479808ms)
Feb  5 14:33:03.367: INFO: (16) /api/v1/namespaces/proxy-5407/pods/https:proxy-service-498t4-8xqdv:443/proxy/: ... (200; 6.493609ms)
Feb  5 14:33:03.371: INFO: (16) /api/v1/namespaces/proxy-5407/pods/proxy-service-498t4-8xqdv/proxy/: test (200; 10.371269ms)
Feb  5 14:33:03.371: INFO: (16) /api/v1/namespaces/proxy-5407/pods/http:proxy-service-498t4-8xqdv:160/proxy/: foo (200; 10.518943ms)
Feb  5 14:33:03.372: INFO: (16) /api/v1/namespaces/proxy-5407/pods/https:proxy-service-498t4-8xqdv:460/proxy/: tls baz (200; 11.004491ms)
Feb  5 14:33:03.372: INFO: (16) /api/v1/namespaces/proxy-5407/pods/proxy-service-498t4-8xqdv:160/proxy/: foo (200; 10.906715ms)
Feb  5 14:33:03.372: INFO: (16) /api/v1/namespaces/proxy-5407/pods/proxy-service-498t4-8xqdv:162/proxy/: bar (200; 11.229045ms)
Feb  5 14:33:03.372: INFO: (16) /api/v1/namespaces/proxy-5407/pods/proxy-service-498t4-8xqdv:1080/proxy/: test<... (200; 11.18859ms)
Feb  5 14:33:03.373: INFO: (16) /api/v1/namespaces/proxy-5407/services/http:proxy-service-498t4:portname2/proxy/: bar (200; 11.703348ms)
Feb  5 14:33:03.373: INFO: (16) /api/v1/namespaces/proxy-5407/pods/http:proxy-service-498t4-8xqdv:162/proxy/: bar (200; 11.810428ms)
Feb  5 14:33:03.376: INFO: (16) /api/v1/namespaces/proxy-5407/services/proxy-service-498t4:portname2/proxy/: bar (200; 14.842888ms)
Feb  5 14:33:03.376: INFO: (16) /api/v1/namespaces/proxy-5407/services/proxy-service-498t4:portname1/proxy/: foo (200; 15.191294ms)
Feb  5 14:33:03.377: INFO: (16) /api/v1/namespaces/proxy-5407/services/http:proxy-service-498t4:portname1/proxy/: foo (200; 15.803176ms)
Feb  5 14:33:03.377: INFO: (16) /api/v1/namespaces/proxy-5407/services/https:proxy-service-498t4:tlsportname1/proxy/: tls baz (200; 15.75998ms)
Feb  5 14:33:03.377: INFO: (16) /api/v1/namespaces/proxy-5407/services/https:proxy-service-498t4:tlsportname2/proxy/: tls qux (200; 16.042089ms)
Feb  5 14:33:03.384: INFO: (17) /api/v1/namespaces/proxy-5407/pods/https:proxy-service-498t4-8xqdv:460/proxy/: tls baz (200; 7.283399ms)
Feb  5 14:33:03.385: INFO: (17) /api/v1/namespaces/proxy-5407/pods/https:proxy-service-498t4-8xqdv:462/proxy/: tls qux (200; 7.475707ms)
Feb  5 14:33:03.391: INFO: (17) /api/v1/namespaces/proxy-5407/pods/proxy-service-498t4-8xqdv:160/proxy/: foo (200; 13.272372ms)
Feb  5 14:33:03.391: INFO: (17) /api/v1/namespaces/proxy-5407/pods/proxy-service-498t4-8xqdv/proxy/: test (200; 13.319968ms)
Feb  5 14:33:03.391: INFO: (17) /api/v1/namespaces/proxy-5407/pods/https:proxy-service-498t4-8xqdv:443/proxy/: ... (200; 13.44272ms)
Feb  5 14:33:03.391: INFO: (17) /api/v1/namespaces/proxy-5407/services/proxy-service-498t4:portname1/proxy/: foo (200; 13.379316ms)
Feb  5 14:33:03.391: INFO: (17) /api/v1/namespaces/proxy-5407/pods/proxy-service-498t4-8xqdv:1080/proxy/: test<... (200; 13.509827ms)
Feb  5 14:33:03.391: INFO: (17) /api/v1/namespaces/proxy-5407/services/proxy-service-498t4:portname2/proxy/: bar (200; 13.804869ms)
Feb  5 14:33:03.391: INFO: (17) /api/v1/namespaces/proxy-5407/services/https:proxy-service-498t4:tlsportname2/proxy/: tls qux (200; 13.896184ms)
Feb  5 14:33:03.392: INFO: (17) /api/v1/namespaces/proxy-5407/pods/http:proxy-service-498t4-8xqdv:160/proxy/: foo (200; 15.193017ms)
Feb  5 14:33:03.392: INFO: (17) /api/v1/namespaces/proxy-5407/services/https:proxy-service-498t4:tlsportname1/proxy/: tls baz (200; 15.320749ms)
Feb  5 14:33:03.399: INFO: (18) /api/v1/namespaces/proxy-5407/pods/http:proxy-service-498t4-8xqdv:162/proxy/: bar (200; 6.378755ms)
Feb  5 14:33:03.399: INFO: (18) /api/v1/namespaces/proxy-5407/pods/proxy-service-498t4-8xqdv:162/proxy/: bar (200; 6.272224ms)
Feb  5 14:33:03.400: INFO: (18) /api/v1/namespaces/proxy-5407/pods/proxy-service-498t4-8xqdv:1080/proxy/: test<... (200; 7.311288ms)
Feb  5 14:33:03.400: INFO: (18) /api/v1/namespaces/proxy-5407/pods/https:proxy-service-498t4-8xqdv:443/proxy/: ... (200; 7.722332ms)
Feb  5 14:33:03.401: INFO: (18) /api/v1/namespaces/proxy-5407/pods/https:proxy-service-498t4-8xqdv:460/proxy/: tls baz (200; 8.150819ms)
Feb  5 14:33:03.401: INFO: (18) /api/v1/namespaces/proxy-5407/pods/http:proxy-service-498t4-8xqdv:160/proxy/: foo (200; 8.213734ms)
Feb  5 14:33:03.401: INFO: (18) /api/v1/namespaces/proxy-5407/pods/proxy-service-498t4-8xqdv:160/proxy/: foo (200; 8.243355ms)
Feb  5 14:33:03.401: INFO: (18) /api/v1/namespaces/proxy-5407/pods/proxy-service-498t4-8xqdv/proxy/: test (200; 8.533017ms)
Feb  5 14:33:03.404: INFO: (18) /api/v1/namespaces/proxy-5407/services/http:proxy-service-498t4:portname2/proxy/: bar (200; 11.224697ms)
Feb  5 14:33:03.404: INFO: (18) /api/v1/namespaces/proxy-5407/services/proxy-service-498t4:portname1/proxy/: foo (200; 11.125476ms)
Feb  5 14:33:03.404: INFO: (18) /api/v1/namespaces/proxy-5407/services/https:proxy-service-498t4:tlsportname1/proxy/: tls baz (200; 11.317197ms)
Feb  5 14:33:03.404: INFO: (18) /api/v1/namespaces/proxy-5407/services/https:proxy-service-498t4:tlsportname2/proxy/: tls qux (200; 11.551201ms)
Feb  5 14:33:03.404: INFO: (18) /api/v1/namespaces/proxy-5407/services/http:proxy-service-498t4:portname1/proxy/: foo (200; 11.685425ms)
Feb  5 14:33:03.406: INFO: (18) /api/v1/namespaces/proxy-5407/services/proxy-service-498t4:portname2/proxy/: bar (200; 13.752357ms)
Feb  5 14:33:03.417: INFO: (19) /api/v1/namespaces/proxy-5407/pods/proxy-service-498t4-8xqdv:160/proxy/: foo (200; 10.363059ms)
Feb  5 14:33:03.420: INFO: (19) /api/v1/namespaces/proxy-5407/pods/https:proxy-service-498t4-8xqdv:460/proxy/: tls baz (200; 13.622402ms)
Feb  5 14:33:03.420: INFO: (19) /api/v1/namespaces/proxy-5407/pods/proxy-service-498t4-8xqdv:162/proxy/: bar (200; 13.565978ms)
Feb  5 14:33:03.421: INFO: (19) /api/v1/namespaces/proxy-5407/pods/proxy-service-498t4-8xqdv:1080/proxy/: test<... (200; 13.686054ms)
Feb  5 14:33:03.421: INFO: (19) /api/v1/namespaces/proxy-5407/pods/proxy-service-498t4-8xqdv/proxy/: test (200; 13.787597ms)
Feb  5 14:33:03.421: INFO: (19) /api/v1/namespaces/proxy-5407/pods/http:proxy-service-498t4-8xqdv:160/proxy/: foo (200; 13.85887ms)
Feb  5 14:33:03.421: INFO: (19) /api/v1/namespaces/proxy-5407/pods/https:proxy-service-498t4-8xqdv:443/proxy/: ... (200; 14.248162ms)
Feb  5 14:33:03.421: INFO: (19) /api/v1/namespaces/proxy-5407/pods/https:proxy-service-498t4-8xqdv:462/proxy/: tls qux (200; 14.362122ms)
Feb  5 14:33:03.421: INFO: (19) /api/v1/namespaces/proxy-5407/services/https:proxy-service-498t4:tlsportname1/proxy/: tls baz (200; 14.797586ms)
Feb  5 14:33:03.423: INFO: (19) /api/v1/namespaces/proxy-5407/services/http:proxy-service-498t4:portname2/proxy/: bar (200; 16.455401ms)
Feb  5 14:33:03.423: INFO: (19) /api/v1/namespaces/proxy-5407/services/https:proxy-service-498t4:tlsportname2/proxy/: tls qux (200; 16.409575ms)
Feb  5 14:33:03.423: INFO: (19) /api/v1/namespaces/proxy-5407/services/proxy-service-498t4:portname1/proxy/: foo (200; 16.655453ms)
Feb  5 14:33:03.424: INFO: (19) /api/v1/namespaces/proxy-5407/services/http:proxy-service-498t4:portname1/proxy/: foo (200; 16.865467ms)
Feb  5 14:33:03.424: INFO: (19) /api/v1/namespaces/proxy-5407/services/proxy-service-498t4:portname2/proxy/: bar (200; 17.402976ms)
Feb  5 14:33:03.424: INFO: (19) /api/v1/namespaces/proxy-5407/pods/http:proxy-service-498t4-8xqdv:162/proxy/: bar (200; 17.850312ms)
STEP: deleting ReplicationController proxy-service-498t4 in namespace proxy-5407, will wait for the garbage collector to delete the pods
Feb  5 14:33:03.488: INFO: Deleting ReplicationController proxy-service-498t4 took: 9.231742ms
Feb  5 14:33:03.788: INFO: Terminating ReplicationController proxy-service-498t4 pods took: 300.548802ms
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:33:16.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-5407" for this suite.
Feb  5 14:33:22.648: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:33:22.738: INFO: namespace proxy-5407 deletion completed in 6.119488736s

• [SLOW TEST:36.913 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy through a service and a pod  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
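Annotation: every request in the proxy test above goes through the API server's proxy subresource, using a `scheme:name:port` segment before `/proxy/` (scheme and port are optional). A minimal sketch of how those paths are composed — the helper name is mine, not a function from the e2e framework:

```python
def proxy_path(namespace, kind, name, port=None, scheme=None):
    """Build an API-server proxy path like the ones in the log above.

    kind is "pods" or "services". The middle segment is scheme:name:port,
    where scheme ("http"/"https") and port (number or named port) are both
    optional, matching e.g. "https:proxy-service-498t4-8xqdv:443".
    """
    segment = name
    if scheme:
        segment = f"{scheme}:{segment}"
    if port is not None:
        segment = f"{segment}:{port}"
    return f"/api/v1/namespaces/{namespace}/{kind}/{segment}/proxy/"

# Reproduces two paths seen in the log:
pod_url = proxy_path("proxy-5407", "pods", "proxy-service-498t4-8xqdv",
                     port=1080, scheme="http")
svc_url = proxy_path("proxy-5407", "services", "proxy-service-498t4",
                     port="portname1")
```

The named-port form (`portname1`, `tlsportname2`) resolves against the Service's port names on the server side; the numeric form targets the pod's container port directly.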
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:33:22.739: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  5 14:33:22.884: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a33a41ed-04ff-4c3c-b00b-f80693444533" in namespace "projected-6507" to be "success or failure"
Feb  5 14:33:22.889: INFO: Pod "downwardapi-volume-a33a41ed-04ff-4c3c-b00b-f80693444533": Phase="Pending", Reason="", readiness=false. Elapsed: 4.969691ms
Feb  5 14:33:24.898: INFO: Pod "downwardapi-volume-a33a41ed-04ff-4c3c-b00b-f80693444533": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013569391s
Feb  5 14:33:26.923: INFO: Pod "downwardapi-volume-a33a41ed-04ff-4c3c-b00b-f80693444533": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038658399s
Feb  5 14:33:28.933: INFO: Pod "downwardapi-volume-a33a41ed-04ff-4c3c-b00b-f80693444533": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048253668s
Feb  5 14:33:30.945: INFO: Pod "downwardapi-volume-a33a41ed-04ff-4c3c-b00b-f80693444533": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.060720072s
STEP: Saw pod success
Feb  5 14:33:30.945: INFO: Pod "downwardapi-volume-a33a41ed-04ff-4c3c-b00b-f80693444533" satisfied condition "success or failure"
Feb  5 14:33:30.951: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-a33a41ed-04ff-4c3c-b00b-f80693444533 container client-container: 
STEP: delete the pod
Feb  5 14:33:31.041: INFO: Waiting for pod downwardapi-volume-a33a41ed-04ff-4c3c-b00b-f80693444533 to disappear
Feb  5 14:33:31.052: INFO: Pod downwardapi-volume-a33a41ed-04ff-4c3c-b00b-f80693444533 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:33:31.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6507" for this suite.
Feb  5 14:33:37.103: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:33:37.176: INFO: namespace projected-6507 deletion completed in 6.116899711s

• [SLOW TEST:14.437 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
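Annotation: the `Waiting up to 5m0s for pod ... to be "success or failure"` lines above come from a poll loop that re-reads the pod's phase every couple of seconds until it leaves Pending. A sketch of that loop under stated assumptions (the function and its hooks are mine; the real framework helper lives in Go):

```python
import time

def wait_for_pod_success(get_phase, timeout_s=300, poll_s=2.0,
                         clock=time.monotonic, sleep=time.sleep):
    """Poll a pod's phase until it reaches Succeeded or Failed.

    get_phase is any callable returning the current phase string
    ("Pending", "Running", "Succeeded", "Failed"). clock and sleep are
    injectable so the loop can be exercised without real waiting.
    """
    deadline = clock() + timeout_s
    phase = get_phase()
    while clock() < deadline:
        if phase in ("Succeeded", "Failed"):
            return phase  # terminal phase reached
        sleep(poll_s)
        phase = get_phase()
    raise TimeoutError(f"pod still {phase!r} after {timeout_s}s")
```

In the log the pod stays Pending for four polls (~8s) before flipping straight to Succeeded, which is why `STEP: Saw pod success` appears only after the fifth check.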
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:33:37.176: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb  5 14:33:45.416: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:33:45.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8851" for this suite.
Feb  5 14:33:51.618: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:33:51.788: INFO: namespace container-runtime-8851 deletion completed in 6.209805975s

• [SLOW TEST:14.612 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
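The termination-message test above (expected message `DONE`, per the log) combines a non-default `terminationMessagePath` with a non-root `securityContext`. A sketch of such a container spec, with the image, command, path, and UID as assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-example   # assumed name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox                                        # assumed image
    # Write the message to the custom path before exiting.
    command: ["/bin/sh", "-c", "echo -n DONE > /dev/termination-custom-log"]
    terminationMessagePath: /dev/termination-custom-log   # non-default path (assumed value)
    securityContext:
      runAsUser: 1000                                     # non-root user, as the test name requires
```

After the container exits, the kubelet copies the file contents into the container status, which is what the test asserts against.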
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:33:51.791: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating server pod server in namespace prestop-5109
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-5109
STEP: Deleting pre-stop pod
Feb  5 14:34:13.092: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:34:13.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-5109" for this suite.
Feb  5 14:34:59.169: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:34:59.270: INFO: namespace prestop-5109 deletion completed in 46.152540443s

• [SLOW TEST:67.479 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
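The PreStop test above creates a server pod and a tester pod, deletes the tester, and checks that the server recorded a `"prestop": 1` hit (visible in the JSON the log prints). The tester's hook is plausibly shaped like this exec-based preStop sketch (the hook command, port, and service name are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: tester
spec:
  containers:
  - name: tester
    image: busybox                   # assumed image
    command: ["sleep", "600"]
    lifecycle:
      preStop:
        exec:
          # Assumed command: notify the server pod before this container is killed.
          command: ["/bin/sh", "-c", "wget -qO- http://server:8080/prestop"]
```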
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:34:59.271: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-secret-hgw2
STEP: Creating a pod to test atomic-volume-subpath
Feb  5 14:34:59.356: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-hgw2" in namespace "subpath-9518" to be "success or failure"
Feb  5 14:34:59.422: INFO: Pod "pod-subpath-test-secret-hgw2": Phase="Pending", Reason="", readiness=false. Elapsed: 66.767363ms
Feb  5 14:35:01.432: INFO: Pod "pod-subpath-test-secret-hgw2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07619054s
Feb  5 14:35:03.449: INFO: Pod "pod-subpath-test-secret-hgw2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.093419738s
Feb  5 14:35:05.462: INFO: Pod "pod-subpath-test-secret-hgw2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.106166622s
Feb  5 14:35:07.470: INFO: Pod "pod-subpath-test-secret-hgw2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.113964146s
Feb  5 14:35:09.478: INFO: Pod "pod-subpath-test-secret-hgw2": Phase="Running", Reason="", readiness=true. Elapsed: 10.122134251s
Feb  5 14:35:11.499: INFO: Pod "pod-subpath-test-secret-hgw2": Phase="Running", Reason="", readiness=true. Elapsed: 12.143091046s
Feb  5 14:35:13.511: INFO: Pod "pod-subpath-test-secret-hgw2": Phase="Running", Reason="", readiness=true. Elapsed: 14.155737807s
Feb  5 14:35:15.572: INFO: Pod "pod-subpath-test-secret-hgw2": Phase="Running", Reason="", readiness=true. Elapsed: 16.216660169s
Feb  5 14:35:17.581: INFO: Pod "pod-subpath-test-secret-hgw2": Phase="Running", Reason="", readiness=true. Elapsed: 18.224893123s
Feb  5 14:35:19.590: INFO: Pod "pod-subpath-test-secret-hgw2": Phase="Running", Reason="", readiness=true. Elapsed: 20.233810748s
Feb  5 14:35:21.598: INFO: Pod "pod-subpath-test-secret-hgw2": Phase="Running", Reason="", readiness=true. Elapsed: 22.241839137s
Feb  5 14:35:23.605: INFO: Pod "pod-subpath-test-secret-hgw2": Phase="Running", Reason="", readiness=true. Elapsed: 24.249160877s
Feb  5 14:35:25.616: INFO: Pod "pod-subpath-test-secret-hgw2": Phase="Running", Reason="", readiness=true. Elapsed: 26.260241874s
Feb  5 14:35:27.626: INFO: Pod "pod-subpath-test-secret-hgw2": Phase="Running", Reason="", readiness=true. Elapsed: 28.27059281s
Feb  5 14:35:29.635: INFO: Pod "pod-subpath-test-secret-hgw2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.279676256s
STEP: Saw pod success
Feb  5 14:35:29.635: INFO: Pod "pod-subpath-test-secret-hgw2" satisfied condition "success or failure"
Feb  5 14:35:29.640: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-secret-hgw2 container test-container-subpath-secret-hgw2: 
STEP: delete the pod
Feb  5 14:35:29.706: INFO: Waiting for pod pod-subpath-test-secret-hgw2 to disappear
Feb  5 14:35:29.744: INFO: Pod pod-subpath-test-secret-hgw2 no longer exists
STEP: Deleting pod pod-subpath-test-secret-hgw2
Feb  5 14:35:29.745: INFO: Deleting pod "pod-subpath-test-secret-hgw2" in namespace "subpath-9518"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:35:29.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-9518" for this suite.
Feb  5 14:35:35.783: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:35:35.944: INFO: namespace subpath-9518 deletion completed in 6.18967028s

• [SLOW TEST:36.674 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:35:35.945: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:35:44.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-8418" for this suite.
Feb  5 14:35:50.245: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:35:50.382: INFO: namespace emptydir-wrapper-8418 deletion completed in 6.175603805s

• [SLOW TEST:14.438 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
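The EmptyDir wrapper test above (note the cleanup steps for a secret, a configmap, and a pod) checks that the wrapper volumes backing a secret mount and a configmap mount in the same pod do not conflict. A sketch of such a pod, with all names and the image as assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-configmaps       # assumed name
spec:
  containers:
  - name: main
    image: busybox                   # assumed image
    command: ["sleep", "3600"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: wrapper-secret     # assumed; created and cleaned up by the test
  - name: configmap-volume
    configMap:
      name: wrapper-configmap        # assumed; created and cleaned up by the test
```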
SSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:35:50.382: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb  5 14:35:58.579: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:35:58.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8607" for this suite.
Feb  5 14:36:04.722: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:36:04.839: INFO: namespace container-runtime-8607 deletion completed in 6.185152754s

• [SLOW TEST:14.457 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:36:04.839: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-4146
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb  5 14:36:04.917: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb  5 14:36:39.200: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:pod-network-test-4146 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  5 14:36:39.200: INFO: >>> kubeConfig: /root/.kube/config
I0205 14:36:39.293356       8 log.go:172] (0xc0005b18c0) (0xc0020c8be0) Create stream
I0205 14:36:39.293422       8 log.go:172] (0xc0005b18c0) (0xc0020c8be0) Stream added, broadcasting: 1
I0205 14:36:39.301112       8 log.go:172] (0xc0005b18c0) Reply frame received for 1
I0205 14:36:39.301141       8 log.go:172] (0xc0005b18c0) (0xc0002f4140) Create stream
I0205 14:36:39.301149       8 log.go:172] (0xc0005b18c0) (0xc0002f4140) Stream added, broadcasting: 3
I0205 14:36:39.302691       8 log.go:172] (0xc0005b18c0) Reply frame received for 3
I0205 14:36:39.302724       8 log.go:172] (0xc0005b18c0) (0xc0020c8dc0) Create stream
I0205 14:36:39.302731       8 log.go:172] (0xc0005b18c0) (0xc0020c8dc0) Stream added, broadcasting: 5
I0205 14:36:39.304564       8 log.go:172] (0xc0005b18c0) Reply frame received for 5
I0205 14:36:39.531945       8 log.go:172] (0xc0005b18c0) Data frame received for 3
I0205 14:36:39.532053       8 log.go:172] (0xc0002f4140) (3) Data frame handling
I0205 14:36:39.532078       8 log.go:172] (0xc0002f4140) (3) Data frame sent
I0205 14:36:39.685344       8 log.go:172] (0xc0005b18c0) (0xc0002f4140) Stream removed, broadcasting: 3
I0205 14:36:39.685539       8 log.go:172] (0xc0005b18c0) Data frame received for 1
I0205 14:36:39.685567       8 log.go:172] (0xc0020c8be0) (1) Data frame handling
I0205 14:36:39.685582       8 log.go:172] (0xc0020c8be0) (1) Data frame sent
I0205 14:36:39.685611       8 log.go:172] (0xc0005b18c0) (0xc0020c8dc0) Stream removed, broadcasting: 5
I0205 14:36:39.685674       8 log.go:172] (0xc0005b18c0) (0xc0020c8be0) Stream removed, broadcasting: 1
I0205 14:36:39.685802       8 log.go:172] (0xc0005b18c0) Go away received
I0205 14:36:39.686080       8 log.go:172] (0xc0005b18c0) (0xc0020c8be0) Stream removed, broadcasting: 1
I0205 14:36:39.686139       8 log.go:172] (0xc0005b18c0) (0xc0002f4140) Stream removed, broadcasting: 3
I0205 14:36:39.686160       8 log.go:172] (0xc0005b18c0) (0xc0020c8dc0) Stream removed, broadcasting: 5
Feb  5 14:36:39.686: INFO: Waiting for endpoints: map[]
Feb  5 14:36:39.694: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.44.0.1&port=8080&tries=1'] Namespace:pod-network-test-4146 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  5 14:36:39.695: INFO: >>> kubeConfig: /root/.kube/config
I0205 14:36:39.764097       8 log.go:172] (0xc000eda210) (0xc0018403c0) Create stream
I0205 14:36:39.764146       8 log.go:172] (0xc000eda210) (0xc0018403c0) Stream added, broadcasting: 1
I0205 14:36:39.772905       8 log.go:172] (0xc000eda210) Reply frame received for 1
I0205 14:36:39.772955       8 log.go:172] (0xc000eda210) (0xc0001c8b40) Create stream
I0205 14:36:39.772969       8 log.go:172] (0xc000eda210) (0xc0001c8b40) Stream added, broadcasting: 3
I0205 14:36:39.774912       8 log.go:172] (0xc000eda210) Reply frame received for 3
I0205 14:36:39.774948       8 log.go:172] (0xc000eda210) (0xc0001c8c80) Create stream
I0205 14:36:39.774963       8 log.go:172] (0xc000eda210) (0xc0001c8c80) Stream added, broadcasting: 5
I0205 14:36:39.777010       8 log.go:172] (0xc000eda210) Reply frame received for 5
I0205 14:36:39.896214       8 log.go:172] (0xc000eda210) Data frame received for 3
I0205 14:36:39.896294       8 log.go:172] (0xc0001c8b40) (3) Data frame handling
I0205 14:36:39.896310       8 log.go:172] (0xc0001c8b40) (3) Data frame sent
I0205 14:36:40.048027       8 log.go:172] (0xc000eda210) Data frame received for 1
I0205 14:36:40.048125       8 log.go:172] (0xc0018403c0) (1) Data frame handling
I0205 14:36:40.048209       8 log.go:172] (0xc0018403c0) (1) Data frame sent
I0205 14:36:40.049732       8 log.go:172] (0xc000eda210) (0xc0001c8c80) Stream removed, broadcasting: 5
I0205 14:36:40.049859       8 log.go:172] (0xc000eda210) (0xc0001c8b40) Stream removed, broadcasting: 3
I0205 14:36:40.049944       8 log.go:172] (0xc000eda210) (0xc0018403c0) Stream removed, broadcasting: 1
I0205 14:36:40.049970       8 log.go:172] (0xc000eda210) Go away received
I0205 14:36:40.050645       8 log.go:172] (0xc000eda210) (0xc0018403c0) Stream removed, broadcasting: 1
I0205 14:36:40.050687       8 log.go:172] (0xc000eda210) (0xc0001c8b40) Stream removed, broadcasting: 3
I0205 14:36:40.050695       8 log.go:172] (0xc000eda210) (0xc0001c8c80) Stream removed, broadcasting: 5
Feb  5 14:36:40.050: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:36:40.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-4146" for this suite.
Feb  5 14:37:04.096: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:37:04.180: INFO: namespace pod-network-test-4146 deletion completed in 24.120284186s

• [SLOW TEST:59.341 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:37:04.181: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Feb  5 14:37:20.412: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb  5 14:37:20.421: INFO: Pod pod-with-poststart-http-hook still exists
Feb  5 14:37:22.422: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb  5 14:37:22.436: INFO: Pod pod-with-poststart-http-hook still exists
Feb  5 14:37:24.422: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb  5 14:37:24.429: INFO: Pod pod-with-poststart-http-hook still exists
Feb  5 14:37:26.422: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb  5 14:37:26.431: INFO: Pod pod-with-poststart-http-hook still exists
Feb  5 14:37:28.422: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb  5 14:37:28.430: INFO: Pod pod-with-poststart-http-hook still exists
Feb  5 14:37:30.422: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb  5 14:37:30.429: INFO: Pod pod-with-poststart-http-hook still exists
Feb  5 14:37:32.422: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb  5 14:37:32.433: INFO: Pod pod-with-poststart-http-hook still exists
Feb  5 14:37:34.422: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb  5 14:37:34.428: INFO: Pod pod-with-poststart-http-hook still exists
Feb  5 14:37:36.422: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb  5 14:37:36.432: INFO: Pod pod-with-poststart-http-hook still exists
Feb  5 14:37:38.422: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb  5 14:37:38.432: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:37:38.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-1952" for this suite.
Feb  5 14:38:02.473: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:38:02.578: INFO: namespace container-lifecycle-hook-1952 deletion completed in 24.137956512s

• [SLOW TEST:58.397 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:38:02.579: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:38:12.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-312" for this suite.
Feb  5 14:38:54.842: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:38:54.982: INFO: namespace kubelet-test-312 deletion completed in 42.27736978s

• [SLOW TEST:52.403 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
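The Kubelet test above schedules a busybox container with a read-only root filesystem and verifies writes to it fail. A sketch of that pod spec, with the exact names and command as assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly            # assumed name
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    # The write to / should fail because the root filesystem is read-only.
    command: ["/bin/sh", "-c", "echo test > /file; sleep 240"]
    securityContext:
      readOnlyRootFilesystem: true
```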
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:38:54.982: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  5 14:38:55.095: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Feb  5 14:38:55.125: INFO: Pod name sample-pod: Found 0 pods out of 1
Feb  5 14:39:00.136: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb  5 14:39:02.155: INFO: Creating deployment "test-rolling-update-deployment"
Feb  5 14:39:02.165: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Feb  5 14:39:02.207: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Feb  5 14:39:04.223: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Feb  5 14:39:04.228: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716510342, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716510342, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716510342, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716510342, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  5 14:39:06.235: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716510342, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716510342, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716510342, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716510342, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  5 14:39:08.237: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716510342, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716510342, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716510342, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716510342, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  5 14:39:10.246: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716510342, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716510342, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716510350, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716510342, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  5 14:39:12.239: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Feb  5 14:39:12.277: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-6048,SelfLink:/apis/apps/v1/namespaces/deployment-6048/deployments/test-rolling-update-deployment,UID:a651fddf-f853-4fa6-85a9-3b393cb8b877,ResourceVersion:23205831,Generation:1,CreationTimestamp:2020-02-05 14:39:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-02-05 14:39:02 +0000 UTC 2020-02-05 14:39:02 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-02-05 14:39:10 +0000 UTC 2020-02-05 14:39:02 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Feb  5 14:39:12.281: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-6048,SelfLink:/apis/apps/v1/namespaces/deployment-6048/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:677edc98-2633-40f3-a99c-e711cc046492,ResourceVersion:23205821,Generation:1,CreationTimestamp:2020-02-05 14:39:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment a651fddf-f853-4fa6-85a9-3b393cb8b877 0xc003239637 0xc003239638}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Feb  5 14:39:12.281: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Feb  5 14:39:12.282: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-6048,SelfLink:/apis/apps/v1/namespaces/deployment-6048/replicasets/test-rolling-update-controller,UID:8fdd8796-3b46-4817-9638-c10e0439d9f2,ResourceVersion:23205830,Generation:2,CreationTimestamp:2020-02-05 14:38:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment a651fddf-f853-4fa6-85a9-3b393cb8b877 0xc003239567 0xc003239568}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb  5 14:39:12.367: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-22kbt" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-22kbt,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-6048,SelfLink:/api/v1/namespaces/deployment-6048/pods/test-rolling-update-deployment-79f6b9d75c-22kbt,UID:749747bb-adbf-4c8f-a470-561bb9a8d9f9,ResourceVersion:23205820,Generation:0,CreationTimestamp:2020-02-05 14:39:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c 677edc98-2633-40f3-a99c-e711cc046492 0xc0015c74f7 0xc0015c74f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-jxt8z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jxt8z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-jxt8z true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0015c7570} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0015c7590}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 14:39:02 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 14:39:10 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 14:39:10 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 14:39:02 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-02-05 14:39:02 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-02-05 14:39:08 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://1f6b4b5da2225abc1f2b4a8f5c772b7fd0bba35fa35f28e8205d6bfa8af1b2ef}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:39:12.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-6048" for this suite.
Feb  5 14:39:18.579: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:39:18.683: INFO: namespace deployment-6048 deletion completed in 6.295976202s

• [SLOW TEST:23.700 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
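For reference, the Deployment exercised by the spec above corresponds roughly to this manifest — a sketch reconstructed from the dumped spec in the log (replicas, selector, strategy percentages, image, and grace period are all taken from the output; anything else is omitted):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rolling-update-deployment
  labels:
    name: sample-pod
spec:
  replicas: 1
  selector:
    matchLabels:
      name: sample-pod
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%   # at most one of the two replicas down during the roll
      maxSurge: 25%
  template:
    metadata:
      labels:
        name: sample-pod
    spec:
      terminationGracePeriodSeconds: 0
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        imagePullPolicy: IfNotPresent
```

Because the Deployment's selector also matches the pre-existing `test-rolling-update-controller` ReplicaSet, the controller adopts it and scales it to zero rather than deleting it, which is the "one old replica set" condition checked above.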
SSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:39:18.683: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  5 14:39:18.855: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:39:20.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-979" for this suite.
Feb  5 14:39:26.108: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:39:26.232: INFO: namespace custom-resource-definition-979 deletion completed in 6.161301546s

• [SLOW TEST:7.549 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
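The CRD spec above only logs the kubeconfig, but a minimal CustomResourceDefinition of the kind it creates and deletes looks like the following (a sketch; the group and names are illustrative, not taken from the log — `apiextensions.k8s.io/v1beta1` matches the v1.15 servers in this run):

```yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: noxus.mygroup.example.com   # must be <plural>.<group>
spec:
  group: mygroup.example.com
  version: v1beta1
  scope: Namespaced
  names:
    plural: noxus
    singular: noxu
    kind: Noxu
```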
SSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:39:26.233: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating replication controller my-hostname-basic-ae8dbf38-07a3-4e85-8913-c6b723357e53
Feb  5 14:39:26.330: INFO: Pod name my-hostname-basic-ae8dbf38-07a3-4e85-8913-c6b723357e53: Found 0 pods out of 1
Feb  5 14:39:31.342: INFO: Pod name my-hostname-basic-ae8dbf38-07a3-4e85-8913-c6b723357e53: Found 1 pods out of 1
Feb  5 14:39:31.342: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-ae8dbf38-07a3-4e85-8913-c6b723357e53" are running
Feb  5 14:39:35.365: INFO: Pod "my-hostname-basic-ae8dbf38-07a3-4e85-8913-c6b723357e53-c424z" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-05 14:39:26 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-05 14:39:26 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-ae8dbf38-07a3-4e85-8913-c6b723357e53]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-05 14:39:26 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-ae8dbf38-07a3-4e85-8913-c6b723357e53]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-05 14:39:26 +0000 UTC Reason: Message:}])
Feb  5 14:39:35.365: INFO: Trying to dial the pod
Feb  5 14:39:40.418: INFO: Controller my-hostname-basic-ae8dbf38-07a3-4e85-8913-c6b723357e53: Got expected result from replica 1 [my-hostname-basic-ae8dbf38-07a3-4e85-8913-c6b723357e53-c424z]: "my-hostname-basic-ae8dbf38-07a3-4e85-8913-c6b723357e53-c424z", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:39:40.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-5754" for this suite.
Feb  5 14:39:46.452: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:39:46.556: INFO: namespace replication-controller-5754 deletion completed in 6.130633464s

• [SLOW TEST:20.323 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
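A sketch of the ReplicationController driven by the spec above (the generated UUID name, replica count, and `name` label come from the log; the image and port are assumptions, since the pod spec is not dumped — the container must echo its own pod name over HTTP for the dial check above to pass):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-hostname-basic   # the run above appends a generated UUID suffix
spec:
  replicas: 1
  selector:
    name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1   # assumed; serves the pod hostname over HTTP
        ports:
        - containerPort: 9376
```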
SSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:39:46.556: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:39:57.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-4775" for this suite.
Feb  5 14:40:19.816: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:40:20.017: INFO: namespace replication-controller-4775 deletion completed in 22.240764168s

• [SLOW TEST:33.461 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
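The three STEPs above can be sketched as two objects (only the `name: pod-adoption` label is confirmed by the log; images and other fields are illustrative): a bare pod with no owner, then an RC whose selector matches it. Because the pod already satisfies the selector, the controller sets itself as the pod's `ownerReference` instead of creating a second replica.

```yaml
# A bare pod carrying only a 'name' label and no ownerReferences
apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption
  labels:
    name: pod-adoption
spec:
  containers:
  - name: pod-adoption
    image: docker.io/library/nginx:1.14-alpine   # assumed image
---
# An RC with a matching selector; it adopts the orphan pod above
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption
spec:
  replicas: 1
  selector:
    name: pod-adoption
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: pod-adoption
        image: docker.io/library/nginx:1.14-alpine   # assumed image
```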
SSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period 
  should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:40:20.018: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47
[It] should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Feb  5 14:40:30.204: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Feb  5 14:40:35.410: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:40:35.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7367" for this suite.
Feb  5 14:40:41.446: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:40:41.586: INFO: namespace pods-7367 deletion completed in 6.163964164s

• [SLOW TEST:21.567 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
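Graceful deletion as exercised above is governed by the pod's grace period; a minimal sketch of such a pod (names and values assumed, since the log does not dump the spec):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-submit-remove   # illustrative name
  labels:
    time: "42"              # illustrative selector label
spec:
  terminationGracePeriodSeconds: 30   # kubelet has this long to stop containers
  containers:
  - name: nginx
    image: docker.io/library/nginx:1.14-alpine
```

On delete, the API server sets `deletionTimestamp` and `deletionGracePeriodSeconds`, and the pod disappears from list operations only after the kubelet confirms termination — which is what the "no pod exists with the name we were looking for" check above verifies.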
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:40:41.586: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  5 14:40:41.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-4051'
Feb  5 14:40:41.883: INFO: stderr: ""
Feb  5 14:40:41.884: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Feb  5 14:40:51.935: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-4051 -o json'
Feb  5 14:40:52.139: INFO: stderr: ""
Feb  5 14:40:52.139: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-02-05T14:40:41Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"kubectl-4051\",\n        \"resourceVersion\": \"23206109\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-4051/pods/e2e-test-nginx-pod\",\n        \"uid\": \"04a9e420-424d-4eca-bffd-a4c66dc18a36\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-rvk2w\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"iruya-node\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": 
\"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-rvk2w\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-rvk2w\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-05T14:40:41Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-05T14:40:49Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-05T14:40:49Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-05T14:40:41Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"docker://f37156f19561f2d66b3219858a2798dc304897802663918ffbd384be39a430dd\",\n                \"image\": \"nginx:1.14-alpine\",\n                \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": 
\"2020-02-05T14:40:48Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.3.65\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.44.0.1\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-02-05T14:40:41Z\"\n    }\n}\n"
STEP: replace the image in the pod
Feb  5 14:40:52.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-4051'
Feb  5 14:40:52.734: INFO: stderr: ""
Feb  5 14:40:52.734: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726
Feb  5 14:40:52.744: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-4051'
Feb  5 14:40:59.495: INFO: stderr: ""
Feb  5 14:40:59.495: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:40:59.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4051" for this suite.
Feb  5 14:41:05.541: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:41:05.705: INFO: namespace kubectl-4051 deletion completed in 6.19478337s

• [SLOW TEST:24.119 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
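The manifest piped to `kubectl replace -f -` above would look roughly like this (a sketch: the log confirms only the pod name, labels, namespace, and the new image `docker.io/library/busybox:1.29`; the `command` is an assumption, since busybox exits immediately without one):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: e2e-test-nginx-pod
  namespace: kubectl-4051
  labels:
    run: e2e-test-nginx-pod
spec:
  containers:
  - name: e2e-test-nginx-pod
    image: docker.io/library/busybox:1.29   # the replaced image verified above
    command: ["sleep", "3600"]              # assumed long-running command
```

A pod's container image is one of the few spec fields that is mutable in place, which is why `kubectl replace` succeeds here without `--force`.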
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:41:05.706: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod test-webserver-587115f9-9e52-4652-835e-d3e9b20d9cc3 in namespace container-probe-3326
Feb  5 14:41:15.858: INFO: Started pod test-webserver-587115f9-9e52-4652-835e-d3e9b20d9cc3 in namespace container-probe-3326
STEP: checking the pod's current state and verifying that restartCount is present
Feb  5 14:41:15.866: INFO: Initial restart count of pod test-webserver-587115f9-9e52-4652-835e-d3e9b20d9cc3 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:45:17.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3326" for this suite.
Feb  5 14:45:23.391: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:45:23.773: INFO: namespace container-probe-3326 deletion completed in 6.412125928s

• [SLOW TEST:258.067 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:45:23.774: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  5 14:45:23.961: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-4170'
Feb  5 14:45:25.789: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb  5 14:45:25.790: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617
Feb  5 14:45:25.879: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-4170'
Feb  5 14:45:26.047: INFO: stderr: ""
Feb  5 14:45:26.047: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:45:26.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4170" for this suite.
Feb  5 14:45:32.068: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:45:32.198: INFO: namespace kubectl-4170 deletion completed in 6.147474033s

• [SLOW TEST:8.425 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:45:32.199: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-7696d8d7-f054-4b3a-993e-ea0f04110839
STEP: Creating a pod to test consume secrets
Feb  5 14:45:32.453: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-29219c92-5140-421b-b23a-3eb66f9de87b" in namespace "projected-5612" to be "success or failure"
Feb  5 14:45:32.456: INFO: Pod "pod-projected-secrets-29219c92-5140-421b-b23a-3eb66f9de87b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.206352ms
Feb  5 14:45:34.466: INFO: Pod "pod-projected-secrets-29219c92-5140-421b-b23a-3eb66f9de87b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012725785s
Feb  5 14:45:36.477: INFO: Pod "pod-projected-secrets-29219c92-5140-421b-b23a-3eb66f9de87b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024046024s
Feb  5 14:45:38.490: INFO: Pod "pod-projected-secrets-29219c92-5140-421b-b23a-3eb66f9de87b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036866895s
Feb  5 14:45:40.508: INFO: Pod "pod-projected-secrets-29219c92-5140-421b-b23a-3eb66f9de87b": Phase="Running", Reason="", readiness=true. Elapsed: 8.055334536s
Feb  5 14:45:42.522: INFO: Pod "pod-projected-secrets-29219c92-5140-421b-b23a-3eb66f9de87b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.069286465s
STEP: Saw pod success
Feb  5 14:45:42.523: INFO: Pod "pod-projected-secrets-29219c92-5140-421b-b23a-3eb66f9de87b" satisfied condition "success or failure"
Feb  5 14:45:42.527: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-29219c92-5140-421b-b23a-3eb66f9de87b container projected-secret-volume-test: 
STEP: delete the pod
Feb  5 14:45:42.595: INFO: Waiting for pod pod-projected-secrets-29219c92-5140-421b-b23a-3eb66f9de87b to disappear
Feb  5 14:45:42.604: INFO: Pod pod-projected-secrets-29219c92-5140-421b-b23a-3eb66f9de87b no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:45:42.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5612" for this suite.
Feb  5 14:45:49.003: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:45:49.139: INFO: namespace projected-5612 deletion completed in 6.527787261s

• [SLOW TEST:16.940 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:45:49.140: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-de72fb4c-6577-4528-b9c9-376985530696
STEP: Creating a pod to test consume secrets
Feb  5 14:45:49.374: INFO: Waiting up to 5m0s for pod "pod-secrets-e16febf6-8441-4efd-8144-62f55c53fe37" in namespace "secrets-1683" to be "success or failure"
Feb  5 14:45:49.434: INFO: Pod "pod-secrets-e16febf6-8441-4efd-8144-62f55c53fe37": Phase="Pending", Reason="", readiness=false. Elapsed: 58.956876ms
Feb  5 14:45:51.445: INFO: Pod "pod-secrets-e16febf6-8441-4efd-8144-62f55c53fe37": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069896059s
Feb  5 14:45:53.456: INFO: Pod "pod-secrets-e16febf6-8441-4efd-8144-62f55c53fe37": Phase="Pending", Reason="", readiness=false. Elapsed: 4.081732765s
Feb  5 14:45:55.470: INFO: Pod "pod-secrets-e16febf6-8441-4efd-8144-62f55c53fe37": Phase="Pending", Reason="", readiness=false. Elapsed: 6.095672958s
Feb  5 14:45:57.477: INFO: Pod "pod-secrets-e16febf6-8441-4efd-8144-62f55c53fe37": Phase="Pending", Reason="", readiness=false. Elapsed: 8.101887578s
Feb  5 14:45:59.485: INFO: Pod "pod-secrets-e16febf6-8441-4efd-8144-62f55c53fe37": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.110280968s
STEP: Saw pod success
Feb  5 14:45:59.485: INFO: Pod "pod-secrets-e16febf6-8441-4efd-8144-62f55c53fe37" satisfied condition "success or failure"
Feb  5 14:45:59.488: INFO: Trying to get logs from node iruya-node pod pod-secrets-e16febf6-8441-4efd-8144-62f55c53fe37 container secret-volume-test: 
STEP: delete the pod
Feb  5 14:45:59.652: INFO: Waiting for pod pod-secrets-e16febf6-8441-4efd-8144-62f55c53fe37 to disappear
Feb  5 14:45:59.706: INFO: Pod pod-secrets-e16febf6-8441-4efd-8144-62f55c53fe37 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:45:59.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1683" for this suite.
Feb  5 14:46:05.756: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:46:05.992: INFO: namespace secrets-1683 deletion completed in 6.278314366s

• [SLOW TEST:16.852 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:46:05.992: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-a12423ec-ea00-426e-b786-baed73decc56
STEP: Creating a pod to test consume secrets
Feb  5 14:46:06.122: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a42ed399-035b-4c46-9004-172ebff42a00" in namespace "projected-111" to be "success or failure"
Feb  5 14:46:06.127: INFO: Pod "pod-projected-secrets-a42ed399-035b-4c46-9004-172ebff42a00": Phase="Pending", Reason="", readiness=false. Elapsed: 4.342197ms
Feb  5 14:46:08.135: INFO: Pod "pod-projected-secrets-a42ed399-035b-4c46-9004-172ebff42a00": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012318327s
Feb  5 14:46:10.141: INFO: Pod "pod-projected-secrets-a42ed399-035b-4c46-9004-172ebff42a00": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018954101s
Feb  5 14:46:12.150: INFO: Pod "pod-projected-secrets-a42ed399-035b-4c46-9004-172ebff42a00": Phase="Pending", Reason="", readiness=false. Elapsed: 6.027492408s
Feb  5 14:46:14.160: INFO: Pod "pod-projected-secrets-a42ed399-035b-4c46-9004-172ebff42a00": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.03758078s
STEP: Saw pod success
Feb  5 14:46:14.160: INFO: Pod "pod-projected-secrets-a42ed399-035b-4c46-9004-172ebff42a00" satisfied condition "success or failure"
Feb  5 14:46:14.167: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-a42ed399-035b-4c46-9004-172ebff42a00 container projected-secret-volume-test: 
STEP: delete the pod
Feb  5 14:46:14.235: INFO: Waiting for pod pod-projected-secrets-a42ed399-035b-4c46-9004-172ebff42a00 to disappear
Feb  5 14:46:14.240: INFO: Pod pod-projected-secrets-a42ed399-035b-4c46-9004-172ebff42a00 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:46:14.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-111" for this suite.
Feb  5 14:46:20.281: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:46:20.447: INFO: namespace projected-111 deletion completed in 6.201995121s

• [SLOW TEST:14.454 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:46:20.447: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Feb  5 14:46:29.301: INFO: Successfully updated pod "annotationupdate63dbec79-b2aa-4f06-adb6-d64d344ac4bc"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:46:31.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5772" for this suite.
Feb  5 14:46:53.475: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:46:53.590: INFO: namespace projected-5772 deletion completed in 22.148575802s

• [SLOW TEST:33.143 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:46:53.590: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-966d263d-e752-4b70-a8ce-a54ce27879cd
STEP: Creating a pod to test consume configMaps
Feb  5 14:46:53.716: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2a642ba0-f8c1-42e3-a304-5f83b7af0bc6" in namespace "projected-7007" to be "success or failure"
Feb  5 14:46:53.721: INFO: Pod "pod-projected-configmaps-2a642ba0-f8c1-42e3-a304-5f83b7af0bc6": Phase="Pending", Reason="", readiness=false. Elapsed: 5.111969ms
Feb  5 14:46:55.727: INFO: Pod "pod-projected-configmaps-2a642ba0-f8c1-42e3-a304-5f83b7af0bc6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011072843s
Feb  5 14:46:57.735: INFO: Pod "pod-projected-configmaps-2a642ba0-f8c1-42e3-a304-5f83b7af0bc6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018912056s
Feb  5 14:46:59.744: INFO: Pod "pod-projected-configmaps-2a642ba0-f8c1-42e3-a304-5f83b7af0bc6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.027811059s
Feb  5 14:47:01.752: INFO: Pod "pod-projected-configmaps-2a642ba0-f8c1-42e3-a304-5f83b7af0bc6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.036203393s
Feb  5 14:47:03.760: INFO: Pod "pod-projected-configmaps-2a642ba0-f8c1-42e3-a304-5f83b7af0bc6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.04337084s
STEP: Saw pod success
Feb  5 14:47:03.760: INFO: Pod "pod-projected-configmaps-2a642ba0-f8c1-42e3-a304-5f83b7af0bc6" satisfied condition "success or failure"
Feb  5 14:47:03.768: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-2a642ba0-f8c1-42e3-a304-5f83b7af0bc6 container projected-configmap-volume-test: 
STEP: delete the pod
Feb  5 14:47:03.835: INFO: Waiting for pod pod-projected-configmaps-2a642ba0-f8c1-42e3-a304-5f83b7af0bc6 to disappear
Feb  5 14:47:03.866: INFO: Pod pod-projected-configmaps-2a642ba0-f8c1-42e3-a304-5f83b7af0bc6 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:47:03.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7007" for this suite.
Feb  5 14:47:10.032: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:47:10.142: INFO: namespace projected-7007 deletion completed in 6.267311418s

• [SLOW TEST:16.552 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:47:10.142: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7285.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-7285.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7285.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-7285.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb  5 14:47:24.334: INFO: File wheezy_udp@dns-test-service-3.dns-7285.svc.cluster.local from pod  dns-7285/dns-test-ee351f21-72c1-45f0-a39f-38d35f627448 contains '' instead of 'foo.example.com.'
Feb  5 14:47:24.340: INFO: File jessie_udp@dns-test-service-3.dns-7285.svc.cluster.local from pod  dns-7285/dns-test-ee351f21-72c1-45f0-a39f-38d35f627448 contains '' instead of 'foo.example.com.'
Feb  5 14:47:24.341: INFO: Lookups using dns-7285/dns-test-ee351f21-72c1-45f0-a39f-38d35f627448 failed for: [wheezy_udp@dns-test-service-3.dns-7285.svc.cluster.local jessie_udp@dns-test-service-3.dns-7285.svc.cluster.local]

Feb  5 14:47:29.363: INFO: DNS probes using dns-test-ee351f21-72c1-45f0-a39f-38d35f627448 succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7285.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-7285.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7285.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-7285.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb  5 14:47:43.530: INFO: File wheezy_udp@dns-test-service-3.dns-7285.svc.cluster.local from pod  dns-7285/dns-test-f46bdb8f-4c3f-46fb-b046-468b372d4ef7 contains '' instead of 'bar.example.com.'
Feb  5 14:47:43.534: INFO: File jessie_udp@dns-test-service-3.dns-7285.svc.cluster.local from pod  dns-7285/dns-test-f46bdb8f-4c3f-46fb-b046-468b372d4ef7 contains '' instead of 'bar.example.com.'
Feb  5 14:47:43.534: INFO: Lookups using dns-7285/dns-test-f46bdb8f-4c3f-46fb-b046-468b372d4ef7 failed for: [wheezy_udp@dns-test-service-3.dns-7285.svc.cluster.local jessie_udp@dns-test-service-3.dns-7285.svc.cluster.local]

Feb  5 14:47:48.560: INFO: File wheezy_udp@dns-test-service-3.dns-7285.svc.cluster.local from pod  dns-7285/dns-test-f46bdb8f-4c3f-46fb-b046-468b372d4ef7 contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb  5 14:47:48.572: INFO: File jessie_udp@dns-test-service-3.dns-7285.svc.cluster.local from pod  dns-7285/dns-test-f46bdb8f-4c3f-46fb-b046-468b372d4ef7 contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb  5 14:47:48.572: INFO: Lookups using dns-7285/dns-test-f46bdb8f-4c3f-46fb-b046-468b372d4ef7 failed for: [wheezy_udp@dns-test-service-3.dns-7285.svc.cluster.local jessie_udp@dns-test-service-3.dns-7285.svc.cluster.local]

Feb  5 14:47:53.548: INFO: File wheezy_udp@dns-test-service-3.dns-7285.svc.cluster.local from pod  dns-7285/dns-test-f46bdb8f-4c3f-46fb-b046-468b372d4ef7 contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb  5 14:47:53.555: INFO: File jessie_udp@dns-test-service-3.dns-7285.svc.cluster.local from pod  dns-7285/dns-test-f46bdb8f-4c3f-46fb-b046-468b372d4ef7 contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb  5 14:47:53.555: INFO: Lookups using dns-7285/dns-test-f46bdb8f-4c3f-46fb-b046-468b372d4ef7 failed for: [wheezy_udp@dns-test-service-3.dns-7285.svc.cluster.local jessie_udp@dns-test-service-3.dns-7285.svc.cluster.local]

Feb  5 14:47:58.553: INFO: File wheezy_udp@dns-test-service-3.dns-7285.svc.cluster.local from pod  dns-7285/dns-test-f46bdb8f-4c3f-46fb-b046-468b372d4ef7 contains '' instead of 'bar.example.com.'
Feb  5 14:47:58.588: INFO: File jessie_udp@dns-test-service-3.dns-7285.svc.cluster.local from pod  dns-7285/dns-test-f46bdb8f-4c3f-46fb-b046-468b372d4ef7 contains '' instead of 'bar.example.com.'
Feb  5 14:47:58.588: INFO: Lookups using dns-7285/dns-test-f46bdb8f-4c3f-46fb-b046-468b372d4ef7 failed for: [wheezy_udp@dns-test-service-3.dns-7285.svc.cluster.local jessie_udp@dns-test-service-3.dns-7285.svc.cluster.local]

Feb  5 14:48:03.565: INFO: DNS probes using dns-test-f46bdb8f-4c3f-46fb-b046-468b372d4ef7 succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7285.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-7285.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7285.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-7285.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb  5 14:48:18.302: INFO: File wheezy_udp@dns-test-service-3.dns-7285.svc.cluster.local from pod  dns-7285/dns-test-efadf415-0a15-474d-80b9-0ef911b57d77 contains '' instead of '10.100.250.150'
Feb  5 14:48:18.331: INFO: File jessie_udp@dns-test-service-3.dns-7285.svc.cluster.local from pod  dns-7285/dns-test-efadf415-0a15-474d-80b9-0ef911b57d77 contains '' instead of '10.100.250.150'
Feb  5 14:48:18.331: INFO: Lookups using dns-7285/dns-test-efadf415-0a15-474d-80b9-0ef911b57d77 failed for: [wheezy_udp@dns-test-service-3.dns-7285.svc.cluster.local jessie_udp@dns-test-service-3.dns-7285.svc.cluster.local]

Feb  5 14:48:23.377: INFO: DNS probes using dns-test-efadf415-0a15-474d-80b9-0ef911b57d77 succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:48:23.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7285" for this suite.
Feb  5 14:48:31.587: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:48:31.740: INFO: namespace dns-7285 deletion completed in 8.19276578s

• [SLOW TEST:81.598 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:48:31.741: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb  5 14:48:41.022: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:48:41.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-898" for this suite.
Feb  5 14:48:47.117: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:48:47.244: INFO: namespace container-runtime-898 deletion completed in 6.146648201s

• [SLOW TEST:15.503 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:48:47.244: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  5 14:48:47.441: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"ed14e4ec-c886-493e-8eec-83f641b29276", Controller:(*bool)(0xc0021e0122), BlockOwnerDeletion:(*bool)(0xc0021e0123)}}
Feb  5 14:48:47.573: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"d8a01388-12f6-47f8-b1e0-b9a5a1932312", Controller:(*bool)(0xc0031e6a9a), BlockOwnerDeletion:(*bool)(0xc0031e6a9b)}}
Feb  5 14:48:47.617: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"d5fa05c1-7606-459d-8d0e-5620deec54f9", Controller:(*bool)(0xc0021e02f2), BlockOwnerDeletion:(*bool)(0xc0021e02f3)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:48:52.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8322" for this suite.
Feb  5 14:48:59.233: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:48:59.431: INFO: namespace gc-8322 deletion completed in 6.725535769s

• [SLOW TEST:12.187 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:48:59.431: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with configMap that has name projected-configmap-test-upd-db9b2b00-6833-4ab8-b571-a826ab8bf6da
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-db9b2b00-6833-4ab8-b571-a826ab8bf6da
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:50:27.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9369" for this suite.
Feb  5 14:50:49.589: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:50:49.770: INFO: namespace projected-9369 deletion completed in 22.203886893s

• [SLOW TEST:110.339 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:50:49.771: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-164b3a19-af57-4080-8a14-be10fc700eb6
STEP: Creating a pod to test consume secrets
Feb  5 14:50:49.932: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-34a91e72-6786-4f4a-b90d-23dd025646ca" in namespace "projected-5691" to be "success or failure"
Feb  5 14:50:49.957: INFO: Pod "pod-projected-secrets-34a91e72-6786-4f4a-b90d-23dd025646ca": Phase="Pending", Reason="", readiness=false. Elapsed: 24.625154ms
Feb  5 14:50:51.968: INFO: Pod "pod-projected-secrets-34a91e72-6786-4f4a-b90d-23dd025646ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03555346s
Feb  5 14:50:54.031: INFO: Pod "pod-projected-secrets-34a91e72-6786-4f4a-b90d-23dd025646ca": Phase="Pending", Reason="", readiness=false. Elapsed: 4.099107988s
Feb  5 14:50:56.039: INFO: Pod "pod-projected-secrets-34a91e72-6786-4f4a-b90d-23dd025646ca": Phase="Pending", Reason="", readiness=false. Elapsed: 6.106895248s
Feb  5 14:50:58.047: INFO: Pod "pod-projected-secrets-34a91e72-6786-4f4a-b90d-23dd025646ca": Phase="Pending", Reason="", readiness=false. Elapsed: 8.114349296s
Feb  5 14:51:00.056: INFO: Pod "pod-projected-secrets-34a91e72-6786-4f4a-b90d-23dd025646ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.123214435s
STEP: Saw pod success
Feb  5 14:51:00.056: INFO: Pod "pod-projected-secrets-34a91e72-6786-4f4a-b90d-23dd025646ca" satisfied condition "success or failure"
Feb  5 14:51:00.061: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-34a91e72-6786-4f4a-b90d-23dd025646ca container projected-secret-volume-test: 
STEP: delete the pod
Feb  5 14:51:00.131: INFO: Waiting for pod pod-projected-secrets-34a91e72-6786-4f4a-b90d-23dd025646ca to disappear
Feb  5 14:51:00.135: INFO: Pod pod-projected-secrets-34a91e72-6786-4f4a-b90d-23dd025646ca no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:51:00.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5691" for this suite.
Feb  5 14:51:06.164: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:51:06.321: INFO: namespace projected-5691 deletion completed in 6.178931261s

• [SLOW TEST:16.550 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:51:06.321: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating pod
Feb  5 14:51:14.454: INFO: Pod pod-hostip-9f359217-c928-496a-9ce3-b29e0c618bfc has hostIP: 10.96.3.65
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:51:14.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8884" for this suite.
Feb  5 14:51:36.492: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:51:36.618: INFO: namespace pods-8884 deletion completed in 22.155921118s

• [SLOW TEST:30.297 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:51:36.621: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-6d673f7d-6470-4713-b943-4bc340d02bb4
STEP: Creating a pod to test consume configMaps
Feb  5 14:51:36.714: INFO: Waiting up to 5m0s for pod "pod-configmaps-40c5f252-062a-4cc0-a626-b662256dfaae" in namespace "configmap-5850" to be "success or failure"
Feb  5 14:51:36.723: INFO: Pod "pod-configmaps-40c5f252-062a-4cc0-a626-b662256dfaae": Phase="Pending", Reason="", readiness=false. Elapsed: 8.662623ms
Feb  5 14:51:38.733: INFO: Pod "pod-configmaps-40c5f252-062a-4cc0-a626-b662256dfaae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018393211s
Feb  5 14:51:40.803: INFO: Pod "pod-configmaps-40c5f252-062a-4cc0-a626-b662256dfaae": Phase="Pending", Reason="", readiness=false. Elapsed: 4.089084239s
Feb  5 14:51:42.812: INFO: Pod "pod-configmaps-40c5f252-062a-4cc0-a626-b662256dfaae": Phase="Pending", Reason="", readiness=false. Elapsed: 6.097727165s
Feb  5 14:51:44.849: INFO: Pod "pod-configmaps-40c5f252-062a-4cc0-a626-b662256dfaae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.134490077s
STEP: Saw pod success
Feb  5 14:51:44.849: INFO: Pod "pod-configmaps-40c5f252-062a-4cc0-a626-b662256dfaae" satisfied condition "success or failure"
Feb  5 14:51:44.854: INFO: Trying to get logs from node iruya-node pod pod-configmaps-40c5f252-062a-4cc0-a626-b662256dfaae container configmap-volume-test: 
STEP: delete the pod
Feb  5 14:51:44.943: INFO: Waiting for pod pod-configmaps-40c5f252-062a-4cc0-a626-b662256dfaae to disappear
Feb  5 14:51:44.959: INFO: Pod pod-configmaps-40c5f252-062a-4cc0-a626-b662256dfaae no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:51:44.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5850" for this suite.
Feb  5 14:51:52.994: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:51:53.109: INFO: namespace configmap-5850 deletion completed in 8.141956061s

• [SLOW TEST:16.488 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:51:53.111: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Feb  5 14:51:53.976: INFO: Pod name wrapped-volume-race-17a96ace-3d73-40bb-9a31-76e57ba27a4f: Found 0 pods out of 5
Feb  5 14:51:58.997: INFO: Pod name wrapped-volume-race-17a96ace-3d73-40bb-9a31-76e57ba27a4f: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-17a96ace-3d73-40bb-9a31-76e57ba27a4f in namespace emptydir-wrapper-9685, will wait for the garbage collector to delete the pods
Feb  5 14:52:27.117: INFO: Deleting ReplicationController wrapped-volume-race-17a96ace-3d73-40bb-9a31-76e57ba27a4f took: 14.841327ms
Feb  5 14:52:27.518: INFO: Terminating ReplicationController wrapped-volume-race-17a96ace-3d73-40bb-9a31-76e57ba27a4f pods took: 400.564179ms
STEP: Creating RC which spawns configmap-volume pods
Feb  5 14:53:17.811: INFO: Pod name wrapped-volume-race-e5a34b3c-60b5-4b8a-8969-2683d32d94fb: Found 0 pods out of 5
Feb  5 14:53:22.881: INFO: Pod name wrapped-volume-race-e5a34b3c-60b5-4b8a-8969-2683d32d94fb: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-e5a34b3c-60b5-4b8a-8969-2683d32d94fb in namespace emptydir-wrapper-9685, will wait for the garbage collector to delete the pods
Feb  5 14:53:55.043: INFO: Deleting ReplicationController wrapped-volume-race-e5a34b3c-60b5-4b8a-8969-2683d32d94fb took: 14.922483ms
Feb  5 14:53:55.344: INFO: Terminating ReplicationController wrapped-volume-race-e5a34b3c-60b5-4b8a-8969-2683d32d94fb pods took: 300.474618ms
STEP: Creating RC which spawns configmap-volume pods
Feb  5 14:54:42.423: INFO: Pod name wrapped-volume-race-d03308a2-85da-43b0-a6c3-a7b46094305f: Found 0 pods out of 5
Feb  5 14:54:47.436: INFO: Pod name wrapped-volume-race-d03308a2-85da-43b0-a6c3-a7b46094305f: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-d03308a2-85da-43b0-a6c3-a7b46094305f in namespace emptydir-wrapper-9685, will wait for the garbage collector to delete the pods
Feb  5 14:55:19.966: INFO: Deleting ReplicationController wrapped-volume-race-d03308a2-85da-43b0-a6c3-a7b46094305f took: 12.51972ms
Feb  5 14:55:20.467: INFO: Terminating ReplicationController wrapped-volume-race-d03308a2-85da-43b0-a6c3-a7b46094305f pods took: 500.589548ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:56:08.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-9685" for this suite.
Feb  5 14:56:16.347: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:56:16.506: INFO: namespace emptydir-wrapper-9685 deletion completed in 8.184060256s

• [SLOW TEST:263.395 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:56:16.508: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  5 14:56:16.560: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-3740'
Feb  5 14:56:18.949: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb  5 14:56:18.949: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Feb  5 14:56:19.034: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-sx2lf]
Feb  5 14:56:19.034: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-sx2lf" in namespace "kubectl-3740" to be "running and ready"
Feb  5 14:56:19.083: INFO: Pod "e2e-test-nginx-rc-sx2lf": Phase="Pending", Reason="", readiness=false. Elapsed: 49.122977ms
Feb  5 14:56:21.091: INFO: Pod "e2e-test-nginx-rc-sx2lf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056911249s
Feb  5 14:56:23.101: INFO: Pod "e2e-test-nginx-rc-sx2lf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067184216s
Feb  5 14:56:25.113: INFO: Pod "e2e-test-nginx-rc-sx2lf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.079682098s
Feb  5 14:56:27.121: INFO: Pod "e2e-test-nginx-rc-sx2lf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.087464454s
Feb  5 14:56:29.135: INFO: Pod "e2e-test-nginx-rc-sx2lf": Phase="Running", Reason="", readiness=true. Elapsed: 10.100891873s
Feb  5 14:56:29.135: INFO: Pod "e2e-test-nginx-rc-sx2lf" satisfied condition "running and ready"
Feb  5 14:56:29.135: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-sx2lf]
Feb  5 14:56:29.135: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-3740'
Feb  5 14:56:29.396: INFO: stderr: ""
Feb  5 14:56:29.396: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461
Feb  5 14:56:29.397: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-3740'
Feb  5 14:56:29.574: INFO: stderr: ""
Feb  5 14:56:29.574: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:56:29.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3740" for this suite.
Feb  5 14:56:51.629: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:56:51.765: INFO: namespace kubectl-3740 deletion completed in 22.151555796s

• [SLOW TEST:35.258 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:56:51.766: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-6706
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-6706
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-6706
Feb  5 14:56:52.076: INFO: Found 0 stateful pods, waiting for 1
Feb  5 14:57:02.093: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Feb  5 14:57:02.099: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6706 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  5 14:57:02.875: INFO: stderr: "I0205 14:57:02.456916    3199 log.go:172] (0xc000acc630) (0xc000628d20) Create stream\nI0205 14:57:02.457265    3199 log.go:172] (0xc000acc630) (0xc000628d20) Stream added, broadcasting: 1\nI0205 14:57:02.466130    3199 log.go:172] (0xc000acc630) Reply frame received for 1\nI0205 14:57:02.466287    3199 log.go:172] (0xc000acc630) (0xc000628dc0) Create stream\nI0205 14:57:02.466314    3199 log.go:172] (0xc000acc630) (0xc000628dc0) Stream added, broadcasting: 3\nI0205 14:57:02.467672    3199 log.go:172] (0xc000acc630) Reply frame received for 3\nI0205 14:57:02.467739    3199 log.go:172] (0xc000acc630) (0xc0009e4000) Create stream\nI0205 14:57:02.467764    3199 log.go:172] (0xc000acc630) (0xc0009e4000) Stream added, broadcasting: 5\nI0205 14:57:02.480485    3199 log.go:172] (0xc000acc630) Reply frame received for 5\nI0205 14:57:02.660998    3199 log.go:172] (0xc000acc630) Data frame received for 5\nI0205 14:57:02.661077    3199 log.go:172] (0xc0009e4000) (5) Data frame handling\nI0205 14:57:02.661095    3199 log.go:172] (0xc0009e4000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0205 14:57:02.722892    3199 log.go:172] (0xc000acc630) Data frame received for 3\nI0205 14:57:02.722945    3199 log.go:172] (0xc000628dc0) (3) Data frame handling\nI0205 14:57:02.722974    3199 log.go:172] (0xc000628dc0) (3) Data frame sent\nI0205 14:57:02.859143    3199 log.go:172] (0xc000acc630) (0xc000628dc0) Stream removed, broadcasting: 3\nI0205 14:57:02.860382    3199 log.go:172] (0xc000acc630) Data frame received for 1\nI0205 14:57:02.860751    3199 log.go:172] (0xc000acc630) (0xc0009e4000) Stream removed, broadcasting: 5\nI0205 14:57:02.860901    3199 log.go:172] (0xc000628d20) (1) Data frame handling\nI0205 14:57:02.860974    3199 log.go:172] (0xc000628d20) (1) Data frame sent\nI0205 14:57:02.861030    3199 log.go:172] (0xc000acc630) (0xc000628d20) Stream removed, broadcasting: 1\nI0205 14:57:02.861086    3199 log.go:172] (0xc000acc630) Go away received\nI0205 14:57:02.862281    3199 log.go:172] (0xc000acc630) (0xc000628d20) Stream removed, broadcasting: 1\nI0205 14:57:02.862387    3199 log.go:172] (0xc000acc630) (0xc000628dc0) Stream removed, broadcasting: 3\nI0205 14:57:02.862452    3199 log.go:172] (0xc000acc630) (0xc0009e4000) Stream removed, broadcasting: 5\n"
Feb  5 14:57:02.875: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  5 14:57:02.876: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  5 14:57:02.887: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Feb  5 14:57:12.895: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb  5 14:57:12.895: INFO: Waiting for statefulset status.replicas updated to 0
Feb  5 14:57:12.912: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999742s
Feb  5 14:57:13.928: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.991953304s
Feb  5 14:57:14.937: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.975996054s
Feb  5 14:57:15.950: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.967254433s
Feb  5 14:57:16.962: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.954636613s
Feb  5 14:57:17.972: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.94264423s
Feb  5 14:57:18.986: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.932009479s
Feb  5 14:57:19.994: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.918800825s
Feb  5 14:57:21.004: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.910375782s
Feb  5 14:57:22.016: INFO: Verifying statefulset ss doesn't scale past 1 for another 900.631131ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-6706
Feb  5 14:57:23.029: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6706 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  5 14:57:23.507: INFO: stderr: "I0205 14:57:23.236523    3219 log.go:172] (0xc00092a420) (0xc0008f26e0) Create stream\nI0205 14:57:23.236699    3219 log.go:172] (0xc00092a420) (0xc0008f26e0) Stream added, broadcasting: 1\nI0205 14:57:23.246051    3219 log.go:172] (0xc00092a420) Reply frame received for 1\nI0205 14:57:23.246084    3219 log.go:172] (0xc00092a420) (0xc00055c280) Create stream\nI0205 14:57:23.246107    3219 log.go:172] (0xc00092a420) (0xc00055c280) Stream added, broadcasting: 3\nI0205 14:57:23.247698    3219 log.go:172] (0xc00092a420) Reply frame received for 3\nI0205 14:57:23.247745    3219 log.go:172] (0xc00092a420) (0xc00055c320) Create stream\nI0205 14:57:23.247764    3219 log.go:172] (0xc00092a420) (0xc00055c320) Stream added, broadcasting: 5\nI0205 14:57:23.249277    3219 log.go:172] (0xc00092a420) Reply frame received for 5\nI0205 14:57:23.348308    3219 log.go:172] (0xc00092a420) Data frame received for 3\nI0205 14:57:23.348504    3219 log.go:172] (0xc00055c280) (3) Data frame handling\nI0205 14:57:23.348581    3219 log.go:172] (0xc00055c280) (3) Data frame sent\nI0205 14:57:23.348722    3219 log.go:172] (0xc00092a420) Data frame received for 5\nI0205 14:57:23.348791    3219 log.go:172] (0xc00055c320) (5) Data frame handling\nI0205 14:57:23.348828    3219 log.go:172] (0xc00055c320) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0205 14:57:23.495111    3219 log.go:172] (0xc00092a420) (0xc00055c280) Stream removed, broadcasting: 3\nI0205 14:57:23.495348    3219 log.go:172] (0xc00092a420) Data frame received for 1\nI0205 14:57:23.495440    3219 log.go:172] (0xc0008f26e0) (1) Data frame handling\nI0205 14:57:23.495479    3219 log.go:172] (0xc0008f26e0) (1) Data frame sent\nI0205 14:57:23.495589    3219 log.go:172] (0xc00092a420) (0xc00055c320) Stream removed, broadcasting: 5\nI0205 14:57:23.495623    3219 log.go:172] (0xc00092a420) (0xc0008f26e0) Stream removed, broadcasting: 1\nI0205 14:57:23.495636    3219 log.go:172] (0xc00092a420) Go away received\nI0205 14:57:23.496384    3219 log.go:172] (0xc00092a420) (0xc0008f26e0) Stream removed, broadcasting: 1\nI0205 14:57:23.496402    3219 log.go:172] (0xc00092a420) (0xc00055c280) Stream removed, broadcasting: 3\nI0205 14:57:23.496410    3219 log.go:172] (0xc00092a420) (0xc00055c320) Stream removed, broadcasting: 5\n"
Feb  5 14:57:23.507: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  5 14:57:23.507: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  5 14:57:23.514: INFO: Found 1 stateful pods, waiting for 3
Feb  5 14:57:33.522: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  5 14:57:33.522: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  5 14:57:33.522: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb  5 14:57:43.530: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  5 14:57:43.530: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  5 14:57:43.530: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Feb  5 14:57:43.547: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6706 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  5 14:57:44.542: INFO: stderr: "I0205 14:57:43.814170    3238 log.go:172] (0xc0008ca370) (0xc0007be5a0) Create stream\nI0205 14:57:43.814352    3238 log.go:172] (0xc0008ca370) (0xc0007be5a0) Stream added, broadcasting: 1\nI0205 14:57:43.823620    3238 log.go:172] (0xc0008ca370) Reply frame received for 1\nI0205 14:57:43.823652    3238 log.go:172] (0xc0008ca370) (0xc0005dc280) Create stream\nI0205 14:57:43.823661    3238 log.go:172] (0xc0008ca370) (0xc0005dc280) Stream added, broadcasting: 3\nI0205 14:57:43.829076    3238 log.go:172] (0xc0008ca370) Reply frame received for 3\nI0205 14:57:43.829098    3238 log.go:172] (0xc0008ca370) (0xc0005dc320) Create stream\nI0205 14:57:43.829105    3238 log.go:172] (0xc0008ca370) (0xc0005dc320) Stream added, broadcasting: 5\nI0205 14:57:43.831009    3238 log.go:172] (0xc0008ca370) Reply frame received for 5\nI0205 14:57:44.173493    3238 log.go:172] (0xc0008ca370) Data frame received for 5\nI0205 14:57:44.173645    3238 log.go:172] (0xc0005dc320) (5) Data frame handling\nI0205 14:57:44.173755    3238 log.go:172] (0xc0005dc320) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0205 14:57:44.181588    3238 log.go:172] (0xc0008ca370) Data frame received for 3\nI0205 14:57:44.181621    3238 log.go:172] (0xc0005dc280) (3) Data frame handling\nI0205 14:57:44.181630    3238 log.go:172] (0xc0005dc280) (3) Data frame sent\nI0205 14:57:44.535611    3238 log.go:172] (0xc0008ca370) (0xc0005dc280) Stream removed, broadcasting: 3\nI0205 14:57:44.535916    3238 log.go:172] (0xc0008ca370) Data frame received for 1\nI0205 14:57:44.535932    3238 log.go:172] (0xc0008ca370) (0xc0005dc320) Stream removed, broadcasting: 5\nI0205 14:57:44.535959    3238 log.go:172] (0xc0007be5a0) (1) Data frame handling\nI0205 14:57:44.535973    3238 log.go:172] (0xc0007be5a0) (1) Data frame sent\nI0205 14:57:44.535980    3238 log.go:172] (0xc0008ca370) (0xc0007be5a0) Stream removed, broadcasting: 1\nI0205 14:57:44.535991    3238 log.go:172] (0xc0008ca370) Go away received\nI0205 14:57:44.536299    3238 log.go:172] (0xc0008ca370) (0xc0007be5a0) Stream removed, broadcasting: 1\nI0205 14:57:44.536332    3238 log.go:172] (0xc0008ca370) (0xc0005dc280) Stream removed, broadcasting: 3\nI0205 14:57:44.536341    3238 log.go:172] (0xc0008ca370) (0xc0005dc320) Stream removed, broadcasting: 5\n"
Feb  5 14:57:44.542: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  5 14:57:44.542: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  5 14:57:44.543: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6706 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  5 14:57:45.216: INFO: stderr: "I0205 14:57:44.723254    3255 log.go:172] (0xc00087ebb0) (0xc000838fa0) Create stream\nI0205 14:57:44.723476    3255 log.go:172] (0xc00087ebb0) (0xc000838fa0) Stream added, broadcasting: 1\nI0205 14:57:44.735988    3255 log.go:172] (0xc00087ebb0) Reply frame received for 1\nI0205 14:57:44.736037    3255 log.go:172] (0xc00087ebb0) (0xc0005a40a0) Create stream\nI0205 14:57:44.736043    3255 log.go:172] (0xc00087ebb0) (0xc0005a40a0) Stream added, broadcasting: 3\nI0205 14:57:44.737533    3255 log.go:172] (0xc00087ebb0) Reply frame received for 3\nI0205 14:57:44.737551    3255 log.go:172] (0xc00087ebb0) (0xc0005a4140) Create stream\nI0205 14:57:44.737557    3255 log.go:172] (0xc00087ebb0) (0xc0005a4140) Stream added, broadcasting: 5\nI0205 14:57:44.738404    3255 log.go:172] (0xc00087ebb0) Reply frame received for 5\nI0205 14:57:45.039824    3255 log.go:172] (0xc00087ebb0) Data frame received for 5\nI0205 14:57:45.039855    3255 log.go:172] (0xc0005a4140) (5) Data frame handling\nI0205 14:57:45.039866    3255 log.go:172] (0xc0005a4140) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0205 14:57:45.108367    3255 log.go:172] (0xc00087ebb0) Data frame received for 3\nI0205 14:57:45.108585    3255 log.go:172] (0xc0005a40a0) (3) Data frame handling\nI0205 14:57:45.108617    3255 log.go:172] (0xc0005a40a0) (3) Data frame sent\nI0205 14:57:45.204625    3255 log.go:172] (0xc00087ebb0) Data frame received for 1\nI0205 14:57:45.205026    3255 log.go:172] (0xc00087ebb0) (0xc0005a40a0) Stream removed, broadcasting: 3\nI0205 14:57:45.205129    3255 log.go:172] (0xc000838fa0) (1) Data frame handling\nI0205 14:57:45.205198    3255 log.go:172] (0xc000838fa0) (1) Data frame sent\nI0205 14:57:45.205292    3255 log.go:172] (0xc00087ebb0) (0xc0005a4140) Stream removed, broadcasting: 5\nI0205 14:57:45.205822    3255 log.go:172] (0xc00087ebb0) (0xc000838fa0) Stream removed, broadcasting: 1\nI0205 14:57:45.205920    3255 log.go:172] 
(0xc00087ebb0) Go away received\nI0205 14:57:45.206503    3255 log.go:172] (0xc00087ebb0) (0xc000838fa0) Stream removed, broadcasting: 1\nI0205 14:57:45.206529    3255 log.go:172] (0xc00087ebb0) (0xc0005a40a0) Stream removed, broadcasting: 3\nI0205 14:57:45.206574    3255 log.go:172] (0xc00087ebb0) (0xc0005a4140) Stream removed, broadcasting: 5\n"
Feb  5 14:57:45.217: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  5 14:57:45.217: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  5 14:57:45.217: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6706 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  5 14:57:45.648: INFO: stderr: "I0205 14:57:45.339960    3268 log.go:172] (0xc000582bb0) (0xc0005c4820) Create stream\nI0205 14:57:45.340042    3268 log.go:172] (0xc000582bb0) (0xc0005c4820) Stream added, broadcasting: 1\nI0205 14:57:45.344662    3268 log.go:172] (0xc000582bb0) Reply frame received for 1\nI0205 14:57:45.344682    3268 log.go:172] (0xc000582bb0) (0xc000510000) Create stream\nI0205 14:57:45.344689    3268 log.go:172] (0xc000582bb0) (0xc000510000) Stream added, broadcasting: 3\nI0205 14:57:45.345784    3268 log.go:172] (0xc000582bb0) Reply frame received for 3\nI0205 14:57:45.345801    3268 log.go:172] (0xc000582bb0) (0xc0004e6be0) Create stream\nI0205 14:57:45.345807    3268 log.go:172] (0xc000582bb0) (0xc0004e6be0) Stream added, broadcasting: 5\nI0205 14:57:45.346641    3268 log.go:172] (0xc000582bb0) Reply frame received for 5\nI0205 14:57:45.455164    3268 log.go:172] (0xc000582bb0) Data frame received for 5\nI0205 14:57:45.455221    3268 log.go:172] (0xc0004e6be0) (5) Data frame handling\nI0205 14:57:45.455233    3268 log.go:172] (0xc0004e6be0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0205 14:57:45.486800    3268 log.go:172] (0xc000582bb0) Data frame received for 3\nI0205 14:57:45.486817    3268 log.go:172] (0xc000510000) (3) Data frame handling\nI0205 14:57:45.486826    3268 log.go:172] (0xc000510000) (3) Data frame sent\nI0205 14:57:45.641920    3268 log.go:172] (0xc000582bb0) Data frame received for 1\nI0205 14:57:45.642214    3268 log.go:172] (0xc000582bb0) (0xc000510000) Stream removed, broadcasting: 3\nI0205 14:57:45.642260    3268 log.go:172] (0xc0005c4820) (1) Data frame handling\nI0205 14:57:45.642328    3268 log.go:172] (0xc0005c4820) (1) Data frame sent\nI0205 14:57:45.642396    3268 log.go:172] (0xc000582bb0) (0xc0004e6be0) Stream removed, broadcasting: 5\nI0205 14:57:45.642463    3268 log.go:172] (0xc000582bb0) (0xc0005c4820) Stream removed, broadcasting: 1\nI0205 14:57:45.642496    3268 log.go:172] 
(0xc000582bb0) Go away received\nI0205 14:57:45.643069    3268 log.go:172] (0xc000582bb0) (0xc0005c4820) Stream removed, broadcasting: 1\nI0205 14:57:45.643083    3268 log.go:172] (0xc000582bb0) (0xc000510000) Stream removed, broadcasting: 3\nI0205 14:57:45.643091    3268 log.go:172] (0xc000582bb0) (0xc0004e6be0) Stream removed, broadcasting: 5\n"
Feb  5 14:57:45.648: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  5 14:57:45.648: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  5 14:57:45.648: INFO: Waiting for statefulset status.replicas updated to 0
Feb  5 14:57:45.655: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Feb  5 14:57:55.673: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb  5 14:57:55.673: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Feb  5 14:57:55.673: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Feb  5 14:57:55.688: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999644s
Feb  5 14:57:56.698: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.994854169s
Feb  5 14:57:57.721: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.98493991s
Feb  5 14:57:58.740: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.961739172s
Feb  5 14:57:59.749: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.943020531s
Feb  5 14:58:00.864: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.934233312s
Feb  5 14:58:01.901: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.819008542s
Feb  5 14:58:02.913: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.781678506s
Feb  5 14:58:03.940: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.770038343s
Feb  5 14:58:04.970: INFO: Verifying statefulset ss doesn't scale past 3 for another 742.639451ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of its pods are running in namespace statefulset-6706
Feb  5 14:58:05.982: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6706 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  5 14:58:06.626: INFO: stderr: "I0205 14:58:06.288478    3285 log.go:172] (0xc0008ea370) (0xc00085a5a0) Create stream\nI0205 14:58:06.288624    3285 log.go:172] (0xc0008ea370) (0xc00085a5a0) Stream added, broadcasting: 1\nI0205 14:58:06.294137    3285 log.go:172] (0xc0008ea370) Reply frame received for 1\nI0205 14:58:06.294171    3285 log.go:172] (0xc0008ea370) (0xc00085a640) Create stream\nI0205 14:58:06.294178    3285 log.go:172] (0xc0008ea370) (0xc00085a640) Stream added, broadcasting: 3\nI0205 14:58:06.296237    3285 log.go:172] (0xc0008ea370) Reply frame received for 3\nI0205 14:58:06.296269    3285 log.go:172] (0xc0008ea370) (0xc0005ee140) Create stream\nI0205 14:58:06.296295    3285 log.go:172] (0xc0008ea370) (0xc0005ee140) Stream added, broadcasting: 5\nI0205 14:58:06.297752    3285 log.go:172] (0xc0008ea370) Reply frame received for 5\nI0205 14:58:06.403754    3285 log.go:172] (0xc0008ea370) Data frame received for 3\nI0205 14:58:06.403821    3285 log.go:172] (0xc00085a640) (3) Data frame handling\nI0205 14:58:06.403837    3285 log.go:172] (0xc00085a640) (3) Data frame sent\nI0205 14:58:06.403915    3285 log.go:172] (0xc0008ea370) Data frame received for 5\nI0205 14:58:06.403939    3285 log.go:172] (0xc0005ee140) (5) Data frame handling\nI0205 14:58:06.403959    3285 log.go:172] (0xc0005ee140) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0205 14:58:06.611738    3285 log.go:172] (0xc0008ea370) Data frame received for 1\nI0205 14:58:06.612240    3285 log.go:172] (0xc0008ea370) (0xc00085a640) Stream removed, broadcasting: 3\nI0205 14:58:06.612336    3285 log.go:172] (0xc00085a5a0) (1) Data frame handling\nI0205 14:58:06.612366    3285 log.go:172] (0xc00085a5a0) (1) Data frame sent\nI0205 14:58:06.612451    3285 log.go:172] (0xc0008ea370) (0xc00085a5a0) Stream removed, broadcasting: 1\nI0205 14:58:06.612724    3285 log.go:172] (0xc0008ea370) (0xc0005ee140) Stream removed, broadcasting: 5\nI0205 14:58:06.613269    3285 log.go:172] 
(0xc0008ea370) Go away received\nI0205 14:58:06.613394    3285 log.go:172] (0xc0008ea370) (0xc00085a5a0) Stream removed, broadcasting: 1\nI0205 14:58:06.613454    3285 log.go:172] (0xc0008ea370) (0xc00085a640) Stream removed, broadcasting: 3\nI0205 14:58:06.613466    3285 log.go:172] (0xc0008ea370) (0xc0005ee140) Stream removed, broadcasting: 5\n"
Feb  5 14:58:06.626: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  5 14:58:06.626: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  5 14:58:06.627: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6706 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  5 14:58:07.151: INFO: stderr: "I0205 14:58:06.901088    3308 log.go:172] (0xc000812420) (0xc00022a820) Create stream\nI0205 14:58:06.901163    3308 log.go:172] (0xc000812420) (0xc00022a820) Stream added, broadcasting: 1\nI0205 14:58:06.903600    3308 log.go:172] (0xc000812420) Reply frame received for 1\nI0205 14:58:06.903651    3308 log.go:172] (0xc000812420) (0xc000926000) Create stream\nI0205 14:58:06.903685    3308 log.go:172] (0xc000812420) (0xc000926000) Stream added, broadcasting: 3\nI0205 14:58:06.905437    3308 log.go:172] (0xc000812420) Reply frame received for 3\nI0205 14:58:06.905462    3308 log.go:172] (0xc000812420) (0xc0005ce320) Create stream\nI0205 14:58:06.905470    3308 log.go:172] (0xc000812420) (0xc0005ce320) Stream added, broadcasting: 5\nI0205 14:58:06.907139    3308 log.go:172] (0xc000812420) Reply frame received for 5\nI0205 14:58:07.056958    3308 log.go:172] (0xc000812420) Data frame received for 5\nI0205 14:58:07.057108    3308 log.go:172] (0xc0005ce320) (5) Data frame handling\nI0205 14:58:07.057127    3308 log.go:172] (0xc0005ce320) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0205 14:58:07.057142    3308 log.go:172] (0xc000812420) Data frame received for 3\nI0205 14:58:07.057147    3308 log.go:172] (0xc000926000) (3) Data frame handling\nI0205 14:58:07.057159    3308 log.go:172] (0xc000926000) (3) Data frame sent\nI0205 14:58:07.146063    3308 log.go:172] (0xc000812420) Data frame received for 1\nI0205 14:58:07.146107    3308 log.go:172] (0xc000812420) (0xc000926000) Stream removed, broadcasting: 3\nI0205 14:58:07.146135    3308 log.go:172] (0xc00022a820) (1) Data frame handling\nI0205 14:58:07.146143    3308 log.go:172] (0xc00022a820) (1) Data frame sent\nI0205 14:58:07.146182    3308 log.go:172] (0xc000812420) (0xc0005ce320) Stream removed, broadcasting: 5\nI0205 14:58:07.146218    3308 log.go:172] (0xc000812420) (0xc00022a820) Stream removed, broadcasting: 1\nI0205 14:58:07.146240    3308 log.go:172] 
(0xc000812420) Go away received\nI0205 14:58:07.146713    3308 log.go:172] (0xc000812420) (0xc00022a820) Stream removed, broadcasting: 1\nI0205 14:58:07.146748    3308 log.go:172] (0xc000812420) (0xc000926000) Stream removed, broadcasting: 3\nI0205 14:58:07.146760    3308 log.go:172] (0xc000812420) (0xc0005ce320) Stream removed, broadcasting: 5\n"
Feb  5 14:58:07.152: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  5 14:58:07.152: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  5 14:58:07.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6706 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  5 14:58:07.777: INFO: stderr: "I0205 14:58:07.441584    3330 log.go:172] (0xc0009380b0) (0xc0009a45a0) Create stream\nI0205 14:58:07.441695    3330 log.go:172] (0xc0009380b0) (0xc0009a45a0) Stream added, broadcasting: 1\nI0205 14:58:07.450858    3330 log.go:172] (0xc0009380b0) Reply frame received for 1\nI0205 14:58:07.451212    3330 log.go:172] (0xc0009380b0) (0xc00088e140) Create stream\nI0205 14:58:07.451244    3330 log.go:172] (0xc0009380b0) (0xc00088e140) Stream added, broadcasting: 3\nI0205 14:58:07.455269    3330 log.go:172] (0xc0009380b0) Reply frame received for 3\nI0205 14:58:07.455408    3330 log.go:172] (0xc0009380b0) (0xc0009a4640) Create stream\nI0205 14:58:07.455451    3330 log.go:172] (0xc0009380b0) (0xc0009a4640) Stream added, broadcasting: 5\nI0205 14:58:07.458822    3330 log.go:172] (0xc0009380b0) Reply frame received for 5\nI0205 14:58:07.610736    3330 log.go:172] (0xc0009380b0) Data frame received for 3\nI0205 14:58:07.610821    3330 log.go:172] (0xc00088e140) (3) Data frame handling\nI0205 14:58:07.610845    3330 log.go:172] (0xc00088e140) (3) Data frame sent\nI0205 14:58:07.610917    3330 log.go:172] (0xc0009380b0) Data frame received for 5\nI0205 14:58:07.610935    3330 log.go:172] (0xc0009a4640) (5) Data frame handling\nI0205 14:58:07.610950    3330 log.go:172] (0xc0009a4640) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0205 14:58:07.761771    3330 log.go:172] (0xc0009380b0) (0xc00088e140) Stream removed, broadcasting: 3\nI0205 14:58:07.762007    3330 log.go:172] (0xc0009380b0) Data frame received for 1\nI0205 14:58:07.762042    3330 log.go:172] (0xc0009a45a0) (1) Data frame handling\nI0205 14:58:07.762074    3330 log.go:172] (0xc0009a45a0) (1) Data frame sent\nI0205 14:58:07.762104    3330 log.go:172] (0xc0009380b0) (0xc0009a45a0) Stream removed, broadcasting: 1\nI0205 14:58:07.762292    3330 log.go:172] (0xc0009380b0) (0xc0009a4640) Stream removed, broadcasting: 5\nI0205 14:58:07.762356    3330 log.go:172] 
(0xc0009380b0) Go away received\nI0205 14:58:07.763037    3330 log.go:172] (0xc0009380b0) (0xc0009a45a0) Stream removed, broadcasting: 1\nI0205 14:58:07.763051    3330 log.go:172] (0xc0009380b0) (0xc00088e140) Stream removed, broadcasting: 3\nI0205 14:58:07.763061    3330 log.go:172] (0xc0009380b0) (0xc0009a4640) Stream removed, broadcasting: 5\n"
Feb  5 14:58:07.777: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  5 14:58:07.777: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  5 14:58:07.777: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Feb  5 14:58:37.864: INFO: Deleting all statefulset in ns statefulset-6706
Feb  5 14:58:37.871: INFO: Scaling statefulset ss to 0
Feb  5 14:58:37.884: INFO: Waiting for statefulset status.replicas updated to 0
Feb  5 14:58:37.888: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:58:37.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6706" for this suite.
Feb  5 14:58:43.981: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:58:44.119: INFO: namespace statefulset-6706 deletion completed in 6.176920379s

• [SLOW TEST:112.353 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
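The scaling test above deliberately breaks each pod's readiness by moving index.html out of the nginx web root via `kubectl exec`, then verifies the StatefulSet controller halts scaling while any pod is unhealthy. A minimal sketch of the kind of readiness probe this trick defeats (the exact probe used by the e2e framework may differ; path and timings here are illustrative):

```yaml
# Illustrative readiness probe for an nginx-based StatefulSet pod.
# Moving /usr/share/nginx/html/index.html to /tmp makes the GET fail,
# marking the pod NotReady and halting ordered scaling.
readinessProbe:
  httpGet:
    path: /index.html
    port: 80
  initialDelaySeconds: 5
  periodSeconds: 3
```

Restoring the file (the later `mv -v /tmp/index.html /usr/share/nginx/html/` commands) lets the probe pass again, after which the controller scales the set down in reverse ordinal order, as the log verifies.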
SSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:58:44.119: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-6340
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-6340
STEP: Creating statefulset with conflicting port in namespace statefulset-6340
STEP: Waiting until pod test-pod starts running in namespace statefulset-6340

STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-6340
Feb  5 14:58:54.374: INFO: Observed stateful pod in namespace: statefulset-6340, name: ss-0, uid: 105864b0-7669-4c70-bad9-68296f90245c, status phase: Pending. Waiting for statefulset controller to delete.
Feb  5 14:58:56.501: INFO: Observed stateful pod in namespace: statefulset-6340, name: ss-0, uid: 105864b0-7669-4c70-bad9-68296f90245c, status phase: Failed. Waiting for statefulset controller to delete.
Feb  5 14:58:56.534: INFO: Observed stateful pod in namespace: statefulset-6340, name: ss-0, uid: 105864b0-7669-4c70-bad9-68296f90245c, status phase: Failed. Waiting for statefulset controller to delete.
Feb  5 14:58:56.542: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-6340
STEP: Removing pod with conflicting port in namespace statefulset-6340
STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-6340 and enters the running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Feb  5 14:59:06.704: INFO: Deleting all statefulset in ns statefulset-6340
Feb  5 14:59:06.711: INFO: Scaling statefulset ss to 0
Feb  5 14:59:16.781: INFO: Waiting for statefulset status.replicas updated to 0
Feb  5 14:59:16.787: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:59:16.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6340" for this suite.
Feb  5 14:59:22.840: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:59:22.984: INFO: namespace statefulset-6340 deletion completed in 6.171193902s

• [SLOW TEST:38.866 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
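The recreate-evicted test above provokes the failure by pre-creating a pod that occupies a host port the stateful pod also needs on the same node, so the stateful pod enters Failed and the controller must delete and recreate it. A hedged sketch of such a conflicting-port pod (the name `test-pod` appears in the log; the image and port number are illustrative assumptions, not taken from the test source):

```yaml
# Hypothetical pod claiming a hostPort. A StatefulSet pod requesting the
# same hostPort on the same node cannot bind it and fails; the StatefulSet
# controller then deletes and recreates the failed stateful pod, which is
# exactly the delete/recreate cycle the log observes for ss-0.
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
  - name: conflict
    image: nginx:1.14-alpine
    ports:
    - containerPort: 80
      hostPort: 21017   # illustrative port value
```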
SS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:59:22.985: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb  5 14:59:23.084: INFO: Waiting up to 5m0s for pod "downward-api-f4517157-7c93-432a-b5e1-3f1c2433028c" in namespace "downward-api-2589" to be "success or failure"
Feb  5 14:59:23.093: INFO: Pod "downward-api-f4517157-7c93-432a-b5e1-3f1c2433028c": Phase="Pending", Reason="", readiness=false. Elapsed: 9.603203ms
Feb  5 14:59:25.106: INFO: Pod "downward-api-f4517157-7c93-432a-b5e1-3f1c2433028c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022508295s
Feb  5 14:59:27.114: INFO: Pod "downward-api-f4517157-7c93-432a-b5e1-3f1c2433028c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030570249s
Feb  5 14:59:29.125: INFO: Pod "downward-api-f4517157-7c93-432a-b5e1-3f1c2433028c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041638647s
Feb  5 14:59:31.142: INFO: Pod "downward-api-f4517157-7c93-432a-b5e1-3f1c2433028c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.058159169s
STEP: Saw pod success
Feb  5 14:59:31.142: INFO: Pod "downward-api-f4517157-7c93-432a-b5e1-3f1c2433028c" satisfied condition "success or failure"
Feb  5 14:59:31.145: INFO: Trying to get logs from node iruya-node pod downward-api-f4517157-7c93-432a-b5e1-3f1c2433028c container dapi-container: 
STEP: delete the pod
Feb  5 14:59:31.230: INFO: Waiting for pod downward-api-f4517157-7c93-432a-b5e1-3f1c2433028c to disappear
Feb  5 14:59:31.234: INFO: Pod downward-api-f4517157-7c93-432a-b5e1-3f1c2433028c no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 14:59:31.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2589" for this suite.
Feb  5 14:59:37.323: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 14:59:37.463: INFO: namespace downward-api-2589 deletion completed in 6.222449641s

• [SLOW TEST:14.479 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 14:59:37.464: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  5 14:59:37.604: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Feb  5 14:59:37.641: INFO: Number of nodes with available pods: 0
Feb  5 14:59:37.641: INFO: Node iruya-node is running more than one daemon pod
Feb  5 14:59:39.133: INFO: Number of nodes with available pods: 0
Feb  5 14:59:39.133: INFO: Node iruya-node is running more than one daemon pod
Feb  5 14:59:39.795: INFO: Number of nodes with available pods: 0
Feb  5 14:59:39.795: INFO: Node iruya-node is running more than one daemon pod
Feb  5 14:59:40.888: INFO: Number of nodes with available pods: 0
Feb  5 14:59:40.888: INFO: Node iruya-node is running more than one daemon pod
Feb  5 14:59:41.670: INFO: Number of nodes with available pods: 0
Feb  5 14:59:41.670: INFO: Node iruya-node is running more than one daemon pod
Feb  5 14:59:42.672: INFO: Number of nodes with available pods: 0
Feb  5 14:59:42.672: INFO: Node iruya-node is running more than one daemon pod
Feb  5 14:59:44.945: INFO: Number of nodes with available pods: 0
Feb  5 14:59:44.945: INFO: Node iruya-node is running more than one daemon pod
Feb  5 14:59:45.730: INFO: Number of nodes with available pods: 0
Feb  5 14:59:45.730: INFO: Node iruya-node is running more than one daemon pod
Feb  5 14:59:46.774: INFO: Number of nodes with available pods: 0
Feb  5 14:59:46.774: INFO: Node iruya-node is running more than one daemon pod
Feb  5 14:59:47.656: INFO: Number of nodes with available pods: 0
Feb  5 14:59:47.656: INFO: Node iruya-node is running more than one daemon pod
Feb  5 14:59:48.668: INFO: Number of nodes with available pods: 2
Feb  5 14:59:48.668: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Feb  5 14:59:48.795: INFO: Wrong image for pod: daemon-set-5nk4k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  5 14:59:48.795: INFO: Wrong image for pod: daemon-set-gnd5v. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  5 14:59:49.886: INFO: Wrong image for pod: daemon-set-5nk4k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  5 14:59:49.886: INFO: Wrong image for pod: daemon-set-gnd5v. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  5 14:59:50.858: INFO: Wrong image for pod: daemon-set-5nk4k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  5 14:59:50.858: INFO: Wrong image for pod: daemon-set-gnd5v. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  5 14:59:51.879: INFO: Wrong image for pod: daemon-set-5nk4k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  5 14:59:51.879: INFO: Wrong image for pod: daemon-set-gnd5v. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  5 14:59:52.917: INFO: Wrong image for pod: daemon-set-5nk4k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  5 14:59:52.917: INFO: Wrong image for pod: daemon-set-gnd5v. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  5 14:59:53.861: INFO: Wrong image for pod: daemon-set-5nk4k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  5 14:59:53.862: INFO: Wrong image for pod: daemon-set-gnd5v. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  5 14:59:54.858: INFO: Wrong image for pod: daemon-set-5nk4k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  5 14:59:54.858: INFO: Wrong image for pod: daemon-set-gnd5v. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  5 14:59:54.858: INFO: Pod daemon-set-gnd5v is not available
Feb  5 14:59:55.857: INFO: Wrong image for pod: daemon-set-5nk4k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  5 14:59:55.857: INFO: Pod daemon-set-s6r25 is not available
Feb  5 14:59:56.857: INFO: Wrong image for pod: daemon-set-5nk4k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  5 14:59:56.857: INFO: Pod daemon-set-s6r25 is not available
Feb  5 14:59:57.859: INFO: Wrong image for pod: daemon-set-5nk4k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  5 14:59:57.859: INFO: Pod daemon-set-s6r25 is not available
Feb  5 14:59:58.857: INFO: Wrong image for pod: daemon-set-5nk4k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  5 14:59:58.857: INFO: Pod daemon-set-s6r25 is not available
Feb  5 14:59:59.874: INFO: Wrong image for pod: daemon-set-5nk4k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  5 14:59:59.874: INFO: Pod daemon-set-s6r25 is not available
Feb  5 15:00:00.867: INFO: Wrong image for pod: daemon-set-5nk4k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  5 15:00:00.867: INFO: Pod daemon-set-s6r25 is not available
Feb  5 15:00:01.859: INFO: Wrong image for pod: daemon-set-5nk4k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  5 15:00:01.859: INFO: Pod daemon-set-s6r25 is not available
Feb  5 15:00:03.464: INFO: Wrong image for pod: daemon-set-5nk4k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  5 15:00:03.859: INFO: Wrong image for pod: daemon-set-5nk4k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  5 15:00:04.954: INFO: Wrong image for pod: daemon-set-5nk4k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  5 15:00:05.862: INFO: Wrong image for pod: daemon-set-5nk4k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  5 15:00:06.858: INFO: Wrong image for pod: daemon-set-5nk4k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  5 15:00:06.858: INFO: Pod daemon-set-5nk4k is not available
Feb  5 15:00:07.856: INFO: Wrong image for pod: daemon-set-5nk4k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  5 15:00:07.856: INFO: Pod daemon-set-5nk4k is not available
Feb  5 15:00:08.876: INFO: Wrong image for pod: daemon-set-5nk4k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  5 15:00:08.877: INFO: Pod daemon-set-5nk4k is not available
Feb  5 15:00:09.870: INFO: Wrong image for pod: daemon-set-5nk4k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  5 15:00:09.870: INFO: Pod daemon-set-5nk4k is not available
Feb  5 15:00:10.857: INFO: Wrong image for pod: daemon-set-5nk4k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  5 15:00:10.857: INFO: Pod daemon-set-5nk4k is not available
Feb  5 15:00:11.862: INFO: Wrong image for pod: daemon-set-5nk4k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  5 15:00:11.862: INFO: Pod daemon-set-5nk4k is not available
Feb  5 15:00:12.855: INFO: Wrong image for pod: daemon-set-5nk4k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  5 15:00:12.855: INFO: Pod daemon-set-5nk4k is not available
Feb  5 15:00:13.869: INFO: Wrong image for pod: daemon-set-5nk4k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  5 15:00:13.869: INFO: Pod daemon-set-5nk4k is not available
Feb  5 15:00:14.864: INFO: Wrong image for pod: daemon-set-5nk4k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  5 15:00:14.864: INFO: Pod daemon-set-5nk4k is not available
Feb  5 15:00:15.858: INFO: Wrong image for pod: daemon-set-5nk4k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  5 15:00:15.858: INFO: Pod daemon-set-5nk4k is not available
Feb  5 15:00:16.858: INFO: Wrong image for pod: daemon-set-5nk4k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  5 15:00:16.858: INFO: Pod daemon-set-5nk4k is not available
Feb  5 15:00:18.322: INFO: Wrong image for pod: daemon-set-5nk4k. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  5 15:00:18.322: INFO: Pod daemon-set-5nk4k is not available
Feb  5 15:00:18.857: INFO: Pod daemon-set-6n7zx is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Feb  5 15:00:18.873: INFO: Number of nodes with available pods: 1
Feb  5 15:00:18.873: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  5 15:00:19.898: INFO: Number of nodes with available pods: 1
Feb  5 15:00:19.898: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  5 15:00:20.892: INFO: Number of nodes with available pods: 1
Feb  5 15:00:20.892: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  5 15:00:21.889: INFO: Number of nodes with available pods: 1
Feb  5 15:00:21.889: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  5 15:00:23.244: INFO: Number of nodes with available pods: 1
Feb  5 15:00:23.245: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  5 15:00:23.970: INFO: Number of nodes with available pods: 1
Feb  5 15:00:23.970: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  5 15:00:24.919: INFO: Number of nodes with available pods: 1
Feb  5 15:00:24.919: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  5 15:00:25.892: INFO: Number of nodes with available pods: 2
Feb  5 15:00:25.893: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6823, will wait for the garbage collector to delete the pods
Feb  5 15:00:25.982: INFO: Deleting DaemonSet.extensions daemon-set took: 9.725979ms
Feb  5 15:00:26.282: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.281083ms
Feb  5 15:00:37.892: INFO: Number of nodes with available pods: 0
Feb  5 15:00:37.892: INFO: Number of running nodes: 0, number of available pods: 0
Feb  5 15:00:37.897: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6823/daemonsets","resourceVersion":"23209446"},"items":null}

Feb  5 15:00:37.901: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6823/pods","resourceVersion":"23209446"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 15:00:37.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-6823" for this suite.
Feb  5 15:00:43.971: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 15:00:44.117: INFO: namespace daemonsets-6823 deletion completed in 6.170028632s

• [SLOW TEST:66.653 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
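The "Wrong image for pod" polling above is the test waiting for a RollingUpdate to replace each daemon pod's image. The flow can be reproduced manually with a sketch like the following (the manifest field values beyond the two images named in the log are illustrative, not taken from this run):

```shell
# Hypothetical sketch: create a DaemonSet with the RollingUpdate strategy,
# then change its image and wait for the rollout, as the test above does.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set
  updateStrategy:
    type: RollingUpdate        # replace pods node by node on spec changes
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
EOF

# Trigger the update to the image the log says was expected, then wait,
# mirroring the "Wrong image for pod" / "is not available" polling above.
kubectl set image daemonset/daemon-set app=gcr.io/kubernetes-e2e-test-images/redis:1.0
kubectl rollout status daemonset/daemon-set
```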
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 15:00:44.117: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 15:01:16.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-7201" for this suite.
Feb  5 15:01:22.494: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 15:01:22.651: INFO: namespace namespaces-7201 deletion completed in 6.194698367s
STEP: Destroying namespace "nsdeletetest-5095" for this suite.
Feb  5 15:01:22.654: INFO: Namespace nsdeletetest-5095 was already deleted
STEP: Destroying namespace "nsdeletetest-6874" for this suite.
Feb  5 15:01:28.732: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 15:01:28.871: INFO: namespace nsdeletetest-6874 deletion completed in 6.217009061s

• [SLOW TEST:44.754 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
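The namespace test above relies on namespace deletion garbage-collecting all pods inside it. A rough manual equivalent (namespace and pod names are illustrative):

```shell
# Hypothetical sketch of the flow above: a pod is created in a namespace,
# the namespace is deleted, and after recreation no pods remain.
kubectl create namespace nsdeletetest
kubectl run test-pod --image=docker.io/library/nginx:1.14-alpine \
  --restart=Never -n nsdeletetest
kubectl wait pod/test-pod -n nsdeletetest --for=condition=Ready --timeout=120s

# Deleting the namespace removes its pods along with it.
kubectl delete namespace nsdeletetest --wait=true

# Recreate and verify the namespace is empty.
kubectl create namespace nsdeletetest
kubectl get pods -n nsdeletetest
```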
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 15:01:28.872: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292
STEP: creating an rc
Feb  5 15:01:28.951: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3881'
Feb  5 15:01:29.353: INFO: stderr: ""
Feb  5 15:01:29.353: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Waiting for Redis master to start.
Feb  5 15:01:30.370: INFO: Selector matched 1 pods for map[app:redis]
Feb  5 15:01:30.370: INFO: Found 0 / 1
Feb  5 15:01:31.371: INFO: Selector matched 1 pods for map[app:redis]
Feb  5 15:01:31.371: INFO: Found 0 / 1
Feb  5 15:01:32.365: INFO: Selector matched 1 pods for map[app:redis]
Feb  5 15:01:32.365: INFO: Found 0 / 1
Feb  5 15:01:33.374: INFO: Selector matched 1 pods for map[app:redis]
Feb  5 15:01:33.374: INFO: Found 0 / 1
Feb  5 15:01:34.376: INFO: Selector matched 1 pods for map[app:redis]
Feb  5 15:01:34.376: INFO: Found 0 / 1
Feb  5 15:01:35.368: INFO: Selector matched 1 pods for map[app:redis]
Feb  5 15:01:35.368: INFO: Found 0 / 1
Feb  5 15:01:36.364: INFO: Selector matched 1 pods for map[app:redis]
Feb  5 15:01:36.364: INFO: Found 0 / 1
Feb  5 15:01:37.375: INFO: Selector matched 1 pods for map[app:redis]
Feb  5 15:01:37.375: INFO: Found 0 / 1
Feb  5 15:01:38.370: INFO: Selector matched 1 pods for map[app:redis]
Feb  5 15:01:38.370: INFO: Found 1 / 1
Feb  5 15:01:38.370: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Feb  5 15:01:38.377: INFO: Selector matched 1 pods for map[app:redis]
Feb  5 15:01:38.377: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for matching strings
Feb  5 15:01:38.378: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-4fbk6 redis-master --namespace=kubectl-3881'
Feb  5 15:01:38.567: INFO: stderr: ""
Feb  5 15:01:38.568: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 05 Feb 15:01:36.301 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 05 Feb 15:01:36.302 # Server started, Redis version 3.2.12\n1:M 05 Feb 15:01:36.302 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 05 Feb 15:01:36.302 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Feb  5 15:01:38.568: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-4fbk6 redis-master --namespace=kubectl-3881 --tail=1'
Feb  5 15:01:38.715: INFO: stderr: ""
Feb  5 15:01:38.715: INFO: stdout: "1:M 05 Feb 15:01:36.302 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Feb  5 15:01:38.716: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-4fbk6 redis-master --namespace=kubectl-3881 --limit-bytes=1'
Feb  5 15:01:38.843: INFO: stderr: ""
Feb  5 15:01:38.843: INFO: stdout: " "
STEP: exposing timestamps
Feb  5 15:01:38.843: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-4fbk6 redis-master --namespace=kubectl-3881 --tail=1 --timestamps'
Feb  5 15:01:39.004: INFO: stderr: ""
Feb  5 15:01:39.004: INFO: stdout: "2020-02-05T15:01:36.30429444Z 1:M 05 Feb 15:01:36.302 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Feb  5 15:01:41.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-4fbk6 redis-master --namespace=kubectl-3881 --since=1s'
Feb  5 15:01:41.704: INFO: stderr: ""
Feb  5 15:01:41.704: INFO: stdout: ""
Feb  5 15:01:41.704: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-4fbk6 redis-master --namespace=kubectl-3881 --since=24h'
Feb  5 15:01:41.971: INFO: stderr: ""
Feb  5 15:01:41.971: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 05 Feb 15:01:36.301 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 05 Feb 15:01:36.302 # Server started, Redis version 3.2.12\n1:M 05 Feb 15:01:36.302 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 05 Feb 15:01:36.302 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
STEP: using delete to clean up resources
Feb  5 15:01:41.972: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3881'
Feb  5 15:01:42.127: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  5 15:01:42.127: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Feb  5 15:01:42.127: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-3881'
Feb  5 15:01:42.294: INFO: stderr: "No resources found.\n"
Feb  5 15:01:42.294: INFO: stdout: ""
Feb  5 15:01:42.295: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-3881 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb  5 15:01:42.513: INFO: stderr: ""
Feb  5 15:01:42.513: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 15:01:42.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3881" for this suite.
Feb  5 15:02:04.543: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 15:02:04.648: INFO: namespace kubectl-3881 deletion completed in 22.125081297s

• [SLOW TEST:35.776 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
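The test above exercises the main `kubectl logs` filtering flags in sequence. Collected in one place (pod, container, and namespace names are the ones from this run and will differ elsewhere):

```shell
# Log-retrieval variants exercised by the test above.
kubectl logs redis-master-4fbk6 redis-master -n kubectl-3881                       # full log
kubectl logs redis-master-4fbk6 redis-master -n kubectl-3881 --tail=1              # last line only
kubectl logs redis-master-4fbk6 redis-master -n kubectl-3881 --limit-bytes=1       # first byte only
kubectl logs redis-master-4fbk6 redis-master -n kubectl-3881 --tail=1 --timestamps # RFC3339 timestamp prefix
kubectl logs redis-master-4fbk6 redis-master -n kubectl-3881 --since=1s            # empty if the pod was idle
kubectl logs redis-master-4fbk6 redis-master -n kubectl-3881 --since=24h           # everything in the window
```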
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 15:02:04.648: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating api versions
Feb  5 15:02:04.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Feb  5 15:02:04.923: INFO: stderr: ""
Feb  5 15:02:04.923: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 15:02:04.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4588" for this suite.
Feb  5 15:02:10.958: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 15:02:11.068: INFO: namespace kubectl-4588 deletion completed in 6.133984131s

• [SLOW TEST:6.420 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
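The api-versions check above boils down to a single command whose output must contain the core `v1` group-version, as seen on the last line of the stdout dump:

```shell
# Manual equivalent of the test's assertion: the core "v1" group-version
# must appear in the server's advertised API versions.
kubectl api-versions | grep -x 'v1' && echo "v1 is available"
```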
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 15:02:11.068: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 15:02:19.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1597" for this suite.
Feb  5 15:02:25.207: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 15:02:25.315: INFO: namespace kubelet-test-1597 deletion completed in 6.127271266s

• [SLOW TEST:14.246 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
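The Kubelet test above schedules a container whose command always fails and then asserts that its container status carries a terminated reason. A sketch of the same idea (pod name, image tag, and command are illustrative assumptions):

```shell
# Hypothetical sketch: a pod whose command exits non-zero, then inspect
# the terminated state the test asserts on.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: bin-false
spec:
  restartPolicy: Never
  containers:
  - name: bin-false
    image: docker.io/library/busybox:1.29
    command: ["/bin/false"]    # fails immediately
EOF

# Once the container has terminated, its status records a reason
# (typically "Error" for a non-zero exit).
kubectl get pod bin-false \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}'
```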
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 15:02:25.315: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating Pod
STEP: Waiting for the pod to be running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Feb  5 15:02:37.473: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-197fb0a5-d760-4096-89f3-a5fcf9d5ef8d -c busybox-main-container --namespace=emptydir-5335 -- cat /usr/share/volumeshare/shareddata.txt'
Feb  5 15:02:38.110: INFO: stderr: "I0205 15:02:37.749134    3558 log.go:172] (0xc0009840b0) (0xc00087e0a0) Create stream\nI0205 15:02:37.749448    3558 log.go:172] (0xc0009840b0) (0xc00087e0a0) Stream added, broadcasting: 1\nI0205 15:02:37.762500    3558 log.go:172] (0xc0009840b0) Reply frame received for 1\nI0205 15:02:37.762594    3558 log.go:172] (0xc0009840b0) (0xc00087e140) Create stream\nI0205 15:02:37.762622    3558 log.go:172] (0xc0009840b0) (0xc00087e140) Stream added, broadcasting: 3\nI0205 15:02:37.764712    3558 log.go:172] (0xc0009840b0) Reply frame received for 3\nI0205 15:02:37.764758    3558 log.go:172] (0xc0009840b0) (0xc000a2a000) Create stream\nI0205 15:02:37.764783    3558 log.go:172] (0xc0009840b0) (0xc000a2a000) Stream added, broadcasting: 5\nI0205 15:02:37.766312    3558 log.go:172] (0xc0009840b0) Reply frame received for 5\nI0205 15:02:37.904263    3558 log.go:172] (0xc0009840b0) Data frame received for 3\nI0205 15:02:37.904318    3558 log.go:172] (0xc00087e140) (3) Data frame handling\nI0205 15:02:37.904336    3558 log.go:172] (0xc00087e140) (3) Data frame sent\nI0205 15:02:38.093319    3558 log.go:172] (0xc0009840b0) (0xc00087e140) Stream removed, broadcasting: 3\nI0205 15:02:38.093443    3558 log.go:172] (0xc0009840b0) Data frame received for 1\nI0205 15:02:38.093469    3558 log.go:172] (0xc00087e0a0) (1) Data frame handling\nI0205 15:02:38.093488    3558 log.go:172] (0xc00087e0a0) (1) Data frame sent\nI0205 15:02:38.093532    3558 log.go:172] (0xc0009840b0) (0xc000a2a000) Stream removed, broadcasting: 5\nI0205 15:02:38.093655    3558 log.go:172] (0xc0009840b0) (0xc00087e0a0) Stream removed, broadcasting: 1\nI0205 15:02:38.093681    3558 log.go:172] (0xc0009840b0) Go away received\nI0205 15:02:38.095055    3558 log.go:172] (0xc0009840b0) (0xc00087e0a0) Stream removed, broadcasting: 1\nI0205 15:02:38.095171    3558 log.go:172] (0xc0009840b0) (0xc00087e140) Stream removed, broadcasting: 3\nI0205 15:02:38.095195    3558 log.go:172] (0xc0009840b0) (0xc000a2a000) Stream removed, broadcasting: 5\n"
Feb  5 15:02:38.111: INFO: stdout: "Hello from the busy-box sub-container\n"
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 15:02:38.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5335" for this suite.
Feb  5 15:02:44.191: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 15:02:44.348: INFO: namespace emptydir-5335 deletion completed in 6.227292944s

• [SLOW TEST:19.033 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
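The emptyDir test above has two containers in one pod share a volume: one writes `shareddata.txt`, the other reads it back via `kubectl exec`. A minimal sketch of that pattern, reusing the paths and container names visible in the log (images and commands are illustrative assumptions):

```shell
# Hypothetical sketch: two containers mount the same emptyDir; the
# sub-container writes a file and the main container reads it.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-sharedvolume
spec:
  volumes:
  - name: shared-data
    emptyDir: {}               # scratch volume shared within the pod
  containers:
  - name: busybox-main-container
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/volumeshare
  - name: busybox-sub-container
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c",
      "echo 'Hello from the busy-box sub-container' > /usr/share/volumeshare/shareddata.txt && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/volumeshare
EOF

# Read the file through the other container, as the test does:
kubectl exec pod-sharedvolume -c busybox-main-container -- \
  cat /usr/share/volumeshare/shareddata.txt
```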
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 15:02:44.348: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-60872ab7-ff59-4e57-918d-d167dd22a46b
STEP: Creating a pod to test consume secrets
Feb  5 15:02:44.500: INFO: Waiting up to 5m0s for pod "pod-secrets-4817ac9f-2e98-41af-8021-7f349491aeb2" in namespace "secrets-315" to be "success or failure"
Feb  5 15:02:44.552: INFO: Pod "pod-secrets-4817ac9f-2e98-41af-8021-7f349491aeb2": Phase="Pending", Reason="", readiness=false. Elapsed: 51.94521ms
Feb  5 15:02:46.572: INFO: Pod "pod-secrets-4817ac9f-2e98-41af-8021-7f349491aeb2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072285392s
Feb  5 15:02:48.583: INFO: Pod "pod-secrets-4817ac9f-2e98-41af-8021-7f349491aeb2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.083558428s
Feb  5 15:02:50.607: INFO: Pod "pod-secrets-4817ac9f-2e98-41af-8021-7f349491aeb2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.10728044s
Feb  5 15:02:52.618: INFO: Pod "pod-secrets-4817ac9f-2e98-41af-8021-7f349491aeb2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.118102704s
Feb  5 15:02:54.625: INFO: Pod "pod-secrets-4817ac9f-2e98-41af-8021-7f349491aeb2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.125526829s
Feb  5 15:02:56.632: INFO: Pod "pod-secrets-4817ac9f-2e98-41af-8021-7f349491aeb2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.132656901s
STEP: Saw pod success
Feb  5 15:02:56.633: INFO: Pod "pod-secrets-4817ac9f-2e98-41af-8021-7f349491aeb2" satisfied condition "success or failure"
Feb  5 15:02:56.636: INFO: Trying to get logs from node iruya-node pod pod-secrets-4817ac9f-2e98-41af-8021-7f349491aeb2 container secret-volume-test: 
STEP: delete the pod
Feb  5 15:02:56.717: INFO: Waiting for pod pod-secrets-4817ac9f-2e98-41af-8021-7f349491aeb2 to disappear
Feb  5 15:02:56.724: INFO: Pod pod-secrets-4817ac9f-2e98-41af-8021-7f349491aeb2 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 15:02:56.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-315" for this suite.
Feb  5 15:03:02.754: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 15:03:02.839: INFO: namespace secrets-315 deletion completed in 6.106369993s

• [SLOW TEST:18.491 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
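The Secrets test above mounts a secret volume with an explicit `defaultMode` and checks that the projected files carry those permissions. A sketch of the setup (secret name, key, image, and mount path are illustrative assumptions, not from this run):

```shell
# Hypothetical sketch: mount a Secret with defaultMode set and print the
# effective file mode the test verifies.
kubectl create secret generic secret-test --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: docker.io/library/busybox:1.29
    # stat follows the projected symlink and reports the file mode;
    # with defaultMode 0400 it should reflect owner-read-only.
    command: ["/bin/sh", "-c", "stat -c '%a' /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test
      defaultMode: 0400
EOF
kubectl logs pod-secrets secret-volume-test
```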
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 15:03:02.839: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  5 15:03:02.995: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0a2b0208-f876-44ec-9e11-2fb8f1195996" in namespace "projected-5090" to be "success or failure"
Feb  5 15:03:03.034: INFO: Pod "downwardapi-volume-0a2b0208-f876-44ec-9e11-2fb8f1195996": Phase="Pending", Reason="", readiness=false. Elapsed: 38.927249ms
Feb  5 15:03:05.042: INFO: Pod "downwardapi-volume-0a2b0208-f876-44ec-9e11-2fb8f1195996": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046535894s
Feb  5 15:03:07.055: INFO: Pod "downwardapi-volume-0a2b0208-f876-44ec-9e11-2fb8f1195996": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059931038s
Feb  5 15:03:09.077: INFO: Pod "downwardapi-volume-0a2b0208-f876-44ec-9e11-2fb8f1195996": Phase="Pending", Reason="", readiness=false. Elapsed: 6.082128519s
Feb  5 15:03:11.087: INFO: Pod "downwardapi-volume-0a2b0208-f876-44ec-9e11-2fb8f1195996": Phase="Pending", Reason="", readiness=false. Elapsed: 8.09132427s
Feb  5 15:03:13.094: INFO: Pod "downwardapi-volume-0a2b0208-f876-44ec-9e11-2fb8f1195996": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.098661608s
STEP: Saw pod success
Feb  5 15:03:13.094: INFO: Pod "downwardapi-volume-0a2b0208-f876-44ec-9e11-2fb8f1195996" satisfied condition "success or failure"
Feb  5 15:03:13.098: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-0a2b0208-f876-44ec-9e11-2fb8f1195996 container client-container: 
STEP: delete the pod
Feb  5 15:03:13.181: INFO: Waiting for pod downwardapi-volume-0a2b0208-f876-44ec-9e11-2fb8f1195996 to disappear
Feb  5 15:03:13.197: INFO: Pod downwardapi-volume-0a2b0208-f876-44ec-9e11-2fb8f1195996 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 15:03:13.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5090" for this suite.
Feb  5 15:03:19.223: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 15:03:19.422: INFO: namespace projected-5090 deletion completed in 6.219294253s

• [SLOW TEST:16.583 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
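The Projected downwardAPI spec above verifies that a container's CPU request can be read back from a file. A sketch of the projected volume involved, assuming illustrative names and a hypothetical request value:

```yaml
# Hypothetical sketch: the container's CPU request is projected into a file
# through a projected volume's downwardAPI source via resourceFieldRef.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # illustrative
spec:
  containers:
  - name: client-container           # container name seen in the log
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m                    # illustrative value
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
```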
SSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 15:03:19.423: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Feb  5 15:03:19.517: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7707,SelfLink:/api/v1/namespaces/watch-7707/configmaps/e2e-watch-test-configmap-a,UID:59a42377-93e0-4864-bd46-8b37fb42561c,ResourceVersion:23209873,Generation:0,CreationTimestamp:2020-02-05 15:03:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb  5 15:03:19.518: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7707,SelfLink:/api/v1/namespaces/watch-7707/configmaps/e2e-watch-test-configmap-a,UID:59a42377-93e0-4864-bd46-8b37fb42561c,ResourceVersion:23209873,Generation:0,CreationTimestamp:2020-02-05 15:03:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Feb  5 15:03:29.541: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7707,SelfLink:/api/v1/namespaces/watch-7707/configmaps/e2e-watch-test-configmap-a,UID:59a42377-93e0-4864-bd46-8b37fb42561c,ResourceVersion:23209888,Generation:0,CreationTimestamp:2020-02-05 15:03:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Feb  5 15:03:29.542: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7707,SelfLink:/api/v1/namespaces/watch-7707/configmaps/e2e-watch-test-configmap-a,UID:59a42377-93e0-4864-bd46-8b37fb42561c,ResourceVersion:23209888,Generation:0,CreationTimestamp:2020-02-05 15:03:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Feb  5 15:03:39.555: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7707,SelfLink:/api/v1/namespaces/watch-7707/configmaps/e2e-watch-test-configmap-a,UID:59a42377-93e0-4864-bd46-8b37fb42561c,ResourceVersion:23209902,Generation:0,CreationTimestamp:2020-02-05 15:03:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb  5 15:03:39.556: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7707,SelfLink:/api/v1/namespaces/watch-7707/configmaps/e2e-watch-test-configmap-a,UID:59a42377-93e0-4864-bd46-8b37fb42561c,ResourceVersion:23209902,Generation:0,CreationTimestamp:2020-02-05 15:03:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Feb  5 15:03:49.567: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7707,SelfLink:/api/v1/namespaces/watch-7707/configmaps/e2e-watch-test-configmap-a,UID:59a42377-93e0-4864-bd46-8b37fb42561c,ResourceVersion:23209915,Generation:0,CreationTimestamp:2020-02-05 15:03:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb  5 15:03:49.567: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7707,SelfLink:/api/v1/namespaces/watch-7707/configmaps/e2e-watch-test-configmap-a,UID:59a42377-93e0-4864-bd46-8b37fb42561c,ResourceVersion:23209915,Generation:0,CreationTimestamp:2020-02-05 15:03:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Feb  5 15:03:59.583: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-7707,SelfLink:/api/v1/namespaces/watch-7707/configmaps/e2e-watch-test-configmap-b,UID:7996bac8-2875-452f-ba11-44db07c5309f,ResourceVersion:23209929,Generation:0,CreationTimestamp:2020-02-05 15:03:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb  5 15:03:59.583: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-7707,SelfLink:/api/v1/namespaces/watch-7707/configmaps/e2e-watch-test-configmap-b,UID:7996bac8-2875-452f-ba11-44db07c5309f,ResourceVersion:23209929,Generation:0,CreationTimestamp:2020-02-05 15:03:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Feb  5 15:04:09.600: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-7707,SelfLink:/api/v1/namespaces/watch-7707/configmaps/e2e-watch-test-configmap-b,UID:7996bac8-2875-452f-ba11-44db07c5309f,ResourceVersion:23209943,Generation:0,CreationTimestamp:2020-02-05 15:03:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb  5 15:04:09.600: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-7707,SelfLink:/api/v1/namespaces/watch-7707/configmaps/e2e-watch-test-configmap-b,UID:7996bac8-2875-452f-ba11-44db07c5309f,ResourceVersion:23209943,Generation:0,CreationTimestamp:2020-02-05 15:03:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 15:04:19.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7707" for this suite.
Feb  5 15:04:25.654: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 15:04:25.775: INFO: namespace watch-7707 deletion completed in 6.1647188s

• [SLOW TEST:66.353 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
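The three watches in the spec above (label A, label B, label A-or-B) select on the `watch-this-configmap` label, which is why watchers A and A-or-B each receive the ADDED/MODIFIED/DELETED events logged for configmap A. The object behind those events, reconstructed from the log (name, namespace, and label match the dump above; presentation is illustrative):

```yaml
# ConfigMap created for watcher A, as shown in the ADDED event above.
apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-configmap-a
  namespace: watch-7707
  labels:
    watch-this-configmap: multiple-watchers-A   # watches filter on this label
data:
  mutation: "1"   # value after the first MODIFIED event
```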
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 15:04:25.776: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-8f8683bc-268f-4a44-a28e-cc1b483bf3e9 in namespace container-probe-4596
Feb  5 15:04:33.943: INFO: Started pod busybox-8f8683bc-268f-4a44-a28e-cc1b483bf3e9 in namespace container-probe-4596
STEP: checking the pod's current state and verifying that restartCount is present
Feb  5 15:04:33.954: INFO: Initial restart count of pod busybox-8f8683bc-268f-4a44-a28e-cc1b483bf3e9 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 15:08:35.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4596" for this suite.
Feb  5 15:08:42.147: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 15:08:42.295: INFO: namespace container-probe-4596 deletion completed in 6.295021196s

• [SLOW TEST:256.519 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
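In the probe spec above, the probed file exists for the whole four-minute observation window, so `cat /tmp/health` keeps succeeding and `restartCount` stays at 0. A hedged sketch of such a pod (args, delays, and periods are illustrative; only the `cat /tmp/health` command comes from the test name):

```yaml
# Hypothetical sketch: the health file is never removed, so the exec
# liveness probe always succeeds and the container is never restarted.
apiVersion: v1
kind: Pod
metadata:
  name: busybox-liveness-example     # illustrative
spec:
  containers:
  - name: busybox
    image: busybox
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 600"]   # illustrative
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
```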
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 15:08:42.296: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  5 15:08:42.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-9369'
Feb  5 15:08:44.678: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb  5 15:08:44.678: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562
Feb  5 15:08:46.764: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-9369'
Feb  5 15:08:46.987: INFO: stderr: ""
Feb  5 15:08:46.988: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 15:08:46.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9369" for this suite.
Feb  5 15:08:53.021: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 15:08:53.197: INFO: namespace kubectl-9369 deletion completed in 6.199431215s

• [SLOW TEST:10.901 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
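Note the stderr above: `kubectl run --generator=deployment/apps.v1` was already deprecated at this version. A declarative equivalent of that command as a Deployment manifest (the image is taken from the log; the labels and selector are illustrative, since generator-created labels are not shown):

```yaml
# Declarative equivalent of the deprecated generator-based `kubectl run`.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: e2e-test-nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      run: e2e-test-nginx-deployment   # illustrative label scheme
  template:
    metadata:
      labels:
        run: e2e-test-nginx-deployment
    spec:
      containers:
      - name: e2e-test-nginx-deployment
        image: docker.io/library/nginx:1.14-alpine
```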
SSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 15:08:53.197: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-0fc9dd16-970e-4809-88b2-bbeec1c4c65c
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 15:09:03.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7707" for this suite.
Feb  5 15:09:27.465: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 15:09:27.799: INFO: namespace configmap-7707 deletion completed in 24.404663715s

• [SLOW TEST:34.602 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
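The ConfigMap spec above checks that both `data` (text) and `binaryData` (base64-encoded bytes) keys surface correctly inside a mounted volume. A minimal sketch of such a ConfigMap, with illustrative keys and values:

```yaml
# Hypothetical sketch: binaryData values are base64-encoded in the object
# and appear as raw bytes in the mounted volume.
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-upd-example   # illustrative
data:
  data-1: value-1                    # plain text key
binaryData:
  dump.bin: eyJ4IjoxfQ==             # base64 of the raw bytes {"x":1}
```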
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 15:09:27.800: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-71502348-12b2-4591-ad78-a68609df7c8a
STEP: Creating a pod to test consume configMaps
Feb  5 15:09:27.942: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0dc33dad-ac16-4c73-aba3-bba94911c386" in namespace "projected-5893" to be "success or failure"
Feb  5 15:09:27.954: INFO: Pod "pod-projected-configmaps-0dc33dad-ac16-4c73-aba3-bba94911c386": Phase="Pending", Reason="", readiness=false. Elapsed: 11.819578ms
Feb  5 15:09:29.965: INFO: Pod "pod-projected-configmaps-0dc33dad-ac16-4c73-aba3-bba94911c386": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022613078s
Feb  5 15:09:31.984: INFO: Pod "pod-projected-configmaps-0dc33dad-ac16-4c73-aba3-bba94911c386": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041054789s
Feb  5 15:09:33.996: INFO: Pod "pod-projected-configmaps-0dc33dad-ac16-4c73-aba3-bba94911c386": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053413407s
Feb  5 15:09:36.004: INFO: Pod "pod-projected-configmaps-0dc33dad-ac16-4c73-aba3-bba94911c386": Phase="Pending", Reason="", readiness=false. Elapsed: 8.061399424s
Feb  5 15:09:38.010: INFO: Pod "pod-projected-configmaps-0dc33dad-ac16-4c73-aba3-bba94911c386": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.067174098s
STEP: Saw pod success
Feb  5 15:09:38.010: INFO: Pod "pod-projected-configmaps-0dc33dad-ac16-4c73-aba3-bba94911c386" satisfied condition "success or failure"
Feb  5 15:09:38.013: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-0dc33dad-ac16-4c73-aba3-bba94911c386 container projected-configmap-volume-test: 
STEP: delete the pod
Feb  5 15:09:38.076: INFO: Waiting for pod pod-projected-configmaps-0dc33dad-ac16-4c73-aba3-bba94911c386 to disappear
Feb  5 15:09:38.089: INFO: Pod pod-projected-configmaps-0dc33dad-ac16-4c73-aba3-bba94911c386 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 15:09:38.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5893" for this suite.
Feb  5 15:09:44.119: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 15:09:44.234: INFO: namespace projected-5893 deletion completed in 6.136412847s

• [SLOW TEST:16.434 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
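The Projected configMap spec above exercises two things at once: remapping ConfigMap keys to custom paths via `items`, and reading the files as a non-root user. A hedged sketch (all names, keys, and the UID are illustrative except the container name, which appears in the log):

```yaml
# Hypothetical sketch: a projected ConfigMap volume with key-to-path
# mappings, consumed by a pod running as a non-root UID.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example    # illustrative
spec:
  securityContext:
    runAsUser: 1000                         # non-root, illustrative UID
  containers:
  - name: projected-configmap-volume-test   # container name seen in the log
    image: busybox
    command: ["sh", "-c", "cat /etc/projected-configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-map   # illustrative
          items:
          - key: data-2
            path: path/to/data-2    # key remapped to a nested path
```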
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 15:09:44.236: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0205 15:10:15.592926       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  5 15:10:15.593: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 15:10:15.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6829" for this suite.
Feb  5 15:10:22.561: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 15:10:22.691: INFO: namespace gc-6829 deletion completed in 7.089977223s

• [SLOW TEST:38.456 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
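The orphaning behaviour verified above is driven by the options body sent with the Deployment DELETE request. A sketch of that request body, with the policy taken from the test name:

```yaml
# DeleteOptions sent with the DELETE call: the garbage collector must
# leave the dependent ReplicaSet behind instead of cascading the delete.
kind: DeleteOptions
apiVersion: v1
propagationPolicy: Orphan
```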
SSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 15:10:22.692: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  5 15:10:22.982: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 15:10:31.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-249" for this suite.
Feb  5 15:11:19.447: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 15:11:19.585: INFO: namespace pods-249 deletion completed in 48.161003727s

• [SLOW TEST:56.893 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  5 15:11:19.585: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  5 15:11:19.845: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d811cc63-9bc1-4158-a843-33cd71e52476" in namespace "downward-api-6577" to be "success or failure"
Feb  5 15:11:19.865: INFO: Pod "downwardapi-volume-d811cc63-9bc1-4158-a843-33cd71e52476": Phase="Pending", Reason="", readiness=false. Elapsed: 19.629131ms
Feb  5 15:11:22.127: INFO: Pod "downwardapi-volume-d811cc63-9bc1-4158-a843-33cd71e52476": Phase="Pending", Reason="", readiness=false. Elapsed: 2.282070171s
Feb  5 15:11:24.145: INFO: Pod "downwardapi-volume-d811cc63-9bc1-4158-a843-33cd71e52476": Phase="Pending", Reason="", readiness=false. Elapsed: 4.299658586s
Feb  5 15:11:26.155: INFO: Pod "downwardapi-volume-d811cc63-9bc1-4158-a843-33cd71e52476": Phase="Pending", Reason="", readiness=false. Elapsed: 6.310484506s
Feb  5 15:11:28.165: INFO: Pod "downwardapi-volume-d811cc63-9bc1-4158-a843-33cd71e52476": Phase="Pending", Reason="", readiness=false. Elapsed: 8.319729276s
Feb  5 15:11:30.176: INFO: Pod "downwardapi-volume-d811cc63-9bc1-4158-a843-33cd71e52476": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.331179938s
STEP: Saw pod success
Feb  5 15:11:30.176: INFO: Pod "downwardapi-volume-d811cc63-9bc1-4158-a843-33cd71e52476" satisfied condition "success or failure"
Feb  5 15:11:30.182: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-d811cc63-9bc1-4158-a843-33cd71e52476 container client-container: 
STEP: delete the pod
Feb  5 15:11:30.553: INFO: Waiting for pod downwardapi-volume-d811cc63-9bc1-4158-a843-33cd71e52476 to disappear
Feb  5 15:11:30.564: INFO: Pod downwardapi-volume-d811cc63-9bc1-4158-a843-33cd71e52476 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 15:11:30.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6577" for this suite.
Feb  5 15:11:36.601: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  5 15:11:36.714: INFO: namespace downward-api-6577 deletion completed in 6.141669658s

• [SLOW TEST:17.130 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
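In the Downward API spec above the container deliberately sets no CPU limit, so `limits.cpu` in the downward API volume falls back to the node's allocatable CPU. A hedged sketch (names and image are illustrative except the container name from the log):

```yaml
# Hypothetical sketch for the default-limit case: with no limits.cpu set,
# the projected value defaults to the node's allocatable CPU.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-limit-example   # illustrative
spec:
  containers:
  - name: client-container                 # container name seen in the log
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    # no resources.limits.cpu set, on purpose
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
```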
SS
Feb  5 15:11:36.715: INFO: Running AfterSuite actions on all nodes
Feb  5 15:11:36.715: INFO: Running AfterSuite actions on node 1
Feb  5 15:11:36.715: INFO: Skipping dumping logs from cluster

Ran 215 of 4412 Specs in 8122.307 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4197 Skipped
PASS