I0206 12:56:04.249045 8 e2e.go:243] Starting e2e run "8ec36a74-021e-4f2a-a3db-95518977ef3a" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1580993763 - Will randomize all specs
Will run 215 of 4412 specs

Feb 6 12:56:04.561: INFO: >>> kubeConfig: /root/.kube/config
Feb 6 12:56:04.566: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Feb 6 12:56:04.595: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Feb 6 12:56:04.633: INFO: 10 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Feb 6 12:56:04.633: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Feb 6 12:56:04.633: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Feb 6 12:56:04.640: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Feb 6 12:56:04.640: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Feb 6 12:56:04.640: INFO: e2e test version: v1.15.7
Feb 6 12:56:04.642: INFO: kube-apiserver version: v1.15.1
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 6 12:56:04.642: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
Feb 6 12:56:04.836: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0206 12:56:35.034827 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 6 12:56:35.034: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 6 12:56:35.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1065" for this suite.
Feb 6 12:56:41.076: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 6 12:56:41.210: INFO: namespace gc-1065 deletion completed in 6.172175246s

• [SLOW TEST:36.569 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
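The test above deletes a Deployment with deleteOptions.PropagationPolicy set to Orphan and then verifies the garbage collector leaves the dependent ReplicaSet alone. A minimal client-go sketch of that delete call, assuming a recent client-go with context-aware methods; the namespace "default" and name "my-deployment" are placeholders, not values from this run:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// DeletePropagationOrphan asks the garbage collector to strip the
	// ownerReferences from dependents instead of cascading the delete,
	// so the ReplicaSet (and its Pods) outlive the Deployment.
	orphan := metav1.DeletePropagationOrphan
	if err := cs.AppsV1().Deployments("default").Delete(
		context.TODO(), "my-deployment",
		metav1.DeleteOptions{PropagationPolicy: &orphan},
	); err != nil {
		panic(err)
	}
}

After such a delete, the e2e test waits 30 seconds and asserts the ReplicaSet still exists, which is the "wait for 30 seconds to see if the garbage collector mistakenly deletes the rs" step logged above.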
[k8s.io] Container Runtime blackbox test on terminated container
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 6 12:56:41.211: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 6 12:56:58.523: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 6 12:56:58.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4620" for this suite.
Feb 6 12:57:04.753: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 6 12:57:04.946: INFO: namespace container-runtime-4620 deletion completed in 6.254684853s

• [SLOW TEST:23.735 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
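The termination message "OK" matched above is read from the file at the container's terminationMessagePath. A sketch of a pod spec exercising the same fields, built with the k8s.io/api types; the image and names here are illustrative, not the ones the e2e test uses:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "busybox",
				Command: []string{"sh", "-c", "echo -n OK > /dev/termination-log"},
				// The kubelet reads the termination message from this file...
				TerminationMessagePath: "/dev/termination-log",
				// ...and FallbackToLogsOnError additionally falls back to the
				// tail of the container log when the file is empty and the
				// container exited with an error.
				TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
			}},
		},
	}
	out, err := yaml.Marshal(pod)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}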
[sig-storage] Subpath Atomic writer volumes
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 6 12:57:04.947: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-89wh
STEP: Creating a pod to test atomic-volume-subpath
Feb 6 12:57:05.102: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-89wh" in namespace "subpath-4638" to be "success or failure"
Feb 6 12:57:05.118: INFO: Pod "pod-subpath-test-configmap-89wh": Phase="Pending", Reason="", readiness=false. Elapsed: 16.416053ms
Feb 6 12:57:07.124: INFO: Pod "pod-subpath-test-configmap-89wh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022586851s
Feb 6 12:57:09.130: INFO: Pod "pod-subpath-test-configmap-89wh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028165636s
Feb 6 12:57:11.140: INFO: Pod "pod-subpath-test-configmap-89wh": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038167195s
Feb 6 12:57:13.148: INFO: Pod "pod-subpath-test-configmap-89wh": Phase="Pending", Reason="", readiness=false. Elapsed: 8.046324144s
Feb 6 12:57:15.156: INFO: Pod "pod-subpath-test-configmap-89wh": Phase="Running", Reason="", readiness=true. Elapsed: 10.0539675s
Feb 6 12:57:17.162: INFO: Pod "pod-subpath-test-configmap-89wh": Phase="Running", Reason="", readiness=true. Elapsed: 12.059986009s
Feb 6 12:57:19.170: INFO: Pod "pod-subpath-test-configmap-89wh": Phase="Running", Reason="", readiness=true. Elapsed: 14.068857422s
Feb 6 12:57:21.184: INFO: Pod "pod-subpath-test-configmap-89wh": Phase="Running", Reason="", readiness=true. Elapsed: 16.082399214s
Feb 6 12:57:23.194: INFO: Pod "pod-subpath-test-configmap-89wh": Phase="Running", Reason="", readiness=true. Elapsed: 18.092180186s
Feb 6 12:57:25.201: INFO: Pod "pod-subpath-test-configmap-89wh": Phase="Running", Reason="", readiness=true. Elapsed: 20.09959264s
Feb 6 12:57:27.209: INFO: Pod "pod-subpath-test-configmap-89wh": Phase="Running", Reason="", readiness=true. Elapsed: 22.107097705s
Feb 6 12:57:29.219: INFO: Pod "pod-subpath-test-configmap-89wh": Phase="Running", Reason="", readiness=true. Elapsed: 24.117026488s
Feb 6 12:57:31.225: INFO: Pod "pod-subpath-test-configmap-89wh": Phase="Running", Reason="", readiness=true. Elapsed: 26.12337099s
Feb 6 12:57:33.233: INFO: Pod "pod-subpath-test-configmap-89wh": Phase="Running", Reason="", readiness=true. Elapsed: 28.131721039s
Feb 6 12:57:35.241: INFO: Pod "pod-subpath-test-configmap-89wh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.139326514s
STEP: Saw pod success
Feb 6 12:57:35.241: INFO: Pod "pod-subpath-test-configmap-89wh" satisfied condition "success or failure"
Feb 6 12:57:35.245: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-89wh container test-container-subpath-configmap-89wh:
STEP: delete the pod
Feb 6 12:57:35.311: INFO: Waiting for pod pod-subpath-test-configmap-89wh to disappear
Feb 6 12:57:35.318: INFO: Pod pod-subpath-test-configmap-89wh no longer exists
STEP: Deleting pod pod-subpath-test-configmap-89wh
Feb 6 12:57:35.318: INFO: Deleting pod "pod-subpath-test-configmap-89wh" in namespace "subpath-4638"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 6 12:57:35.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4638" for this suite.
Feb 6 12:57:41.417: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 6 12:57:41.538: INFO: namespace subpath-4638 deletion completed in 6.198287735s

• [SLOW TEST:36.592 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
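"Atomic writer volumes" are volume types (ConfigMap, Secret, downward API, projected) that the kubelet updates atomically via symlink swaps; the test checks that a subPath mount into such a volume keeps working. A sketch of the shape of such a pod, with placeholder names ("my-config", "config.txt") rather than the generated ones from this run:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-subpath-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "config",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "my-config"},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "busybox",
				Command: []string{"cat", "/etc/app/config.txt"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "config",
					MountPath: "/etc/app/config.txt",
					// SubPath mounts a single entry of the volume instead of
					// the whole directory.
					SubPath: "config.txt",
				}},
			}},
		},
	}
	out, err := yaml.Marshal(pod)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}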
[sig-api-machinery] Watchers
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 6 12:57:41.539: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Feb 6 12:57:41.636: INFO: Got : ADDED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-7236,SelfLink:/api/v1/namespaces/watch-7236/configmaps/e2e-watch-test-watch-closed,UID:17d3791a-0bec-4579-ac70-be857f116b11,ResourceVersion:23314671,Generation:0,CreationTimestamp:2020-02-06 12:57:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 6 12:57:41.636: INFO: Got : MODIFIED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-7236,SelfLink:/api/v1/namespaces/watch-7236/configmaps/e2e-watch-test-watch-closed,UID:17d3791a-0bec-4579-ac70-be857f116b11,ResourceVersion:23314672,Generation:0,CreationTimestamp:2020-02-06 12:57:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Feb 6 12:57:41.652: INFO: Got : MODIFIED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-7236,SelfLink:/api/v1/namespaces/watch-7236/configmaps/e2e-watch-test-watch-closed,UID:17d3791a-0bec-4579-ac70-be857f116b11,ResourceVersion:23314673,Generation:0,CreationTimestamp:2020-02-06 12:57:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 6 12:57:41.653: INFO: Got : DELETED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-7236,SelfLink:/api/v1/namespaces/watch-7236/configmaps/e2e-watch-test-watch-closed,UID:17d3791a-0bec-4579-ac70-be857f116b11,ResourceVersion:23314674,Generation:0,CreationTimestamp:2020-02-06 12:57:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 6 12:57:41.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7236" for this suite.
Feb 6 12:57:47.676: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 6 12:57:47.833: INFO: namespace watch-7236 deletion completed in 6.172933493s

• [SLOW TEST:6.295 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
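The resourceVersions in the dumps above (23314671 through 23314674) are what make the restart work: a new watch started at the last observed version replays everything that happened while no watch was open. A sketch of that pattern with client-go, assuming a recent context-aware client; the namespace and the one-event read are simplifications:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	opts := metav1.ListOptions{
		LabelSelector: "watch-this-configmap=watch-closed-and-restarted",
	}

	// First watch: record the resourceVersion of the last event seen,
	// then close the watch.
	w1, err := cs.CoreV1().ConfigMaps("default").Watch(context.TODO(), opts)
	if err != nil {
		panic(err)
	}
	ev := <-w1.ResultChan()
	lastRV := ev.Object.(*corev1.ConfigMap).ResourceVersion
	w1.Stop()

	// Second watch: resuming from lastRV makes the apiserver replay every
	// change made after that version, even while we were not watching --
	// the MODIFIED and DELETED notifications the test expects.
	opts.ResourceVersion = lastRV
	w2, err := cs.CoreV1().ConfigMaps("default").Watch(context.TODO(), opts)
	if err != nil {
		panic(err)
	}
	for e := range w2.ResultChan() {
		fmt.Println("Got :", e.Type)
	}
}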
[k8s.io] Pods
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 6 12:57:47.833: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating pod
Feb 6 12:57:57.967: INFO: Pod pod-hostip-fd741d54-afa0-479e-8ce2-1f84c21b291e has hostIP: 10.96.3.65
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 6 12:57:57.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5970" for this suite.
Feb 6 12:58:19.995: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 6 12:58:20.101: INFO: namespace pods-5970 deletion completed in 22.130735235s

• [SLOW TEST:32.268 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
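The hostIP asserted above comes from the pod's status subresource. A minimal read of it with client-go, assuming a recent client; "default" and "my-pod" are placeholders:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pod, err := cs.CoreV1().Pods("default").Get(context.TODO(), "my-pod", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// status.hostIP stays empty until the pod is bound to a node; the e2e
	// test polls until it is populated (10.96.3.65 in the run above).
	fmt.Printf("Pod %s has hostIP: %s\n", pod.Name, pod.Status.HostIP)
}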
[sig-storage] Subpath Atomic writer volumes
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 6 12:58:20.101: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-projected-9vpg
STEP: Creating a pod to test atomic-volume-subpath
Feb 6 12:58:20.261: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-9vpg" in namespace "subpath-5163" to be "success or failure"
Feb 6 12:58:20.274: INFO: Pod "pod-subpath-test-projected-9vpg": Phase="Pending", Reason="", readiness=false. Elapsed: 12.298403ms
Feb 6 12:58:22.283: INFO: Pod "pod-subpath-test-projected-9vpg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021761426s
Feb 6 12:58:24.292: INFO: Pod "pod-subpath-test-projected-9vpg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03002874s
Feb 6 12:58:26.301: INFO: Pod "pod-subpath-test-projected-9vpg": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039081328s
Feb 6 12:58:28.307: INFO: Pod "pod-subpath-test-projected-9vpg": Phase="Pending", Reason="", readiness=false. Elapsed: 8.04551775s
Feb 6 12:58:30.315: INFO: Pod "pod-subpath-test-projected-9vpg": Phase="Running", Reason="", readiness=true. Elapsed: 10.053341096s
Feb 6 12:58:32.324: INFO: Pod "pod-subpath-test-projected-9vpg": Phase="Running", Reason="", readiness=true. Elapsed: 12.061988077s
Feb 6 12:58:34.331: INFO: Pod "pod-subpath-test-projected-9vpg": Phase="Running", Reason="", readiness=true. Elapsed: 14.069851411s
Feb 6 12:58:36.339: INFO: Pod "pod-subpath-test-projected-9vpg": Phase="Running", Reason="", readiness=true. Elapsed: 16.077679246s
Feb 6 12:58:38.345: INFO: Pod "pod-subpath-test-projected-9vpg": Phase="Running", Reason="", readiness=true. Elapsed: 18.08339118s
Feb 6 12:58:40.358: INFO: Pod "pod-subpath-test-projected-9vpg": Phase="Running", Reason="", readiness=true. Elapsed: 20.096850031s
Feb 6 12:58:42.375: INFO: Pod "pod-subpath-test-projected-9vpg": Phase="Running", Reason="", readiness=true. Elapsed: 22.113396749s
Feb 6 12:58:44.384: INFO: Pod "pod-subpath-test-projected-9vpg": Phase="Running", Reason="", readiness=true. Elapsed: 24.122820819s
Feb 6 12:58:46.396: INFO: Pod "pod-subpath-test-projected-9vpg": Phase="Running", Reason="", readiness=true. Elapsed: 26.134873556s
Feb 6 12:58:48.408: INFO: Pod "pod-subpath-test-projected-9vpg": Phase="Running", Reason="", readiness=true. Elapsed: 28.146202546s
Feb 6 12:58:50.417: INFO: Pod "pod-subpath-test-projected-9vpg": Phase="Running", Reason="", readiness=true. Elapsed: 30.154934815s
Feb 6 12:58:52.435: INFO: Pod "pod-subpath-test-projected-9vpg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.173482767s
STEP: Saw pod success
Feb 6 12:58:52.435: INFO: Pod "pod-subpath-test-projected-9vpg" satisfied condition "success or failure"
Feb 6 12:58:52.442: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-projected-9vpg container test-container-subpath-projected-9vpg:
STEP: delete the pod
Feb 6 12:58:52.561: INFO: Waiting for pod pod-subpath-test-projected-9vpg to disappear
Feb 6 12:58:52.660: INFO: Pod pod-subpath-test-projected-9vpg no longer exists
STEP: Deleting pod pod-subpath-test-projected-9vpg
Feb 6 12:58:52.660: INFO: Deleting pod "pod-subpath-test-projected-9vpg" in namespace "subpath-5163"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 6 12:58:52.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-5163" for this suite.
Feb 6 12:58:58.697: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 6 12:58:58.827: INFO: namespace subpath-5163 deletion completed in 6.157741818s

• [SLOW TEST:38.726 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
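This is the same subPath scenario as the configmap case above, but through a projected volume, which merges several sources into one directory. A sketch of the volume shape only, with placeholder object names:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-subpath-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "proj",
				VolumeSource: corev1.VolumeSource{
					// A projected volume combines configMaps, secrets,
					// downwardAPI and service account tokens in one mount;
					// like configMap volumes it is updated atomically.
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{
							{ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "my-config"},
							}},
							{Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "my-secret"},
							}},
						},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "busybox",
				Command: []string{"cat", "/etc/proj/config.txt"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "proj",
					MountPath: "/etc/proj/config.txt",
					SubPath:   "config.txt",
				}},
			}},
		},
	}
	out, err := yaml.Marshal(pod)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}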
[k8s.io] Probing container
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 6 12:58:58.827: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 6 12:59:59.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8811" for this suite.
Feb 6 13:00:21.091: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 6 13:00:21.285: INFO: namespace container-probe-8811 deletion completed in 22.225362631s

• [SLOW TEST:82.458 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
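The behavior asserted here is that a failing readiness probe only keeps the pod out of Ready (and out of Service endpoints); unlike a liveness probe it never triggers a restart, so restartCount stays 0 for the whole observation window. A sketch of such a pod; note the embedded probe field is named ProbeHandler in current k8s.io/api releases, while in releases contemporary with this log (v1.15) it was named Handler:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "never-ready-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "busybox",
				Command: []string{"sleep", "3600"},
				ReadinessProbe: &corev1.Probe{
					// /bin/false always exits non-zero, so the probe always
					// fails and the container is never marked Ready.
					ProbeHandler: corev1.ProbeHandler{
						Exec: &corev1.ExecAction{Command: []string{"/bin/false"}},
					},
					InitialDelaySeconds: 5,
					PeriodSeconds:       5,
				},
			}},
		},
	}
	out, err := yaml.Marshal(pod)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}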
[sig-storage] Secrets
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 6 13:00:21.285: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-bcf52289-10da-42a0-acb5-fc4e6e727bdc
STEP: Creating a pod to test consume secrets
Feb 6 13:00:21.563: INFO: Waiting up to 5m0s for pod "pod-secrets-cebd28dc-6faa-47d5-aa13-9997fb11739a" in namespace "secrets-584" to be "success or failure"
Feb 6 13:00:21.653: INFO: Pod "pod-secrets-cebd28dc-6faa-47d5-aa13-9997fb11739a": Phase="Pending", Reason="", readiness=false. Elapsed: 90.162229ms
Feb 6 13:00:23.668: INFO: Pod "pod-secrets-cebd28dc-6faa-47d5-aa13-9997fb11739a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.104490185s
Feb 6 13:00:25.681: INFO: Pod "pod-secrets-cebd28dc-6faa-47d5-aa13-9997fb11739a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.11748663s
Feb 6 13:00:27.691: INFO: Pod "pod-secrets-cebd28dc-6faa-47d5-aa13-9997fb11739a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.128295647s
Feb 6 13:00:29.700: INFO: Pod "pod-secrets-cebd28dc-6faa-47d5-aa13-9997fb11739a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.136752484s
Feb 6 13:00:31.714: INFO: Pod "pod-secrets-cebd28dc-6faa-47d5-aa13-9997fb11739a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.15055963s
Feb 6 13:00:33.725: INFO: Pod "pod-secrets-cebd28dc-6faa-47d5-aa13-9997fb11739a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.161570999s
STEP: Saw pod success
Feb 6 13:00:33.725: INFO: Pod "pod-secrets-cebd28dc-6faa-47d5-aa13-9997fb11739a" satisfied condition "success or failure"
Feb 6 13:00:33.730: INFO: Trying to get logs from node iruya-node pod pod-secrets-cebd28dc-6faa-47d5-aa13-9997fb11739a container secret-volume-test:
STEP: delete the pod
Feb 6 13:00:33.823: INFO: Waiting for pod pod-secrets-cebd28dc-6faa-47d5-aa13-9997fb11739a to disappear
Feb 6 13:00:33.911: INFO: Pod pod-secrets-cebd28dc-6faa-47d5-aa13-9997fb11739a no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 6 13:00:33.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-584" for this suite.
Feb 6 13:00:39.956: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 6 13:00:40.083: INFO: namespace secrets-584 deletion completed in 6.158847637s
STEP: Destroying namespace "secret-namespace-1543" for this suite.
Feb 6 13:00:46.104: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 6 13:00:46.168: INFO: namespace secret-namespace-1543 deletion completed in 6.084786025s

• [SLOW TEST:24.883 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
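The second namespace destroyed above (secret-namespace-1543) held the decoy Secret with the same name; Secrets are namespaced, so the pod can only ever see the one in its own namespace. A sketch of the consuming pod's shape, with placeholder names instead of the generated ones:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-volume-demo", Namespace: "secrets-584"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					// SecretName resolves only within the pod's own namespace,
					// so a same-named Secret elsewhere cannot be picked up.
					Secret: &corev1.SecretVolumeSource{SecretName: "my-secret"},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox",
				Command: []string{"ls", "/etc/secret-volume"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-volume",
					MountPath: "/etc/secret-volume",
					ReadOnly:  true,
				}},
			}},
		},
	}
	out, err := yaml.Marshal(pod)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}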
[sig-storage] EmptyDir volumes
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 6 13:00:46.168: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Feb 6 13:00:46.260: INFO: Waiting up to 5m0s for pod "pod-59da9857-6fd6-452e-8701-d28503ec2061" in namespace "emptydir-9298" to be "success or failure"
Feb 6 13:00:46.360: INFO: Pod "pod-59da9857-6fd6-452e-8701-d28503ec2061": Phase="Pending", Reason="", readiness=false. Elapsed: 100.590273ms
Feb 6 13:00:48.368: INFO: Pod "pod-59da9857-6fd6-452e-8701-d28503ec2061": Phase="Pending", Reason="", readiness=false. Elapsed: 2.108505209s
Feb 6 13:00:50.379: INFO: Pod "pod-59da9857-6fd6-452e-8701-d28503ec2061": Phase="Pending", Reason="", readiness=false. Elapsed: 4.119610819s
Feb 6 13:00:52.389: INFO: Pod "pod-59da9857-6fd6-452e-8701-d28503ec2061": Phase="Pending", Reason="", readiness=false. Elapsed: 6.129282441s
Feb 6 13:00:54.399: INFO: Pod "pod-59da9857-6fd6-452e-8701-d28503ec2061": Phase="Pending", Reason="", readiness=false. Elapsed: 8.139312604s
Feb 6 13:00:56.510: INFO: Pod "pod-59da9857-6fd6-452e-8701-d28503ec2061": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.250194295s
STEP: Saw pod success
Feb 6 13:00:56.510: INFO: Pod "pod-59da9857-6fd6-452e-8701-d28503ec2061" satisfied condition "success or failure"
Feb 6 13:00:56.516: INFO: Trying to get logs from node iruya-node pod pod-59da9857-6fd6-452e-8701-d28503ec2061 container test-container:
STEP: delete the pod
Feb 6 13:00:56.727: INFO: Waiting for pod pod-59da9857-6fd6-452e-8701-d28503ec2061 to disappear
Feb 6 13:00:56.743: INFO: Pod pod-59da9857-6fd6-452e-8701-d28503ec2061 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 6 13:00:56.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9298" for this suite.
Feb 6 13:01:02.795: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 6 13:01:02.960: INFO: namespace emptydir-9298 deletion completed in 6.206192514s

• [SLOW TEST:16.792 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
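The "(non-root,0644,tmpfs)" triple in the test name means: run as a non-root user, expect file mode 0644, back the emptyDir with tmpfs. A sketch of a pod in that shape (the uid 1000 and the shell commands are illustrative assumptions, not the test's exact content); the same pattern covers the (root,0777,tmpfs) variant further below:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	nonRoot := int64(1000)
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "scratch",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" backs the emptyDir with tmpfs.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:            "test-container",
				Image:           "busybox",
				SecurityContext: &corev1.SecurityContext{RunAsUser: &nonRoot},
				// Write a file with mode 0644 and read its permissions back,
				// which is essentially what this conformance case checks.
				Command: []string{"sh", "-c",
					"echo content > /mnt/scratch/f && chmod 0644 /mnt/scratch/f && ls -l /mnt/scratch/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "scratch", MountPath: "/mnt/scratch"}},
			}},
		},
	}
	out, err := yaml.Marshal(pod)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}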
[k8s.io] Probing container
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 6 13:01:02.961: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-be9c48b5-a536-46ec-be08-c03f83819296 in namespace container-probe-4967
Feb 6 13:01:13.155: INFO: Started pod liveness-be9c48b5-a536-46ec-be08-c03f83819296 in namespace container-probe-4967
STEP: checking the pod's current state and verifying that restartCount is present
Feb 6 13:01:13.164: INFO: Initial restart count of pod liveness-be9c48b5-a536-46ec-be08-c03f83819296 is 0
Feb 6 13:01:31.247: INFO: Restart count of pod container-probe-4967/liveness-be9c48b5-a536-46ec-be08-c03f83819296 is now 1 (18.082744197s elapsed)
Feb 6 13:01:53.515: INFO: Restart count of pod container-probe-4967/liveness-be9c48b5-a536-46ec-be08-c03f83819296 is now 2 (40.350937961s elapsed)
Feb 6 13:02:13.618: INFO: Restart count of pod container-probe-4967/liveness-be9c48b5-a536-46ec-be08-c03f83819296 is now 3 (1m0.454488847s elapsed)
Feb 6 13:02:35.751: INFO: Restart count of pod container-probe-4967/liveness-be9c48b5-a536-46ec-be08-c03f83819296 is now 4 (1m22.587206558s elapsed)
Feb 6 13:03:36.191: INFO: Restart count of pod container-probe-4967/liveness-be9c48b5-a536-46ec-be08-c03f83819296 is now 5 (2m23.027159298s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 6 13:03:36.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4967" for this suite.
Feb 6 13:03:42.269: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 6 13:03:42.358: INFO: namespace container-probe-4967 deletion completed in 6.125441633s

• [SLOW TEST:159.397 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
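The test watches status.containerStatuses[].restartCount climb from 0 to 5 and asserts it only ever increases. A sketch of that observation loop with client-go, assuming a recent client and a pod that is already running (the name "liveness-demo" and namespace "default" are placeholders):

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Poll restartCount; the kubelet only ever increments it, which is the
	// monotonicity property the e2e test asserts over ~2.5 minutes.
	last := int32(-1)
	for i := 0; i < 30; i++ {
		pod, err := cs.CoreV1().Pods("default").Get(context.TODO(), "liveness-demo", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		if n := pod.Status.ContainerStatuses[0].RestartCount; n != last {
			if n < last {
				panic("restart count decreased")
			}
			fmt.Printf("Restart count of pod %s is now %d\n", pod.Name, n)
			last = n
		}
		time.Sleep(2 * time.Second)
	}
}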
[k8s.io] Probing container
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 6 13:03:42.358: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-a98c9b13-5b5a-4977-a8c2-5931135e5852 in namespace container-probe-6180
Feb 6 13:03:52.508: INFO: Started pod busybox-a98c9b13-5b5a-4977-a8c2-5931135e5852 in namespace container-probe-6180
STEP: checking the pod's current state and verifying that restartCount is present
Feb 6 13:03:52.512: INFO: Initial restart count of pod busybox-a98c9b13-5b5a-4977-a8c2-5931135e5852 is 0
Feb 6 13:04:49.281: INFO: Restart count of pod container-probe-6180/busybox-a98c9b13-5b5a-4977-a8c2-5931135e5852 is now 1 (56.768944314s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 6 13:04:49.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6180" for this suite.
Feb 6 13:04:55.418: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 6 13:04:55.505: INFO: namespace container-probe-6180 deletion completed in 6.164735826s

• [SLOW TEST:73.146 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
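The probe named in the test title is the classic exec liveness pattern: the container creates /tmp/health, deletes it after a while, and the failing "cat /tmp/health" probe makes the kubelet restart the container (restartCount reaching 1 above). A sketch of such a pod; timings and thresholds are illustrative, and ProbeHandler is named Handler in the v1.15-era API:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-exec-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "busybox",
				Image: "busybox",
				// Healthy for 30s, then the probe file disappears, the exec
				// probe starts failing, and the kubelet restarts the container.
				Command: []string{"/bin/sh", "-c",
					"touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"},
				LivenessProbe: &corev1.Probe{
					ProbeHandler: corev1.ProbeHandler{
						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
					},
					InitialDelaySeconds: 15,
					PeriodSeconds:       5,
					FailureThreshold:    1,
				},
			}},
		},
	}
	out, err := yaml.Marshal(pod)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}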
[sig-storage] EmptyDir volumes
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 6 13:04:55.505: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Feb 6 13:04:55.652: INFO: Waiting up to 5m0s for pod "pod-e03e748f-7e73-4f67-87c2-a6d74338b150" in namespace "emptydir-7281" to be "success or failure"
Feb 6 13:04:55.666: INFO: Pod "pod-e03e748f-7e73-4f67-87c2-a6d74338b150": Phase="Pending", Reason="", readiness=false. Elapsed: 13.742773ms
Feb 6 13:04:57.675: INFO: Pod "pod-e03e748f-7e73-4f67-87c2-a6d74338b150": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022863553s
Feb 6 13:04:59.690: INFO: Pod "pod-e03e748f-7e73-4f67-87c2-a6d74338b150": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037511202s
Feb 6 13:05:01.697: INFO: Pod "pod-e03e748f-7e73-4f67-87c2-a6d74338b150": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044510708s
Feb 6 13:05:03.906: INFO: Pod "pod-e03e748f-7e73-4f67-87c2-a6d74338b150": Phase="Pending", Reason="", readiness=false. Elapsed: 8.253843295s
Feb 6 13:05:05.914: INFO: Pod "pod-e03e748f-7e73-4f67-87c2-a6d74338b150": Phase="Pending", Reason="", readiness=false. Elapsed: 10.261478393s
Feb 6 13:05:07.921: INFO: Pod "pod-e03e748f-7e73-4f67-87c2-a6d74338b150": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.269136582s
STEP: Saw pod success
Feb 6 13:05:07.922: INFO: Pod "pod-e03e748f-7e73-4f67-87c2-a6d74338b150" satisfied condition "success or failure"
Feb 6 13:05:07.925: INFO: Trying to get logs from node iruya-node pod pod-e03e748f-7e73-4f67-87c2-a6d74338b150 container test-container:
STEP: delete the pod
Feb 6 13:05:07.969: INFO: Waiting for pod pod-e03e748f-7e73-4f67-87c2-a6d74338b150 to disappear
Feb 6 13:05:08.100: INFO: Pod pod-e03e748f-7e73-4f67-87c2-a6d74338b150 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 6 13:05:08.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7281" for this suite.
Feb 6 13:05:14.130: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 6 13:05:14.241: INFO: namespace emptydir-7281 deletion completed in 6.132247895s

• [SLOW TEST:18.736 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-apps] Deployment
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 6 13:05:14.241: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 6 13:05:14.500: INFO: Pod name rollover-pod: Found 0 pods out of 1
Feb 6 13:05:19.681: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb 6 13:05:23.693: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Feb 6 13:05:25.702: INFO: Creating deployment "test-rollover-deployment"
Feb 6 13:05:25.724: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Feb 6 13:05:27.742: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Feb 6 13:05:27.752: INFO: Ensure that both replica sets have 1 created replica
Feb 6 13:05:27.763: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Feb 6 13:05:27.792: INFO: Updating deployment test-rollover-deployment
Feb 6 13:05:27.792: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Feb 6 13:05:30.122: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Feb 6 13:05:30.185: INFO: Make sure deployment "test-rollover-deployment" is complete
Feb 6 13:05:30.195: INFO: all replica sets need to contain the pod-template-hash label
Feb 6 13:05:30.195: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716591125, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716591125, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716591128, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716591125, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 6 13:05:32.217: INFO: all replica sets need to contain the pod-template-hash label
Feb 6 13:05:32.217: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716591125, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716591125, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716591128, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716591125, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 6 13:05:34.780: INFO: all replica sets need to contain the pod-template-hash label
Feb 6 13:05:34.780: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716591125, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716591125, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716591128, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716591125, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 6 13:05:36.225: INFO: all replica sets need to contain the pod-template-hash label
Feb 6 13:05:36.225: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716591125, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716591125, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716591128, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716591125, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 6 13:05:38.229: INFO: all replica sets need to contain the pod-template-hash label
Feb 6 13:05:38.230: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716591125, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716591125, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716591128, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716591125, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 6 13:05:40.212: INFO: all replica sets need to contain the pod-template-hash label
Feb 6 13:05:40.212: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716591125, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716591125, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716591139, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716591125, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 6 13:05:42.205: INFO: all replica sets need to contain the pod-template-hash label
Feb 6 13:05:42.206: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716591125, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716591125, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716591139, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716591125, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 6 13:05:44.209: INFO: all replica sets need to contain the pod-template-hash label
Feb 6 13:05:44.209: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716591125, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716591125, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716591139, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716591125, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 6 13:05:46.217: INFO: all replica sets need to contain the pod-template-hash label
Feb 6 13:05:46.217: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716591125, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716591125, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716591139, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716591125, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 6 13:05:48.214: INFO: all replica sets need to contain the pod-template-hash label
Feb 6 13:05:48.214: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716591125, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716591125, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716591139, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716591125, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 6 13:05:50.207: INFO:
Feb 6 13:05:50.207: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Feb 6 13:05:50.221: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-7612,SelfLink:/apis/apps/v1/namespaces/deployment-7612/deployments/test-rollover-deployment,UID:f3002fdc-653c-4fde-8d32-589620a29a96,ResourceVersion:23315649,Generation:2,CreationTimestamp:2020-02-06 13:05:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-02-06 13:05:25 +0000 UTC 2020-02-06 13:05:25 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-02-06 13:05:49 +0000 UTC 2020-02-06 13:05:25 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}
Feb 6 13:05:50.226: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-7612,SelfLink:/apis/apps/v1/namespaces/deployment-7612/replicasets/test-rollover-deployment-854595fc44,UID:2aa762ab-6c23-4c5f-9081-45866b23a9f7,ResourceVersion:23315638,Generation:2,CreationTimestamp:2020-02-06 13:05:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment f3002fdc-653c-4fde-8d32-589620a29a96 0xc000acd6a7 0xc000acd6a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Feb 6 13:05:50.226: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Feb 6 13:05:50.227: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-7612,SelfLink:/apis/apps/v1/namespaces/deployment-7612/replicasets/test-rollover-controller,UID:1a4ceb49-e597-420d-ba00-8a3f7d63d982,ResourceVersion:23315647,Generation:2,CreationTimestamp:2020-02-06 13:05:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment f3002fdc-653c-4fde-8d32-589620a29a96 0xc000acd5c7 0xc000acd5c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 6 13:05:50.227: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-7612,SelfLink:/apis/apps/v1/namespaces/deployment-7612/replicasets/test-rollover-deployment-9b8b997cf,UID:9c180de2-cf06-4269-ae06-275fffb9304e,ResourceVersion:23315600,Generation:2,CreationTimestamp:2020-02-06 13:05:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment f3002fdc-653c-4fde-8d32-589620a29a96 0xc000acd770 0xc000acd771}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 6 13:05:50.232: INFO: Pod "test-rollover-deployment-854595fc44-8pnsp" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-8pnsp,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-7612,SelfLink:/api/v1/namespaces/deployment-7612/pods/test-rollover-deployment-854595fc44-8pnsp,UID:05da2ac9-9946-475f-b9b3-0fdb5bbf07a7,ResourceVersion:23315622,Generation:0,CreationTimestamp:2020-02-06 13:05:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 2aa762ab-6c23-4c5f-9081-45866b23a9f7 0xc000a24b27 0xc000a24b28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-prvzp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-prvzp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-prvzp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000a24bb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000a24be0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:05:28 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:05:38 +0000 UTC } {ContainersReady True 
0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:05:38 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:05:28 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.4,StartTime:2020-02-06 13:05:28 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-02-06 13:05:37 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://6c06d2b7aea4a031db487625a9baa903bcf4a03cd71753dcdeb88927e601c7b2}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 6 13:05:50.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7612" for this suite. Feb 6 13:05:58.257: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 6 13:05:58.396: INFO: namespace deployment-7612 deletion completed in 8.158986284s • [SLOW TEST:44.155 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 6 13:05:58.397: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Feb 6 13:06:26.231: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5815 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 6 13:06:26.231: INFO: >>> kubeConfig: /root/.kube/config I0206 13:06:26.308513 8 log.go:172] (0xc000101760) (0xc000392dc0) Create stream I0206 13:06:26.308555 8 log.go:172] (0xc000101760) (0xc000392dc0) Stream added, broadcasting: 1 I0206 13:06:26.316599 8 log.go:172] (0xc000101760) Reply frame received for 1 I0206 13:06:26.316663 8 log.go:172] (0xc000101760) (0xc000942460) Create stream I0206 13:06:26.316683 8 log.go:172] (0xc000101760) (0xc000942460) Stream added, broadcasting: 3 I0206 13:06:26.318223 8 log.go:172] (0xc000101760) Reply frame received for 3 I0206 13:06:26.318253 8 log.go:172] (0xc000101760) (0xc001e60b40) Create stream I0206 13:06:26.318272 8 log.go:172] (0xc000101760) (0xc001e60b40) Stream added, broadcasting: 5 I0206 13:06:26.321152 8 log.go:172] (0xc000101760) Reply frame 
received for 5 I0206 13:06:26.489738 8 log.go:172] (0xc000101760) Data frame received for 3 I0206 13:06:26.489772 8 log.go:172] (0xc000942460) (3) Data frame handling I0206 13:06:26.489793 8 log.go:172] (0xc000942460) (3) Data frame sent I0206 13:06:26.878220 8 log.go:172] (0xc000101760) (0xc000942460) Stream removed, broadcasting: 3 I0206 13:06:26.878426 8 log.go:172] (0xc000101760) Data frame received for 1 I0206 13:06:26.878440 8 log.go:172] (0xc000392dc0) (1) Data frame handling I0206 13:06:26.878449 8 log.go:172] (0xc000392dc0) (1) Data frame sent I0206 13:06:26.878456 8 log.go:172] (0xc000101760) (0xc000392dc0) Stream removed, broadcasting: 1 I0206 13:06:26.878507 8 log.go:172] (0xc000101760) (0xc001e60b40) Stream removed, broadcasting: 5 I0206 13:06:26.878851 8 log.go:172] (0xc000101760) Go away received I0206 13:06:26.879009 8 log.go:172] (0xc000101760) (0xc000392dc0) Stream removed, broadcasting: 1 I0206 13:06:26.879036 8 log.go:172] (0xc000101760) (0xc000942460) Stream removed, broadcasting: 3 I0206 13:06:26.879041 8 log.go:172] (0xc000101760) (0xc001e60b40) Stream removed, broadcasting: 5 Feb 6 13:06:26.879: INFO: Exec stderr: "" Feb 6 13:06:26.879: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5815 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 6 13:06:26.879: INFO: >>> kubeConfig: /root/.kube/config I0206 13:06:26.980943 8 log.go:172] (0xc0014aaa50) (0xc000942a00) Create stream I0206 13:06:26.980975 8 log.go:172] (0xc0014aaa50) (0xc000942a00) Stream added, broadcasting: 1 I0206 13:06:26.989907 8 log.go:172] (0xc0014aaa50) Reply frame received for 1 I0206 13:06:26.989968 8 log.go:172] (0xc0014aaa50) (0xc0015068c0) Create stream I0206 13:06:26.989986 8 log.go:172] (0xc0014aaa50) (0xc0015068c0) Stream added, broadcasting: 3 I0206 13:06:26.991617 8 log.go:172] (0xc0014aaa50) Reply frame received for 3 I0206 13:06:26.991638 8 log.go:172] (0xc0014aaa50) (0xc000392f00) Create stream I0206 13:06:26.991648 8 log.go:172] (0xc0014aaa50) (0xc000392f00) Stream added, broadcasting: 5 I0206 13:06:26.993820 8 log.go:172] (0xc0014aaa50) Reply frame received for 5 I0206 13:06:27.097751 8 log.go:172] (0xc0014aaa50) Data frame received for 3 I0206 13:06:27.097792 8 log.go:172] (0xc0015068c0) (3) Data frame handling I0206 13:06:27.097832 8 log.go:172] (0xc0015068c0) (3) Data frame sent I0206 13:06:27.187801 8 log.go:172] (0xc0014aaa50) Data frame received for 1 I0206 13:06:27.187858 8 log.go:172] (0xc000942a00) (1) Data frame handling I0206 13:06:27.187870 8 log.go:172] (0xc000942a00) (1) Data frame sent I0206 13:06:27.187893 8 log.go:172] (0xc0014aaa50) (0xc0015068c0) Stream removed, broadcasting: 3 I0206 13:06:27.187962 8 log.go:172] (0xc0014aaa50) (0xc000392f00) Stream removed, broadcasting: 5 I0206 13:06:27.188028 8 log.go:172] (0xc0014aaa50) (0xc000942a00) Stream removed, broadcasting: 1 I0206 13:06:27.188067 8 log.go:172] (0xc0014aaa50) Go away received I0206 13:06:27.188178 8 log.go:172] (0xc0014aaa50) (0xc000942a00) Stream removed, broadcasting: 1 I0206 13:06:27.188199 8 log.go:172] (0xc0014aaa50) (0xc0015068c0) Stream removed, broadcasting: 3 I0206 13:06:27.188207 8 log.go:172] (0xc0014aaa50) (0xc000392f00) Stream removed, broadcasting: 5 Feb 6 13:06:27.188: INFO: Exec stderr: "" Feb 6 13:06:27.188: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5815 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true 
PreserveWhitespace:false} Feb 6 13:06:27.188: INFO: >>> kubeConfig: /root/.kube/config I0206 13:06:27.238930 8 log.go:172] (0xc001060b00) (0xc00175e0a0) Create stream I0206 13:06:27.238979 8 log.go:172] (0xc001060b00) (0xc00175e0a0) Stream added, broadcasting: 1 I0206 13:06:27.244802 8 log.go:172] (0xc001060b00) Reply frame received for 1 I0206 13:06:27.244822 8 log.go:172] (0xc001060b00) (0xc001e60be0) Create stream I0206 13:06:27.244828 8 log.go:172] (0xc001060b00) (0xc001e60be0) Stream added, broadcasting: 3 I0206 13:06:27.246431 8 log.go:172] (0xc001060b00) Reply frame received for 3 I0206 13:06:27.246481 8 log.go:172] (0xc001060b00) (0xc00175e140) Create stream I0206 13:06:27.246487 8 log.go:172] (0xc001060b00) (0xc00175e140) Stream added, broadcasting: 5 I0206 13:06:27.247527 8 log.go:172] (0xc001060b00) Reply frame received for 5 I0206 13:06:27.327972 8 log.go:172] (0xc001060b00) Data frame received for 3 I0206 13:06:27.328005 8 log.go:172] (0xc001e60be0) (3) Data frame handling I0206 13:06:27.328017 8 log.go:172] (0xc001e60be0) (3) Data frame sent I0206 13:06:27.437141 8 log.go:172] (0xc001060b00) (0xc001e60be0) Stream removed, broadcasting: 3 I0206 13:06:27.437321 8 log.go:172] (0xc001060b00) Data frame received for 1 I0206 13:06:27.437334 8 log.go:172] (0xc00175e0a0) (1) Data frame handling I0206 13:06:27.437349 8 log.go:172] (0xc00175e0a0) (1) Data frame sent I0206 13:06:27.437376 8 log.go:172] (0xc001060b00) (0xc00175e0a0) Stream removed, broadcasting: 1 I0206 13:06:27.437533 8 log.go:172] (0xc001060b00) (0xc00175e140) Stream removed, broadcasting: 5 I0206 13:06:27.437565 8 log.go:172] (0xc001060b00) (0xc00175e0a0) Stream removed, broadcasting: 1 I0206 13:06:27.437585 8 log.go:172] (0xc001060b00) (0xc001e60be0) Stream removed, broadcasting: 3 I0206 13:06:27.437594 8 log.go:172] (0xc001060b00) (0xc00175e140) Stream removed, broadcasting: 5 Feb 6 13:06:27.437: INFO: Exec stderr: "" Feb 6 13:06:27.437: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5815 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 6 13:06:27.437: INFO: >>> kubeConfig: /root/.kube/config I0206 13:06:27.438420 8 log.go:172] (0xc001060b00) Go away received I0206 13:06:27.489026 8 log.go:172] (0xc001061970) (0xc00175e640) Create stream I0206 13:06:27.489051 8 log.go:172] (0xc001061970) (0xc00175e640) Stream added, broadcasting: 1 I0206 13:06:27.493728 8 log.go:172] (0xc001061970) Reply frame received for 1 I0206 13:06:27.493757 8 log.go:172] (0xc001061970) (0xc000942c80) Create stream I0206 13:06:27.493767 8 log.go:172] (0xc001061970) (0xc000942c80) Stream added, broadcasting: 3 I0206 13:06:27.494756 8 log.go:172] (0xc001061970) Reply frame received for 3 I0206 13:06:27.494783 8 log.go:172] (0xc001061970) (0xc001e60c80) Create stream I0206 13:06:27.494794 8 log.go:172] (0xc001061970) (0xc001e60c80) Stream added, broadcasting: 5 I0206 13:06:27.495933 8 log.go:172] (0xc001061970) Reply frame received for 5 I0206 13:06:27.580588 8 log.go:172] (0xc001061970) Data frame received for 3 I0206 13:06:27.580673 8 log.go:172] (0xc000942c80) (3) Data frame handling I0206 13:06:27.580682 8 log.go:172] (0xc000942c80) (3) Data frame sent I0206 13:06:27.688835 8 log.go:172] (0xc001061970) (0xc000942c80) Stream removed, broadcasting: 3 I0206 13:06:27.688910 8 log.go:172] (0xc001061970) Data frame received for 1 I0206 13:06:27.688921 8 log.go:172] (0xc00175e640) (1) Data frame handling I0206 13:06:27.688946 8 log.go:172] 
(0xc00175e640) (1) Data frame sent I0206 13:06:27.689001 8 log.go:172] (0xc001061970) (0xc00175e640) Stream removed, broadcasting: 1 I0206 13:06:27.689053 8 log.go:172] (0xc001061970) (0xc001e60c80) Stream removed, broadcasting: 5 I0206 13:06:27.689077 8 log.go:172] (0xc001061970) Go away received I0206 13:06:27.689097 8 log.go:172] (0xc001061970) (0xc00175e640) Stream removed, broadcasting: 1 I0206 13:06:27.689109 8 log.go:172] (0xc001061970) (0xc000942c80) Stream removed, broadcasting: 3 I0206 13:06:27.689117 8 log.go:172] (0xc001061970) (0xc001e60c80) Stream removed, broadcasting: 5 Feb 6 13:06:27.689: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Feb 6 13:06:27.689: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5815 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 6 13:06:27.689: INFO: >>> kubeConfig: /root/.kube/config I0206 13:06:27.735906 8 log.go:172] (0xc0015f08f0) (0xc000393400) Create stream I0206 13:06:27.735971 8 log.go:172] (0xc0015f08f0) (0xc000393400) Stream added, broadcasting: 1 I0206 13:06:27.780574 8 log.go:172] (0xc0015f08f0) Reply frame received for 1 I0206 13:06:27.780633 8 log.go:172] (0xc0015f08f0) (0xc001e60d20) Create stream I0206 13:06:27.780641 8 log.go:172] (0xc0015f08f0) (0xc001e60d20) Stream added, broadcasting: 3 I0206 13:06:27.784144 8 log.go:172] (0xc0015f08f0) Reply frame received for 3 I0206 13:06:27.784207 8 log.go:172] (0xc0015f08f0) (0xc000942e60) Create stream I0206 13:06:27.784216 8 log.go:172] (0xc0015f08f0) (0xc000942e60) Stream added, broadcasting: 5 I0206 13:06:27.786792 8 log.go:172] (0xc0015f08f0) Reply frame received for 5 I0206 13:06:27.927230 8 log.go:172] (0xc0015f08f0) Data frame received for 3 I0206 13:06:27.927355 8 log.go:172] (0xc001e60d20) (3) Data frame handling I0206 13:06:27.927370 8 log.go:172] (0xc001e60d20) (3) Data frame sent I0206 13:06:28.073326 8 log.go:172] (0xc0015f08f0) Data frame received for 1 I0206 13:06:28.073428 8 log.go:172] (0xc0015f08f0) (0xc001e60d20) Stream removed, broadcasting: 3 I0206 13:06:28.073513 8 log.go:172] (0xc000393400) (1) Data frame handling I0206 13:06:28.073539 8 log.go:172] (0xc000393400) (1) Data frame sent I0206 13:06:28.073830 8 log.go:172] (0xc0015f08f0) (0xc000393400) Stream removed, broadcasting: 1 I0206 13:06:28.073956 8 log.go:172] (0xc0015f08f0) (0xc000942e60) Stream removed, broadcasting: 5 I0206 13:06:28.073979 8 log.go:172] (0xc0015f08f0) Go away received I0206 13:06:28.074998 8 log.go:172] (0xc0015f08f0) (0xc000393400) Stream removed, broadcasting: 1 I0206 13:06:28.075017 8 log.go:172] (0xc0015f08f0) (0xc001e60d20) Stream removed, broadcasting: 3 I0206 13:06:28.075022 8 log.go:172] (0xc0015f08f0) (0xc000942e60) Stream removed, broadcasting: 5 Feb 6 13:06:28.075: INFO: Exec stderr: "" Feb 6 13:06:28.075: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5815 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 6 13:06:28.075: INFO: >>> kubeConfig: /root/.kube/config I0206 13:06:28.131469 8 log.go:172] (0xc0018a4370) (0xc00175ebe0) Create stream I0206 13:06:28.131496 8 log.go:172] (0xc0018a4370) (0xc00175ebe0) Stream added, broadcasting: 1 I0206 13:06:28.134643 8 log.go:172] (0xc0018a4370) Reply frame received for 1 I0206 13:06:28.134670 8 log.go:172] (0xc0018a4370) (0xc00175ec80) Create stream I0206 
13:06:28.134681 8 log.go:172] (0xc0018a4370) (0xc00175ec80) Stream added, broadcasting: 3 I0206 13:06:28.136040 8 log.go:172] (0xc0018a4370) Reply frame received for 3 I0206 13:06:28.136070 8 log.go:172] (0xc0018a4370) (0xc001506aa0) Create stream I0206 13:06:28.136088 8 log.go:172] (0xc0018a4370) (0xc001506aa0) Stream added, broadcasting: 5 I0206 13:06:28.138081 8 log.go:172] (0xc0018a4370) Reply frame received for 5 I0206 13:06:28.270828 8 log.go:172] (0xc0018a4370) Data frame received for 3 I0206 13:06:28.270884 8 log.go:172] (0xc00175ec80) (3) Data frame handling I0206 13:06:28.270921 8 log.go:172] (0xc00175ec80) (3) Data frame sent I0206 13:06:28.397678 8 log.go:172] (0xc0018a4370) (0xc00175ec80) Stream removed, broadcasting: 3 I0206 13:06:28.397825 8 log.go:172] (0xc0018a4370) Data frame received for 1 I0206 13:06:28.397867 8 log.go:172] (0xc0018a4370) (0xc001506aa0) Stream removed, broadcasting: 5 I0206 13:06:28.397918 8 log.go:172] (0xc00175ebe0) (1) Data frame handling I0206 13:06:28.397939 8 log.go:172] (0xc00175ebe0) (1) Data frame sent I0206 13:06:28.397955 8 log.go:172] (0xc0018a4370) (0xc00175ebe0) Stream removed, broadcasting: 1 I0206 13:06:28.398022 8 log.go:172] (0xc0018a4370) Go away received I0206 13:06:28.398196 8 log.go:172] (0xc0018a4370) (0xc00175ebe0) Stream removed, broadcasting: 1 I0206 13:06:28.398338 8 log.go:172] (0xc0018a4370) (0xc00175ec80) Stream removed, broadcasting: 3 I0206 13:06:28.398359 8 log.go:172] (0xc0018a4370) (0xc001506aa0) Stream removed, broadcasting: 5 Feb 6 13:06:28.398: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Feb 6 13:06:28.398: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5815 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 6 13:06:28.398: INFO: >>> kubeConfig: /root/.kube/config I0206 13:06:28.479781 8 log.go:172] (0xc0014b73f0) (0xc001e610e0) Create stream I0206 13:06:28.479816 8 log.go:172] (0xc0014b73f0) (0xc001e610e0) Stream added, broadcasting: 1 I0206 13:06:28.487527 8 log.go:172] (0xc0014b73f0) Reply frame received for 1 I0206 13:06:28.487556 8 log.go:172] (0xc0014b73f0) (0xc000942fa0) Create stream I0206 13:06:28.487564 8 log.go:172] (0xc0014b73f0) (0xc000942fa0) Stream added, broadcasting: 3 I0206 13:06:28.489015 8 log.go:172] (0xc0014b73f0) Reply frame received for 3 I0206 13:06:28.489043 8 log.go:172] (0xc0014b73f0) (0xc000393860) Create stream I0206 13:06:28.489055 8 log.go:172] (0xc0014b73f0) (0xc000393860) Stream added, broadcasting: 5 I0206 13:06:28.496076 8 log.go:172] (0xc0014b73f0) Reply frame received for 5 I0206 13:06:28.670279 8 log.go:172] (0xc0014b73f0) Data frame received for 3 I0206 13:06:28.670507 8 log.go:172] (0xc000942fa0) (3) Data frame handling I0206 13:06:28.670532 8 log.go:172] (0xc000942fa0) (3) Data frame sent I0206 13:06:28.817087 8 log.go:172] (0xc0014b73f0) Data frame received for 1 I0206 13:06:28.817179 8 log.go:172] (0xc0014b73f0) (0xc000942fa0) Stream removed, broadcasting: 3 I0206 13:06:28.817221 8 log.go:172] (0xc001e610e0) (1) Data frame handling I0206 13:06:28.817246 8 log.go:172] (0xc0014b73f0) (0xc000393860) Stream removed, broadcasting: 5 I0206 13:06:28.817280 8 log.go:172] (0xc001e610e0) (1) Data frame sent I0206 13:06:28.817292 8 log.go:172] (0xc0014b73f0) (0xc001e610e0) Stream removed, broadcasting: 1 I0206 13:06:28.817307 8 log.go:172] (0xc0014b73f0) Go away received I0206 13:06:28.817465 8 
log.go:172] (0xc0014b73f0) (0xc001e610e0) Stream removed, broadcasting: 1 I0206 13:06:28.817499 8 log.go:172] (0xc0014b73f0) (0xc000942fa0) Stream removed, broadcasting: 3 I0206 13:06:28.817508 8 log.go:172] (0xc0014b73f0) (0xc000393860) Stream removed, broadcasting: 5 Feb 6 13:06:28.817: INFO: Exec stderr: "" Feb 6 13:06:28.817: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5815 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 6 13:06:28.817: INFO: >>> kubeConfig: /root/.kube/config I0206 13:06:28.885306 8 log.go:172] (0xc0014b7ce0) (0xc001e614a0) Create stream I0206 13:06:28.885393 8 log.go:172] (0xc0014b7ce0) (0xc001e614a0) Stream added, broadcasting: 1 I0206 13:06:28.891946 8 log.go:172] (0xc0014b7ce0) Reply frame received for 1 I0206 13:06:28.891984 8 log.go:172] (0xc0014b7ce0) (0xc001e61540) Create stream I0206 13:06:28.891989 8 log.go:172] (0xc0014b7ce0) (0xc001e61540) Stream added, broadcasting: 3 I0206 13:06:28.894817 8 log.go:172] (0xc0014b7ce0) Reply frame received for 3 I0206 13:06:28.894844 8 log.go:172] (0xc0014b7ce0) (0xc00175ed20) Create stream I0206 13:06:28.894850 8 log.go:172] (0xc0014b7ce0) (0xc00175ed20) Stream added, broadcasting: 5 I0206 13:06:28.899581 8 log.go:172] (0xc0014b7ce0) Reply frame received for 5 I0206 13:06:29.023473 8 log.go:172] (0xc0014b7ce0) Data frame received for 3 I0206 13:06:29.023577 8 log.go:172] (0xc001e61540) (3) Data frame handling I0206 13:06:29.023608 8 log.go:172] (0xc001e61540) (3) Data frame sent I0206 13:06:29.144378 8 log.go:172] (0xc0014b7ce0) (0xc001e61540) Stream removed, broadcasting: 3 I0206 13:06:29.144471 8 log.go:172] (0xc0014b7ce0) Data frame received for 1 I0206 13:06:29.144506 8 log.go:172] (0xc001e614a0) (1) Data frame handling I0206 13:06:29.144551 8 log.go:172] (0xc001e614a0) (1) Data frame sent I0206 13:06:29.144579 8 log.go:172] (0xc0014b7ce0) (0xc00175ed20) Stream removed, broadcasting: 5 I0206 13:06:29.144619 8 log.go:172] (0xc0014b7ce0) (0xc001e614a0) Stream removed, broadcasting: 1 I0206 13:06:29.144644 8 log.go:172] (0xc0014b7ce0) Go away received I0206 13:06:29.144747 8 log.go:172] (0xc0014b7ce0) (0xc001e614a0) Stream removed, broadcasting: 1 I0206 13:06:29.144767 8 log.go:172] (0xc0014b7ce0) (0xc001e61540) Stream removed, broadcasting: 3 I0206 13:06:29.144799 8 log.go:172] (0xc0014b7ce0) (0xc00175ed20) Stream removed, broadcasting: 5 Feb 6 13:06:29.144: INFO: Exec stderr: "" Feb 6 13:06:29.144: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5815 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 6 13:06:29.144: INFO: >>> kubeConfig: /root/.kube/config I0206 13:06:29.213938 8 log.go:172] (0xc0015f1d90) (0xc0016dc140) Create stream I0206 13:06:29.214005 8 log.go:172] (0xc0015f1d90) (0xc0016dc140) Stream added, broadcasting: 1 I0206 13:06:29.221023 8 log.go:172] (0xc0015f1d90) Reply frame received for 1 I0206 13:06:29.221050 8 log.go:172] (0xc0015f1d90) (0xc001e615e0) Create stream I0206 13:06:29.221059 8 log.go:172] (0xc0015f1d90) (0xc001e615e0) Stream added, broadcasting: 3 I0206 13:06:29.223303 8 log.go:172] (0xc0015f1d90) Reply frame received for 3 I0206 13:06:29.223352 8 log.go:172] (0xc0015f1d90) (0xc00175edc0) Create stream I0206 13:06:29.223366 8 log.go:172] (0xc0015f1d90) (0xc00175edc0) Stream added, broadcasting: 5 I0206 13:06:29.224766 8 log.go:172] (0xc0015f1d90) Reply 
frame received for 5 I0206 13:06:29.313869 8 log.go:172] (0xc0015f1d90) Data frame received for 3 I0206 13:06:29.313916 8 log.go:172] (0xc001e615e0) (3) Data frame handling I0206 13:06:29.313931 8 log.go:172] (0xc001e615e0) (3) Data frame sent I0206 13:06:29.443698 8 log.go:172] (0xc0015f1d90) Data frame received for 1 I0206 13:06:29.443725 8 log.go:172] (0xc0016dc140) (1) Data frame handling I0206 13:06:29.443732 8 log.go:172] (0xc0016dc140) (1) Data frame sent I0206 13:06:29.443741 8 log.go:172] (0xc0015f1d90) (0xc0016dc140) Stream removed, broadcasting: 1 I0206 13:06:29.444112 8 log.go:172] (0xc0015f1d90) (0xc001e615e0) Stream removed, broadcasting: 3 I0206 13:06:29.444272 8 log.go:172] (0xc0015f1d90) (0xc00175edc0) Stream removed, broadcasting: 5 I0206 13:06:29.444299 8 log.go:172] (0xc0015f1d90) (0xc0016dc140) Stream removed, broadcasting: 1 I0206 13:06:29.444305 8 log.go:172] (0xc0015f1d90) (0xc001e615e0) Stream removed, broadcasting: 3 I0206 13:06:29.444309 8 log.go:172] (0xc0015f1d90) (0xc00175edc0) Stream removed, broadcasting: 5 Feb 6 13:06:29.444: INFO: Exec stderr: "" Feb 6 13:06:29.444: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5815 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 6 13:06:29.444: INFO: >>> kubeConfig: /root/.kube/config I0206 13:06:29.446195 8 log.go:172] (0xc0015f1d90) Go away received I0206 13:06:29.494857 8 log.go:172] (0xc0022069a0) (0xc0016dc500) Create stream I0206 13:06:29.494895 8 log.go:172] (0xc0022069a0) (0xc0016dc500) Stream added, broadcasting: 1 I0206 13:06:29.502968 8 log.go:172] (0xc0022069a0) Reply frame received for 1 I0206 13:06:29.503055 8 log.go:172] (0xc0022069a0) (0xc00175f0e0) Create stream I0206 13:06:29.503063 8 log.go:172] (0xc0022069a0) (0xc00175f0e0) Stream added, broadcasting: 3 I0206 13:06:29.504377 8 log.go:172] (0xc0022069a0) Reply frame received for 3 I0206 13:06:29.504421 8 log.go:172] (0xc0022069a0) (0xc001e61680) Create stream I0206 13:06:29.504433 8 log.go:172] (0xc0022069a0) (0xc001e61680) Stream added, broadcasting: 5 I0206 13:06:29.507589 8 log.go:172] (0xc0022069a0) Reply frame received for 5 I0206 13:06:29.602851 8 log.go:172] (0xc0022069a0) Data frame received for 3 I0206 13:06:29.602910 8 log.go:172] (0xc00175f0e0) (3) Data frame handling I0206 13:06:29.602931 8 log.go:172] (0xc00175f0e0) (3) Data frame sent I0206 13:06:29.742952 8 log.go:172] (0xc0022069a0) Data frame received for 1 I0206 13:06:29.743032 8 log.go:172] (0xc0016dc500) (1) Data frame handling I0206 13:06:29.743048 8 log.go:172] (0xc0016dc500) (1) Data frame sent I0206 13:06:29.743060 8 log.go:172] (0xc0022069a0) (0xc0016dc500) Stream removed, broadcasting: 1 I0206 13:06:29.744271 8 log.go:172] (0xc0022069a0) (0xc001e61680) Stream removed, broadcasting: 5 I0206 13:06:29.744470 8 log.go:172] (0xc0022069a0) (0xc00175f0e0) Stream removed, broadcasting: 3 I0206 13:06:29.744585 8 log.go:172] (0xc0022069a0) (0xc0016dc500) Stream removed, broadcasting: 1 I0206 13:06:29.744613 8 log.go:172] (0xc0022069a0) (0xc00175f0e0) Stream removed, broadcasting: 3 I0206 13:06:29.744622 8 log.go:172] (0xc0022069a0) (0xc001e61680) Stream removed, broadcasting: 5 I0206 13:06:29.744866 8 log.go:172] (0xc0022069a0) Go away received Feb 6 13:06:29.745: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 6 13:06:29.745: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-5815" for this suite. Feb 6 13:07:18.271: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 6 13:07:18.448: INFO: namespace e2e-kubelet-etc-hosts-5815 deletion completed in 48.366872343s • [SLOW TEST:80.051 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 6 13:07:18.449: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override command Feb 6 13:07:18.603: INFO: Waiting up to 5m0s for pod "client-containers-afbf700a-3668-4730-a6d1-04bd21073671" in namespace "containers-7946" to be "success or failure" Feb 6 13:07:18.623: INFO: Pod "client-containers-afbf700a-3668-4730-a6d1-04bd21073671": Phase="Pending", Reason="", readiness=false. Elapsed: 20.182643ms Feb 6 13:07:20.635: INFO: Pod "client-containers-afbf700a-3668-4730-a6d1-04bd21073671": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031647204s Feb 6 13:07:22.651: INFO: Pod "client-containers-afbf700a-3668-4730-a6d1-04bd21073671": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047883324s Feb 6 13:07:24.658: INFO: Pod "client-containers-afbf700a-3668-4730-a6d1-04bd21073671": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055121395s Feb 6 13:07:26.683: INFO: Pod "client-containers-afbf700a-3668-4730-a6d1-04bd21073671": Phase="Pending", Reason="", readiness=false. Elapsed: 8.07991414s Feb 6 13:07:28.695: INFO: Pod "client-containers-afbf700a-3668-4730-a6d1-04bd21073671": Phase="Pending", Reason="", readiness=false. Elapsed: 10.0918746s Feb 6 13:07:30.710: INFO: Pod "client-containers-afbf700a-3668-4730-a6d1-04bd21073671": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.106767688s STEP: Saw pod success Feb 6 13:07:30.710: INFO: Pod "client-containers-afbf700a-3668-4730-a6d1-04bd21073671" satisfied condition "success or failure" Feb 6 13:07:30.715: INFO: Trying to get logs from node iruya-node pod client-containers-afbf700a-3668-4730-a6d1-04bd21073671 container test-container: STEP: delete the pod Feb 6 13:07:30.982: INFO: Waiting for pod client-containers-afbf700a-3668-4730-a6d1-04bd21073671 to disappear Feb 6 13:07:30.987: INFO: Pod client-containers-afbf700a-3668-4730-a6d1-04bd21073671 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 6 13:07:30.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-7946" for this suite. Feb 6 13:07:37.008: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 6 13:07:37.120: INFO: namespace containers-7946 deletion completed in 6.128491082s • [SLOW TEST:18.671 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 6 13:07:37.120: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating server pod server in namespace prestop-6933 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-6933 STEP: Deleting pre-stop pod Feb 6 13:08:02.402: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 6 13:08:02.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-6933" for this suite. 
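[Editor's note] The behavior asserted above — the server pod reporting "Received": {"prestop": 1} after the tester pod is deleted — comes from the pod lifecycle preStop hook, which the kubelet runs before sending SIGTERM to the container. A minimal Go sketch of a pod carrying such a hook, assuming v1.15-era k8s.io/api types (the handler type was corev1.Handler then; newer releases renamed it corev1.LifecycleHandler); the pod name, image, and hook URL are illustrative, not taken from the test:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// buildPreStopPod (illustrative name) returns a pod whose container runs an
// exec preStop hook; the hook must complete before the kubelet sends SIGTERM,
// which is what lets the server observe the "prestop" call in the log above.
func buildPreStopPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "tester"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "tester",
				Image: "busybox",
				Lifecycle: &corev1.Lifecycle{
					// corev1.Handler in v1.15; corev1.LifecycleHandler later.
					PreStop: &corev1.Handler{
						Exec: &corev1.ExecAction{
							// Notify a (hypothetical) server pod on the way down.
							Command: []string{"wget", "-qO-", "http://server:8080/prestop"},
						},
					},
				},
			}},
		},
	}
}

func main() {
	fmt.Printf("%+v\n", buildPreStopPod())
}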
Feb 6 13:08:40.488: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 6 13:08:40.599: INFO: namespace prestop-6933 deletion completed in 38.176573389s • [SLOW TEST:63.479 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 6 13:08:40.599: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-2c10444c-c73e-4b92-a2f7-4a0580afb9f3 STEP: Creating a pod to test consume secrets Feb 6 13:08:40.749: INFO: Waiting up to 5m0s for pod "pod-secrets-25be8f08-92b5-47b4-a114-7c22152574c0" in namespace "secrets-3616" to be "success or failure" Feb 6 13:08:40.783: INFO: Pod "pod-secrets-25be8f08-92b5-47b4-a114-7c22152574c0": Phase="Pending", Reason="", readiness=false. Elapsed: 33.252426ms Feb 6 13:08:42.795: INFO: Pod "pod-secrets-25be8f08-92b5-47b4-a114-7c22152574c0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045068201s Feb 6 13:08:44.804: INFO: Pod "pod-secrets-25be8f08-92b5-47b4-a114-7c22152574c0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054668408s Feb 6 13:08:46.815: INFO: Pod "pod-secrets-25be8f08-92b5-47b4-a114-7c22152574c0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.065618067s Feb 6 13:08:48.834: INFO: Pod "pod-secrets-25be8f08-92b5-47b4-a114-7c22152574c0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.084250122s Feb 6 13:08:50.848: INFO: Pod "pod-secrets-25be8f08-92b5-47b4-a114-7c22152574c0": Phase="Pending", Reason="", readiness=false. Elapsed: 10.098435016s Feb 6 13:08:52.878: INFO: Pod "pod-secrets-25be8f08-92b5-47b4-a114-7c22152574c0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.128974774s STEP: Saw pod success Feb 6 13:08:52.879: INFO: Pod "pod-secrets-25be8f08-92b5-47b4-a114-7c22152574c0" satisfied condition "success or failure" Feb 6 13:08:52.885: INFO: Trying to get logs from node iruya-node pod pod-secrets-25be8f08-92b5-47b4-a114-7c22152574c0 container secret-volume-test: STEP: delete the pod Feb 6 13:08:53.020: INFO: Waiting for pod pod-secrets-25be8f08-92b5-47b4-a114-7c22152574c0 to disappear Feb 6 13:08:53.025: INFO: Pod pod-secrets-25be8f08-92b5-47b4-a114-7c22152574c0 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 6 13:08:53.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3616" for this suite. 
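[Editor's note] The secret-volume test above checks two independent knobs: the volume-level defaultMode, which sets the file permission bits of the projected secret keys, and the pod-level fsGroup/runAsUser pair, which makes those files readable by a non-root container. A minimal sketch of a pod with that shape, assuming stable k8s.io/api core/v1 types; the names, uid/gid values, and mount path are illustrative, not copied from the suite:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64Ptr(i int64) *int64 { return &i }
func int32Ptr(i int32) *int32 { return &i }

// buildSecretVolumePod (illustrative name) mirrors the pod shape the test
// creates: a secret volume with an explicit defaultMode, mounted into a
// container that runs as a non-root user with an fsGroup applied.
func buildSecretVolumePod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-example"},
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser: int64Ptr(1000), // non-root uid
				FSGroup:   int64Ptr(1001), // group ownership applied to the volume
			},
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName:  "secret-test",  // assumed to exist already
						DefaultMode: int32Ptr(0440), // the file mode under test
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "secret-volume-test",
				Image: "busybox",
				// Print the observed mode so it can be compared to defaultMode.
				Command:      []string{"sh", "-c", "stat -c '%a' /etc/secret-volume/*"},
				VolumeMounts: []corev1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret-volume"}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
}

func main() { fmt.Printf("%+v\n", buildSecretVolumePod()) }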
Feb 6 13:08:59.132: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 6 13:08:59.261: INFO: namespace secrets-3616 deletion completed in 6.232551538s • [SLOW TEST:18.662 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 6 13:08:59.262: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-9245 [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating stateful set ss in namespace statefulset-9245 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-9245 Feb 6 13:08:59.420: INFO: Found 0 stateful pods, waiting for 1 Feb 6 13:09:09.430: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Feb 6 13:09:09.435: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9245 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 6 13:09:12.938: INFO: stderr: "I0206 13:09:12.483158 35 log.go:172] (0xc0004a2160) (0xc000774140) Create stream\nI0206 13:09:12.483315 35 log.go:172] (0xc0004a2160) (0xc000774140) Stream added, broadcasting: 1\nI0206 13:09:12.493657 35 log.go:172] (0xc0004a2160) Reply frame received for 1\nI0206 13:09:12.493744 35 log.go:172] (0xc0004a2160) (0xc0007741e0) Create stream\nI0206 13:09:12.493761 35 log.go:172] (0xc0004a2160) (0xc0007741e0) Stream added, broadcasting: 3\nI0206 13:09:12.497398 35 log.go:172] (0xc0004a2160) Reply frame received for 3\nI0206 13:09:12.497465 35 log.go:172] (0xc0004a2160) (0xc0005c4280) Create stream\nI0206 13:09:12.497497 35 log.go:172] (0xc0004a2160) (0xc0005c4280) Stream added, broadcasting: 5\nI0206 13:09:12.502189 35 log.go:172] (0xc0004a2160) Reply frame received for 5\nI0206 13:09:12.724528 35 log.go:172] (0xc0004a2160) Data frame received for 5\nI0206 13:09:12.724616 35 log.go:172] (0xc0005c4280) (5) Data frame handling\nI0206 13:09:12.724648 35 log.go:172] (0xc0005c4280) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0206 13:09:12.794950 35 
log.go:172] (0xc0004a2160) Data frame received for 3\nI0206 13:09:12.795028 35 log.go:172] (0xc0007741e0) (3) Data frame handling\nI0206 13:09:12.795067 35 log.go:172] (0xc0007741e0) (3) Data frame sent\nI0206 13:09:12.925384 35 log.go:172] (0xc0004a2160) Data frame received for 1\nI0206 13:09:12.925578 35 log.go:172] (0xc0004a2160) (0xc0005c4280) Stream removed, broadcasting: 5\nI0206 13:09:12.925655 35 log.go:172] (0xc000774140) (1) Data frame handling\nI0206 13:09:12.925697 35 log.go:172] (0xc000774140) (1) Data frame sent\nI0206 13:09:12.925740 35 log.go:172] (0xc0004a2160) (0xc0007741e0) Stream removed, broadcasting: 3\nI0206 13:09:12.925818 35 log.go:172] (0xc0004a2160) (0xc000774140) Stream removed, broadcasting: 1\nI0206 13:09:12.925988 35 log.go:172] (0xc0004a2160) Go away received\nI0206 13:09:12.926677 35 log.go:172] (0xc0004a2160) (0xc000774140) Stream removed, broadcasting: 1\nI0206 13:09:12.926727 35 log.go:172] (0xc0004a2160) (0xc0007741e0) Stream removed, broadcasting: 3\nI0206 13:09:12.926825 35 log.go:172] (0xc0004a2160) (0xc0005c4280) Stream removed, broadcasting: 5\n" Feb 6 13:09:12.939: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 6 13:09:12.939: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 6 13:09:12.955: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Feb 6 13:09:22.968: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Feb 6 13:09:22.968: INFO: Waiting for statefulset status.replicas updated to 0 Feb 6 13:09:23.004: INFO: POD NODE PHASE GRACE CONDITIONS Feb 6 13:09:23.004: INFO: ss-0 iruya-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:08:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:09:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:09:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:08:59 +0000 UTC }] Feb 6 13:09:23.004: INFO: Feb 6 13:09:23.004: INFO: StatefulSet ss has not reached scale 3, at 1 Feb 6 13:09:25.107: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.987481906s Feb 6 13:09:26.538: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.884249849s Feb 6 13:09:27.553: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.453162509s Feb 6 13:09:28.571: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.438727586s Feb 6 13:09:31.413: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.420417354s Feb 6 13:09:33.209: INFO: Verifying statefulset ss doesn't scale past 3 for another 578.559253ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-9245 Feb 6 13:09:34.219: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9245 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 6 13:09:34.926: INFO: stderr: "I0206 13:09:34.422120 61 log.go:172] (0xc00067a0b0) (0xc000216960) Create stream\nI0206 13:09:34.422258 61 log.go:172] (0xc00067a0b0) (0xc000216960) Stream added, broadcasting: 1\nI0206 13:09:34.427641 61 log.go:172] (0xc00067a0b0) Reply frame received for 1\nI0206 13:09:34.427666 61 log.go:172] (0xc00067a0b0) 
(0xc000390000) Create stream\nI0206 13:09:34.427672 61 log.go:172] (0xc00067a0b0) (0xc000390000) Stream added, broadcasting: 3\nI0206 13:09:34.428840 61 log.go:172] (0xc00067a0b0) Reply frame received for 3\nI0206 13:09:34.428864 61 log.go:172] (0xc00067a0b0) (0xc000398000) Create stream\nI0206 13:09:34.428875 61 log.go:172] (0xc00067a0b0) (0xc000398000) Stream added, broadcasting: 5\nI0206 13:09:34.430084 61 log.go:172] (0xc00067a0b0) Reply frame received for 5\nI0206 13:09:34.781415 61 log.go:172] (0xc00067a0b0) Data frame received for 5\nI0206 13:09:34.781511 61 log.go:172] (0xc00067a0b0) Data frame received for 3\nI0206 13:09:34.781526 61 log.go:172] (0xc000390000) (3) Data frame handling\nI0206 13:09:34.781531 61 log.go:172] (0xc000390000) (3) Data frame sent\nI0206 13:09:34.781547 61 log.go:172] (0xc000398000) (5) Data frame handling\nI0206 13:09:34.781568 61 log.go:172] (0xc000398000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0206 13:09:34.920833 61 log.go:172] (0xc00067a0b0) Data frame received for 1\nI0206 13:09:34.920914 61 log.go:172] (0xc000216960) (1) Data frame handling\nI0206 13:09:34.920928 61 log.go:172] (0xc000216960) (1) Data frame sent\nI0206 13:09:34.921268 61 log.go:172] (0xc00067a0b0) (0xc000216960) Stream removed, broadcasting: 1\nI0206 13:09:34.922033 61 log.go:172] (0xc00067a0b0) (0xc000390000) Stream removed, broadcasting: 3\nI0206 13:09:34.922277 61 log.go:172] (0xc00067a0b0) (0xc000398000) Stream removed, broadcasting: 5\nI0206 13:09:34.922300 61 log.go:172] (0xc00067a0b0) (0xc000216960) Stream removed, broadcasting: 1\nI0206 13:09:34.922329 61 log.go:172] (0xc00067a0b0) (0xc000390000) Stream removed, broadcasting: 3\nI0206 13:09:34.922338 61 log.go:172] (0xc00067a0b0) (0xc000398000) Stream removed, broadcasting: 5\n" Feb 6 13:09:34.926: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 6 13:09:34.926: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 6 13:09:34.926: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9245 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 6 13:09:35.384: INFO: stderr: "I0206 13:09:35.047130 73 log.go:172] (0xc0008d8210) (0xc00048b040) Create stream\nI0206 13:09:35.047266 73 log.go:172] (0xc0008d8210) (0xc00048b040) Stream added, broadcasting: 1\nI0206 13:09:35.058321 73 log.go:172] (0xc0008d8210) Reply frame received for 1\nI0206 13:09:35.058430 73 log.go:172] (0xc0008d8210) (0xc00003b860) Create stream\nI0206 13:09:35.058462 73 log.go:172] (0xc0008d8210) (0xc00003b860) Stream added, broadcasting: 3\nI0206 13:09:35.060158 73 log.go:172] (0xc0008d8210) Reply frame received for 3\nI0206 13:09:35.060189 73 log.go:172] (0xc0008d8210) (0xc00048b0e0) Create stream\nI0206 13:09:35.060200 73 log.go:172] (0xc0008d8210) (0xc00048b0e0) Stream added, broadcasting: 5\nI0206 13:09:35.061161 73 log.go:172] (0xc0008d8210) Reply frame received for 5\nI0206 13:09:35.193291 73 log.go:172] (0xc0008d8210) Data frame received for 5\nI0206 13:09:35.193324 73 log.go:172] (0xc00048b0e0) (5) Data frame handling\nI0206 13:09:35.193336 73 log.go:172] (0xc00048b0e0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0206 13:09:35.241402 73 log.go:172] (0xc0008d8210) Data frame received for 5\nI0206 13:09:35.241466 73 log.go:172] (0xc00048b0e0) (5) Data frame handling\nI0206 13:09:35.241482 73 log.go:172] (0xc00048b0e0) 
(5) Data frame sent\nI0206 13:09:35.241494 73 log.go:172] (0xc0008d8210) Data frame received for 5\nI0206 13:09:35.241503 73 log.go:172] (0xc00048b0e0) (5) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0206 13:09:35.241525 73 log.go:172] (0xc00048b0e0) (5) Data frame sent\nI0206 13:09:35.241552 73 log.go:172] (0xc0008d8210) Data frame received for 3\nI0206 13:09:35.241586 73 log.go:172] (0xc00003b860) (3) Data frame handling\nI0206 13:09:35.241606 73 log.go:172] (0xc00003b860) (3) Data frame sent\nI0206 13:09:35.378124 73 log.go:172] (0xc0008d8210) (0xc00003b860) Stream removed, broadcasting: 3\nI0206 13:09:35.378206 73 log.go:172] (0xc0008d8210) Data frame received for 1\nI0206 13:09:35.378218 73 log.go:172] (0xc0008d8210) (0xc00048b0e0) Stream removed, broadcasting: 5\nI0206 13:09:35.378245 73 log.go:172] (0xc00048b040) (1) Data frame handling\nI0206 13:09:35.378256 73 log.go:172] (0xc00048b040) (1) Data frame sent\nI0206 13:09:35.378263 73 log.go:172] (0xc0008d8210) (0xc00048b040) Stream removed, broadcasting: 1\nI0206 13:09:35.378270 73 log.go:172] (0xc0008d8210) Go away received\nI0206 13:09:35.378717 73 log.go:172] (0xc0008d8210) (0xc00048b040) Stream removed, broadcasting: 1\nI0206 13:09:35.378742 73 log.go:172] (0xc0008d8210) (0xc00003b860) Stream removed, broadcasting: 3\nI0206 13:09:35.378752 73 log.go:172] (0xc0008d8210) (0xc00048b0e0) Stream removed, broadcasting: 5\n" Feb 6 13:09:35.384: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 6 13:09:35.385: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 6 13:09:35.385: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9245 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 6 13:09:35.880: INFO: stderr: "I0206 13:09:35.602949 88 log.go:172] (0xc0009da420) (0xc000300780) Create stream\nI0206 13:09:35.603054 88 log.go:172] (0xc0009da420) (0xc000300780) Stream added, broadcasting: 1\nI0206 13:09:35.616024 88 log.go:172] (0xc0009da420) Reply frame received for 1\nI0206 13:09:35.616098 88 log.go:172] (0xc0009da420) (0xc000572280) Create stream\nI0206 13:09:35.616106 88 log.go:172] (0xc0009da420) (0xc000572280) Stream added, broadcasting: 3\nI0206 13:09:35.617924 88 log.go:172] (0xc0009da420) Reply frame received for 3\nI0206 13:09:35.617979 88 log.go:172] (0xc0009da420) (0xc000300000) Create stream\nI0206 13:09:35.617994 88 log.go:172] (0xc0009da420) (0xc000300000) Stream added, broadcasting: 5\nI0206 13:09:35.620381 88 log.go:172] (0xc0009da420) Reply frame received for 5\nI0206 13:09:35.719164 88 log.go:172] (0xc0009da420) Data frame received for 5\nI0206 13:09:35.719245 88 log.go:172] (0xc000300000) (5) Data frame handling\nI0206 13:09:35.719268 88 log.go:172] (0xc000300000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\nI0206 13:09:35.719293 88 log.go:172] (0xc0009da420) Data frame received for 3\nI0206 13:09:35.719301 88 log.go:172] (0xc000572280) (3) Data frame handling\nI0206 13:09:35.719318 88 log.go:172] (0xc000572280) (3) Data frame sent\nI0206 13:09:35.722027 88 log.go:172] (0xc0009da420) Data frame received for 5\nI0206 13:09:35.722048 88 log.go:172] (0xc000300000) (5) Data frame handling\nI0206 13:09:35.722060 88 log.go:172] (0xc000300000) (5) Data frame sent\n+ true\nI0206 13:09:35.873598 88 
log.go:172] (0xc0009da420) (0xc000572280) Stream removed, broadcasting: 3\nI0206 13:09:35.873771 88 log.go:172] (0xc0009da420) Data frame received for 1\nI0206 13:09:35.873798 88 log.go:172] (0xc000300780) (1) Data frame handling\nI0206 13:09:35.873816 88 log.go:172] (0xc000300780) (1) Data frame sent\nI0206 13:09:35.873862 88 log.go:172] (0xc0009da420) (0xc000300780) Stream removed, broadcasting: 1\nI0206 13:09:35.873941 88 log.go:172] (0xc0009da420) (0xc000300000) Stream removed, broadcasting: 5\nI0206 13:09:35.873982 88 log.go:172] (0xc0009da420) Go away received\nI0206 13:09:35.874389 88 log.go:172] (0xc0009da420) (0xc000300780) Stream removed, broadcasting: 1\nI0206 13:09:35.874404 88 log.go:172] (0xc0009da420) (0xc000572280) Stream removed, broadcasting: 3\nI0206 13:09:35.874415 88 log.go:172] (0xc0009da420) (0xc000300000) Stream removed, broadcasting: 5\n" Feb 6 13:09:35.880: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 6 13:09:35.880: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 6 13:09:35.890: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Feb 6 13:09:35.890: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Feb 6 13:09:35.890: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=false Feb 6 13:09:45.905: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Feb 6 13:09:45.905: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Feb 6 13:09:45.905: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Feb 6 13:09:45.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9245 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 6 13:09:46.427: INFO: stderr: "I0206 13:09:46.116304 108 log.go:172] (0xc0008a2370) (0xc000718960) Create stream\nI0206 13:09:46.116427 108 log.go:172] (0xc0008a2370) (0xc000718960) Stream added, broadcasting: 1\nI0206 13:09:46.119659 108 log.go:172] (0xc0008a2370) Reply frame received for 1\nI0206 13:09:46.119689 108 log.go:172] (0xc0008a2370) (0xc0007d4460) Create stream\nI0206 13:09:46.119697 108 log.go:172] (0xc0008a2370) (0xc0007d4460) Stream added, broadcasting: 3\nI0206 13:09:46.120616 108 log.go:172] (0xc0008a2370) Reply frame received for 3\nI0206 13:09:46.120636 108 log.go:172] (0xc0008a2370) (0xc000718a00) Create stream\nI0206 13:09:46.120643 108 log.go:172] (0xc0008a2370) (0xc000718a00) Stream added, broadcasting: 5\nI0206 13:09:46.121918 108 log.go:172] (0xc0008a2370) Reply frame received for 5\nI0206 13:09:46.252466 108 log.go:172] (0xc0008a2370) Data frame received for 5\nI0206 13:09:46.252508 108 log.go:172] (0xc000718a00) (5) Data frame handling\nI0206 13:09:46.252532 108 log.go:172] (0xc000718a00) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0206 13:09:46.254789 108 log.go:172] (0xc0008a2370) Data frame received for 3\nI0206 13:09:46.254820 108 log.go:172] (0xc0007d4460) (3) Data frame handling\nI0206 13:09:46.254850 108 log.go:172] (0xc0007d4460) (3) Data frame sent\nI0206 13:09:46.417214 108 log.go:172] (0xc0008a2370) (0xc0007d4460) Stream removed, broadcasting: 3\nI0206 13:09:46.417318 108 log.go:172] (0xc0008a2370) Data frame received for 
1\nI0206 13:09:46.417355 108 log.go:172] (0xc000718960) (1) Data frame handling\nI0206 13:09:46.417383 108 log.go:172] (0xc000718960) (1) Data frame sent\nI0206 13:09:46.417445 108 log.go:172] (0xc0008a2370) (0xc000718960) Stream removed, broadcasting: 1\nI0206 13:09:46.417481 108 log.go:172] (0xc0008a2370) (0xc000718a00) Stream removed, broadcasting: 5\nI0206 13:09:46.417520 108 log.go:172] (0xc0008a2370) Go away received\nI0206 13:09:46.418438 108 log.go:172] (0xc0008a2370) (0xc000718960) Stream removed, broadcasting: 1\nI0206 13:09:46.418461 108 log.go:172] (0xc0008a2370) (0xc0007d4460) Stream removed, broadcasting: 3\nI0206 13:09:46.418470 108 log.go:172] (0xc0008a2370) (0xc000718a00) Stream removed, broadcasting: 5\n" Feb 6 13:09:46.427: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 6 13:09:46.427: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 6 13:09:46.427: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9245 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 6 13:09:46.795: INFO: stderr: "I0206 13:09:46.568466 129 log.go:172] (0xc000950a50) (0xc00094ce60) Create stream\nI0206 13:09:46.568608 129 log.go:172] (0xc000950a50) (0xc00094ce60) Stream added, broadcasting: 1\nI0206 13:09:46.589578 129 log.go:172] (0xc000950a50) Reply frame received for 1\nI0206 13:09:46.589618 129 log.go:172] (0xc000950a50) (0xc00094c000) Create stream\nI0206 13:09:46.589625 129 log.go:172] (0xc000950a50) (0xc00094c000) Stream added, broadcasting: 3\nI0206 13:09:46.590519 129 log.go:172] (0xc000950a50) Reply frame received for 3\nI0206 13:09:46.590541 129 log.go:172] (0xc000950a50) (0xc00003abe0) Create stream\nI0206 13:09:46.590572 129 log.go:172] (0xc000950a50) (0xc00003abe0) Stream added, broadcasting: 5\nI0206 13:09:46.591457 129 log.go:172] (0xc000950a50) Reply frame received for 5\nI0206 13:09:46.657081 129 log.go:172] (0xc000950a50) Data frame received for 5\nI0206 13:09:46.657139 129 log.go:172] (0xc00003abe0) (5) Data frame handling\nI0206 13:09:46.657162 129 log.go:172] (0xc00003abe0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0206 13:09:46.691806 129 log.go:172] (0xc000950a50) Data frame received for 3\nI0206 13:09:46.691826 129 log.go:172] (0xc00094c000) (3) Data frame handling\nI0206 13:09:46.691834 129 log.go:172] (0xc00094c000) (3) Data frame sent\nI0206 13:09:46.790023 129 log.go:172] (0xc000950a50) Data frame received for 1\nI0206 13:09:46.790109 129 log.go:172] (0xc00094ce60) (1) Data frame handling\nI0206 13:09:46.790125 129 log.go:172] (0xc00094ce60) (1) Data frame sent\nI0206 13:09:46.790309 129 log.go:172] (0xc000950a50) (0xc00003abe0) Stream removed, broadcasting: 5\nI0206 13:09:46.790440 129 log.go:172] (0xc000950a50) (0xc00094ce60) Stream removed, broadcasting: 1\nI0206 13:09:46.790626 129 log.go:172] (0xc000950a50) (0xc00094c000) Stream removed, broadcasting: 3\nI0206 13:09:46.790727 129 log.go:172] (0xc000950a50) (0xc00094ce60) Stream removed, broadcasting: 1\nI0206 13:09:46.790776 129 log.go:172] (0xc000950a50) (0xc00094c000) Stream removed, broadcasting: 3\nI0206 13:09:46.790817 129 log.go:172] (0xc000950a50) (0xc00003abe0) Stream removed, broadcasting: 5\nI0206 13:09:46.790853 129 log.go:172] (0xc000950a50) Go away received\n" Feb 6 13:09:46.795: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 6 13:09:46.795: INFO: stdout 
of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 6 13:09:46.795: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9245 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 6 13:09:47.264: INFO: stderr: "I0206 13:09:46.965640 145 log.go:172] (0xc0007f4160) (0xc0009205a0) Create stream\nI0206 13:09:46.965766 145 log.go:172] (0xc0007f4160) (0xc0009205a0) Stream added, broadcasting: 1\nI0206 13:09:46.970900 145 log.go:172] (0xc0007f4160) Reply frame received for 1\nI0206 13:09:46.970931 145 log.go:172] (0xc0007f4160) (0xc000596280) Create stream\nI0206 13:09:46.970941 145 log.go:172] (0xc0007f4160) (0xc000596280) Stream added, broadcasting: 3\nI0206 13:09:46.972250 145 log.go:172] (0xc0007f4160) Reply frame received for 3\nI0206 13:09:46.972290 145 log.go:172] (0xc0007f4160) (0xc000340000) Create stream\nI0206 13:09:46.972304 145 log.go:172] (0xc0007f4160) (0xc000340000) Stream added, broadcasting: 5\nI0206 13:09:46.973550 145 log.go:172] (0xc0007f4160) Reply frame received for 5\nI0206 13:09:47.105477 145 log.go:172] (0xc0007f4160) Data frame received for 5\nI0206 13:09:47.105508 145 log.go:172] (0xc000340000) (5) Data frame handling\nI0206 13:09:47.105523 145 log.go:172] (0xc000340000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0206 13:09:47.124240 145 log.go:172] (0xc0007f4160) Data frame received for 3\nI0206 13:09:47.124258 145 log.go:172] (0xc000596280) (3) Data frame handling\nI0206 13:09:47.124281 145 log.go:172] (0xc000596280) (3) Data frame sent\nI0206 13:09:47.251328 145 log.go:172] (0xc0007f4160) Data frame received for 1\nI0206 13:09:47.251441 145 log.go:172] (0xc0009205a0) (1) Data frame handling\nI0206 13:09:47.251474 145 log.go:172] (0xc0009205a0) (1) Data frame sent\nI0206 13:09:47.251972 145 log.go:172] (0xc0007f4160) (0xc0009205a0) Stream removed, broadcasting: 1\nI0206 13:09:47.252146 145 log.go:172] (0xc0007f4160) (0xc000596280) Stream removed, broadcasting: 3\nI0206 13:09:47.252305 145 log.go:172] (0xc0007f4160) (0xc000340000) Stream removed, broadcasting: 5\nI0206 13:09:47.252425 145 log.go:172] (0xc0007f4160) Go away received\nI0206 13:09:47.252519 145 log.go:172] (0xc0007f4160) (0xc0009205a0) Stream removed, broadcasting: 1\nI0206 13:09:47.252537 145 log.go:172] (0xc0007f4160) (0xc000596280) Stream removed, broadcasting: 3\nI0206 13:09:47.252541 145 log.go:172] (0xc0007f4160) (0xc000340000) Stream removed, broadcasting: 5\n" Feb 6 13:09:47.264: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 6 13:09:47.264: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 6 13:09:47.264: INFO: Waiting for statefulset status.replicas updated to 0 Feb 6 13:09:47.295: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Feb 6 13:09:57.310: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Feb 6 13:09:57.310: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Feb 6 13:09:57.310: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Feb 6 13:09:57.389: INFO: POD NODE PHASE GRACE CONDITIONS Feb 6 13:09:57.389: INFO: ss-0 iruya-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:08:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 
UTC 2020-02-06 13:09:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:09:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:08:59 +0000 UTC }] Feb 6 13:09:57.389: INFO: ss-1 iruya-server-sfge57q7djm7 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:09:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:09:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:09:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:09:23 +0000 UTC }] Feb 6 13:09:57.389: INFO: ss-2 iruya-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:09:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:09:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:09:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:09:23 +0000 UTC }] Feb 6 13:09:57.389: INFO: Feb 6 13:09:57.389: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 6 13:09:59.201: INFO: POD NODE PHASE GRACE CONDITIONS Feb 6 13:09:59.201: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:08:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:09:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:09:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:08:59 +0000 UTC }] Feb 6 13:09:59.201: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:09:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:09:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:09:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:09:23 +0000 UTC }] Feb 6 13:09:59.201: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:09:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:09:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:09:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:09:23 +0000 UTC }] Feb 6 13:09:59.201: INFO: Feb 6 13:09:59.201: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 6 13:10:00.771: INFO: POD NODE PHASE GRACE CONDITIONS Feb 6 13:10:00.771: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:08:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:09:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:09:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 
0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:08:59 +0000 UTC }] Feb 6 13:10:00.771: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:09:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:09:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:09:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:09:23 +0000 UTC }] Feb 6 13:10:00.771: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:09:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:09:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:09:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:09:23 +0000 UTC }] Feb 6 13:10:00.771: INFO: Feb 6 13:10:00.771: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 6 13:10:01.993: INFO: POD NODE PHASE GRACE CONDITIONS Feb 6 13:10:01.993: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:08:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:09:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:09:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:08:59 +0000 UTC }] Feb 6 13:10:01.993: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:09:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:09:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:09:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:09:23 +0000 UTC }] Feb 6 13:10:01.993: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:09:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:09:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:09:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:09:23 +0000 UTC }] Feb 6 13:10:01.993: INFO: Feb 6 13:10:01.993: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 6 13:10:03.931: INFO: POD NODE PHASE GRACE CONDITIONS Feb 6 13:10:03.931: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:08:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:09:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:09:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:08:59 +0000 UTC }] Feb 6 13:10:03.931: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:09:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 
2020-02-06 13:09:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:09:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:09:23 +0000 UTC }] Feb 6 13:10:03.931: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:09:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:09:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:09:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:09:23 +0000 UTC }] Feb 6 13:10:03.931: INFO: Feb 6 13:10:03.931: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 6 13:10:04.940: INFO: POD NODE PHASE GRACE CONDITIONS Feb 6 13:10:04.940: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:08:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:09:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:09:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:08:59 +0000 UTC }] Feb 6 13:10:04.940: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:09:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:09:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:09:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:09:23 +0000 UTC }] Feb 6 13:10:04.940: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:09:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:09:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:09:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:09:23 +0000 UTC }] Feb 6 13:10:04.940: INFO: Feb 6 13:10:04.940: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 6 13:10:05.951: INFO: POD NODE PHASE GRACE CONDITIONS Feb 6 13:10:05.951: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:08:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:09:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:09:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:08:59 +0000 UTC }] Feb 6 13:10:05.951: INFO: ss-1 iruya-server-sfge57q7djm7 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:09:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:09:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:09:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 
0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:09:23 +0000 UTC }] Feb 6 13:10:05.951: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:09:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:09:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:09:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:09:23 +0000 UTC }] Feb 6 13:10:05.951: INFO: Feb 6 13:10:05.951: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 6 13:10:06.962: INFO: POD NODE PHASE GRACE CONDITIONS Feb 6 13:10:06.962: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:08:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:09:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:09:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:08:59 +0000 UTC }] Feb 6 13:10:06.962: INFO: ss-1 iruya-server-sfge57q7djm7 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:09:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:09:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:09:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:09:23 +0000 UTC }] Feb 6 13:10:06.962: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:09:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:09:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:09:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:09:23 +0000 UTC }] Feb 6 13:10:06.962: INFO: Feb 6 13:10:06.962: INFO: StatefulSet ss has not reached scale 0, at 3 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-9245 Feb 6 13:10:07.991: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9245 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 6 13:10:08.219: INFO: rc: 1 Feb 6 13:10:08.219: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9245 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc001ab5b30 exit status 1 true [0xc000010e60 0xc000010eb0 0xc000010ef0] [0xc000010e60 0xc000010eb0 0xc000010ef0] [0xc000010e98 0xc000010ed0] [0xba6c50 0xba6c50] 0xc002089bc0 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 Feb 6 13:10:18.220: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9245 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 6 13:10:18.345: INFO: rc: 1 Feb 6 13:10:18.346: INFO: Waiting 10s to retry 
failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9245 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001dcc4b0 exit status 1 true [0xc00117b128 0xc00117b230 0xc00117b388] [0xc00117b128 0xc00117b230 0xc00117b388] [0xc00117b210 0xc00117b2f8] [0xba6c50 0xba6c50] 0xc001591200 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 6 13:10:28.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9245 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 6 13:10:28.492: INFO: rc: 1 Feb 6 13:10:28.492: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9245 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002d5e090 exit status 1 true [0xc00219e000 0xc00219e038 0xc00219e050] [0xc00219e000 0xc00219e038 0xc00219e050] [0xc00219e030 0xc00219e048] [0xba6c50 0xba6c50] 0xc002eba480 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 6 13:10:38.492: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9245 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 6 13:10:38.619: INFO: rc: 1 Feb 6 13:10:38.620: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9245 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001ab5c20 exit status 1 true [0xc000010f10 0xc000010fa8 0xc000010ff8] [0xc000010f10 0xc000010fa8 0xc000010ff8] [0xc000010f88 0xc000010fd8] [0xba6c50 0xba6c50] 0xc002089ec0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 6 13:10:48.620: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9245 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 6 13:10:48.727: INFO: rc: 1 Feb 6 13:10:48.727: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9245 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0021e7170 exit status 1 true [0xc0005215a8 0xc0005215e8 0xc000521698] [0xc0005215a8 0xc0005215e8 0xc000521698] [0xc0005215d0 0xc000521680] [0xba6c50 0xba6c50] 0xc002e22720 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 6 13:10:58.727: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9245 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 6 13:10:58.893: INFO: rc: 1 Feb 6 13:10:58.893: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9245 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 
0xc001ab5ce0 exit status 1 true [0xc000011028 0xc000011068 0xc0000110b8] [0xc000011028 0xc000011068 0xc0000110b8] [0xc000011040 0xc0000110a8] [0xba6c50 0xba6c50] 0xc002cb02a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 6 13:11:08.894: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9245 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 6 13:11:09.058: INFO: rc: 1 Feb 6 13:11:09.058: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9245 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0021e7290 exit status 1 true [0xc0005216a8 0xc000521728 0xc0005217a8] [0xc0005216a8 0xc000521728 0xc0005217a8] [0xc0005216f0 0xc000521740] [0xba6c50 0xba6c50] 0xc002e22a20 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 6 13:11:19.059: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9245 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 6 13:11:19.216: INFO: rc: 1 Feb 6 13:11:19.216: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9245 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0021e7350 exit status 1 true [0xc000521820 0xc0005218c0 0xc0005219f8] [0xc000521820 0xc0005218c0 0xc0005219f8] [0xc0005218a8 0xc0005219c8] [0xba6c50 0xba6c50] 0xc002e22d20 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 6 13:11:29.216: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9245 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 6 13:11:29.368: INFO: rc: 1 Feb 6 13:11:29.368: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9245 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001dcc630 exit status 1 true [0xc00117b430 0xc00117b550 0xc00117b788] [0xc00117b430 0xc00117b550 0xc00117b788] [0xc00117b520 0xc00117b6c8] [0xba6c50 0xba6c50] 0xc0015916e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 6 13:11:39.369: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9245 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 6 13:11:39.492: INFO: rc: 1 Feb 6 13:11:39.492: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9245 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002fbc030 exit status 1 true [0xc0005204e0 0xc000520ea8 0xc0005212a0] [0xc0005204e0 0xc000520ea8 0xc0005212a0] [0xc000520dd8 0xc0005210a0] [0xba6c50 0xba6c50] 0xc002088d20 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit 
status 1 Feb 6 13:11:49.492: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9245 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 6 13:11:49.672: INFO: rc: 1 Feb 6 13:11:49.672: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9245 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001a24090 exit status 1 true [0xc00219e000 0xc00219e038 0xc00219e050] [0xc00219e000 0xc00219e038 0xc00219e050] [0xc00219e030 0xc00219e048] [0xba6c50 0xba6c50] 0xc002ed2540 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 6 13:11:59.673: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9245 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 6 13:11:59.845: INFO: rc: 1 Feb 6 13:11:59.845: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9245 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002fbc120 exit status 1 true [0xc000521448 0xc0005215c8 0xc000521620] [0xc000521448 0xc0005215c8 0xc000521620] [0xc0005215a8 0xc0005215e8] [0xba6c50 0xba6c50] 0xc0020891a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 6 13:12:09.845: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9245 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 6 13:12:09.976: INFO: rc: 1 Feb 6 13:12:09.976: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9245 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001a241e0 exit status 1 true [0xc00219e058 0xc00219e070 0xc00219e0a0] [0xc00219e058 0xc00219e070 0xc00219e0a0] [0xc00219e068 0xc00219e098] [0xba6c50 0xba6c50] 0xc002ed3bc0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 6 13:12:19.977: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9245 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 6 13:12:20.078: INFO: rc: 1 Feb 6 13:12:20.078: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9245 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001a242a0 exit status 1 true [0xc00219e0b8 0xc00219e110 0xc00219e128] [0xc00219e0b8 0xc00219e110 0xc00219e128] [0xc00219e0f8 0xc00219e120] [0xba6c50 0xba6c50] 0xc001dad680 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 6 13:12:30.078: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9245 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 6 13:12:30.192: INFO: rc: 1 Feb 6 13:12:30.192: INFO: 
Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9245 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0021e6090 exit status 1 true [0xc00117a1f0 0xc00117a2f8 0xc00117a668] [0xc00117a1f0 0xc00117a2f8 0xc00117a668] [0xc00117a2b8 0xc00117a358] [0xba6c50 0xba6c50] 0xc002e22240 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 6 13:12:40.192: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9245 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 6 13:12:40.314: INFO: rc: 1 Feb 6 13:12:40.314: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9245 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0021e6180 exit status 1 true [0xc00117aa20 0xc00117abb0 0xc00117ad78] [0xc00117aa20 0xc00117abb0 0xc00117ad78] [0xc00117ab00 0xc00117ad18] [0xba6c50 0xba6c50] 0xc002e22600 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 6 13:12:50.314: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9245 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 6 13:12:50.457: INFO: rc: 1 Feb 6 13:12:50.457: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9245 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001a24390 exit status 1 true [0xc00219e130 0xc00219e148 0xc00219e168] [0xc00219e130 0xc00219e148 0xc00219e168] [0xc00219e140 0xc00219e160] [0xba6c50 0xba6c50] 0xc002eba480 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 6 13:13:00.458: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9245 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 6 13:13:00.600: INFO: rc: 1 Feb 6 13:13:00.600: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9245 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002d5e0c0 exit status 1 true [0xc000010010 0xc000010148 0xc000010230] [0xc000010010 0xc000010148 0xc000010230] [0xc000010078 0xc000010200] [0xba6c50 0xba6c50] 0xc001590240 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 6 13:13:10.601: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9245 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 6 13:13:10.730: INFO: rc: 1 Feb 6 13:13:10.730: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9245 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods 
"ss-0" not found [] 0xc001a24480 exit status 1 true [0xc00219e170 0xc00219e188 0xc00219e1a0] [0xc00219e170 0xc00219e188 0xc00219e1a0] [0xc00219e180 0xc00219e198] [0xba6c50 0xba6c50] 0xc002ebaa80 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 6 13:13:20.731: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9245 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 6 13:13:20.823: INFO: rc: 1 Feb 6 13:13:20.823: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9245 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002fbc240 exit status 1 true [0xc000521680 0xc0005216b0 0xc000521730] [0xc000521680 0xc0005216b0 0xc000521730] [0xc0005216a8 0xc000521728] [0xba6c50 0xba6c50] 0xc0020894a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 6 13:13:30.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9245 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 6 13:13:30.933: INFO: rc: 1 Feb 6 13:13:30.934: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9245 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001a24750 exit status 1 true [0xc00219e1a8 0xc00219e1c0 0xc00219e1d8] [0xc00219e1a8 0xc00219e1c0 0xc00219e1d8] [0xc00219e1b8 0xc00219e1d0] [0xba6c50 0xba6c50] 0xc002ebb080 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 6 13:13:40.935: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9245 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 6 13:13:41.084: INFO: rc: 1 Feb 6 13:13:41.084: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9245 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002d5e1b0 exit status 1 true [0xc000010338 0xc0000103f8 0xc000010528] [0xc000010338 0xc0000103f8 0xc000010528] [0xc0000103a8 0xc0000104b0] [0xba6c50 0xba6c50] 0xc0015905a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 6 13:13:51.084: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9245 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 6 13:13:51.185: INFO: rc: 1 Feb 6 13:13:51.185: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9245 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0021e60c0 exit status 1 true [0xc00117a1f0 0xc00117a2f8 0xc00117a668] [0xc00117a1f0 0xc00117a2f8 0xc00117a668] [0xc00117a2b8 0xc00117a358] [0xba6c50 0xba6c50] 0xc002ed23c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not 
found error: exit status 1 Feb 6 13:14:01.185: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9245 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 6 13:14:01.334: INFO: rc: 1 Feb 6 13:14:01.334: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9245 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0021e61e0 exit status 1 true [0xc00117a928 0xc00117ab00 0xc00117ad18] [0xc00117a928 0xc00117ab00 0xc00117ad18] [0xc00117aaf8 0xc00117ac48] [0xba6c50 0xba6c50] 0xc002ed3b00 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 6 13:14:11.335: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9245 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 6 13:14:11.464: INFO: rc: 1 Feb 6 13:14:11.464: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9245 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001a24120 exit status 1 true [0xc000010010 0xc000010148 0xc000010230] [0xc000010010 0xc000010148 0xc000010230] [0xc000010078 0xc000010200] [0xba6c50 0xba6c50] 0xc002e22240 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 6 13:14:21.464: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9245 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 6 13:14:21.641: INFO: rc: 1 Feb 6 13:14:21.641: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9245 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0021e62d0 exit status 1 true [0xc00117ad78 0xc00117ae78 0xc00117aed0] [0xc00117ad78 0xc00117ae78 0xc00117aed0] [0xc00117ae40 0xc00117aeb0] [0xba6c50 0xba6c50] 0xc002ed3e60 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 6 13:14:31.641: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9245 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 6 13:14:31.805: INFO: rc: 1 Feb 6 13:14:31.805: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9245 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0021e63f0 exit status 1 true [0xc00117af68 0xc00117b0d8 0xc00117b210] [0xc00117af68 0xc00117b0d8 0xc00117b210] [0xc00117b058 0xc00117b1f8] [0xba6c50 0xba6c50] 0xc001590240 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 6 13:14:41.805: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9245 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 6 13:14:41.955: INFO: rc: 1 Feb 6 
13:14:41.955: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9245 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002d5e210 exit status 1 true [0xc00219e000 0xc00219e038 0xc00219e050] [0xc00219e000 0xc00219e038 0xc00219e050] [0xc00219e030 0xc00219e048] [0xba6c50 0xba6c50] 0xc002eba480 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 6 13:14:51.956: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9245 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 6 13:14:52.113: INFO: rc: 1 Feb 6 13:14:52.113: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9245 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001a24210 exit status 1 true [0xc000010580 0xc0000106f8 0xc0000107b8] [0xc000010580 0xc0000106f8 0xc0000107b8] [0xc000010668 0xc000010768] [0xba6c50 0xba6c50] 0xc002e22600 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 6 13:15:02.113: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9245 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 6 13:15:02.248: INFO: rc: 1 Feb 6 13:15:02.248: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9245 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002d5e330 exit status 1 true [0xc00219e058 0xc00219e070 0xc00219e0a0] [0xc00219e058 0xc00219e070 0xc00219e0a0] [0xc00219e068 0xc00219e098] [0xba6c50 0xba6c50] 0xc002ebaa80 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 6 13:15:12.249: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9245 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 6 13:15:12.379: INFO: rc: 1 Feb 6 13:15:12.379: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: Feb 6 13:15:12.379: INFO: Scaling statefulset ss to 0 Feb 6 13:15:12.390: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Feb 6 13:15:12.392: INFO: Deleting all statefulset in ns statefulset-9245 Feb 6 13:15:12.394: INFO: Scaling statefulset ss to 0 Feb 6 13:15:12.412: INFO: Waiting for statefulset status.replicas updated to 0 Feb 6 13:15:12.414: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 6 13:15:12.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9245" for this suite. 
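
Context for the exec traffic above (a reconstruction from the log, not part of the run): this burst-scaling test drives pod readiness through nginx's HTTP probe. Moving /usr/share/nginx/html/index.html to /tmp makes the probe fail (Ready=false); moving it back restores readiness. The trailing "|| true" keeps the shell's exit status zero even when the file is already gone, and the 10-second RunHostCmd retry loop keeps failing with NotFound because the scale-down already deleted ss-0, so it eventually gives up with empty stdout before the final "Scaling statefulset ss to 0". A minimal by-hand equivalent of the readiness toggle (pod, namespace, and paths exactly as in the log):

    kubectl --kubeconfig=/root/.kube/config -n statefulset-9245 exec ss-0 -- \
      /bin/sh -x -c 'mv -v /usr/share/nginx/html/index.html /tmp/ || true'   # probe fails -> pod goes unready
    kubectl --kubeconfig=/root/.kube/config -n statefulset-9245 exec ss-0 -- \
      /bin/sh -x -c 'mv -v /tmp/index.html /usr/share/nginx/html/ || true'   # probe passes -> pod ready again
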
Feb 6 13:15:18.528: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 6 13:15:18.619: INFO: namespace statefulset-9245 deletion completed in 6.162854224s • [SLOW TEST:379.357 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 6 13:15:18.619: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-a7966ef5-1b5b-44d2-965a-27430f595019 STEP: Creating a pod to test consume configMaps Feb 6 13:15:18.699: INFO: Waiting up to 5m0s for pod "pod-configmaps-b7a10f58-e81a-4308-83cb-389af6f9faa4" in namespace "configmap-1584" to be "success or failure" Feb 6 13:15:18.774: INFO: Pod "pod-configmaps-b7a10f58-e81a-4308-83cb-389af6f9faa4": Phase="Pending", Reason="", readiness=false. Elapsed: 74.794521ms Feb 6 13:15:20.783: INFO: Pod "pod-configmaps-b7a10f58-e81a-4308-83cb-389af6f9faa4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.083890271s Feb 6 13:15:22.797: INFO: Pod "pod-configmaps-b7a10f58-e81a-4308-83cb-389af6f9faa4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.097561483s Feb 6 13:15:24.805: INFO: Pod "pod-configmaps-b7a10f58-e81a-4308-83cb-389af6f9faa4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.106177804s Feb 6 13:15:26.821: INFO: Pod "pod-configmaps-b7a10f58-e81a-4308-83cb-389af6f9faa4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.121482205s Feb 6 13:15:28.830: INFO: Pod "pod-configmaps-b7a10f58-e81a-4308-83cb-389af6f9faa4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.130634037s STEP: Saw pod success Feb 6 13:15:28.830: INFO: Pod "pod-configmaps-b7a10f58-e81a-4308-83cb-389af6f9faa4" satisfied condition "success or failure" Feb 6 13:15:28.835: INFO: Trying to get logs from node iruya-node pod pod-configmaps-b7a10f58-e81a-4308-83cb-389af6f9faa4 container configmap-volume-test: STEP: delete the pod Feb 6 13:15:28.930: INFO: Waiting for pod pod-configmaps-b7a10f58-e81a-4308-83cb-389af6f9faa4 to disappear Feb 6 13:15:28.951: INFO: Pod pod-configmaps-b7a10f58-e81a-4308-83cb-389af6f9faa4 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 6 13:15:28.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1584" for this suite. 
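
For orientation, the shape of pod this ConfigMap test creates, sketched as a hypothetical minimal spec (only the configMap name and container name are taken from the log; the image, key, and mount path are illustrative): the pod mounts the configMap as a volume, the container prints a key's projected file and exits, and Phase=Succeeded satisfies the "success or failure" wait.

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-configmaps-demo                     # hypothetical name
    spec:
      restartPolicy: Never                          # must terminate so the pod can reach Succeeded
      volumes:
        - name: configmap-volume
          configMap:
            name: configmap-test-volume-a7966ef5-1b5b-44d2-965a-27430f595019   # from the log
      containers:
        - name: configmap-volume-test               # container name as logged
          image: busybox                            # illustrative; the suite ships its own test image
          command: ["cat", "/etc/configmap-volume/data-1"]                     # hypothetical key and path
          volumeMounts:
            - name: configmap-volume
              mountPath: /etc/configmap-volume
              readOnly: true
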
Feb 6 13:15:35.080: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 6 13:15:35.216: INFO: namespace configmap-1584 deletion completed in 6.25969694s • [SLOW TEST:16.597 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 6 13:15:35.217: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating cluster-info Feb 6 13:15:35.315: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Feb 6 13:15:35.436: INFO: stderr: "" Feb 6 13:15:35.436: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 6 13:15:35.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4303" for this suite. 
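
The \x1b[0;32m... sequences in the cluster-info stdout above are ANSI color codes, not corruption; stripped of them, the check reduces to running the command and confirming the master endpoint appears. By hand (endpoint taken from the log):

    kubectl --kubeconfig=/root/.kube/config cluster-info
    # Kubernetes master is running at https://172.24.4.57:6443
    # KubeDNS is running at https://172.24.4.57:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
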
Feb 6 13:15:41.504: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 6 13:15:41.612: INFO: namespace kubectl-4303 deletion completed in 6.154170317s • [SLOW TEST:6.396 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 6 13:15:41.613: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 6 13:16:14.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-2060" for this suite. Feb 6 13:16:20.049: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 6 13:16:20.163: INFO: namespace namespaces-2060 deletion completed in 6.140488465s STEP: Destroying namespace "nsdeletetest-9546" for this suite. Feb 6 13:16:20.166: INFO: Namespace nsdeletetest-9546 was already deleted STEP: Destroying namespace "nsdeletetest-5541" for this suite. 
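
The namespace test above relies on cascading deletion: removing a namespace garbage-collects every pod in it, and recreating the namespace afterwards proves nothing survived. A hedged by-hand equivalent (all names hypothetical):

    kubectl create namespace nsdeletetest-demo                       # hypothetical name
    kubectl -n nsdeletetest-demo run test-pod --image=nginx --restart=Never
    kubectl delete namespace nsdeletetest-demo                       # waits until the namespace and its pods are reaped
    kubectl create namespace nsdeletetest-demo
    kubectl -n nsdeletetest-demo get pods                            # expect: No resources found
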
Feb  6 13:16:26.193: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:16:26.308: INFO: namespace nsdeletetest-5541 deletion completed in 6.141695935s

• [SLOW TEST:44.695 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 13:16:26.309: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override all
Feb  6 13:16:26.466: INFO: Waiting up to 5m0s for pod "client-containers-b470f629-d3ae-42e4-a074-3db7c38ab528" in namespace "containers-9170" to be "success or failure"
Feb  6 13:16:26.471: INFO: Pod "client-containers-b470f629-d3ae-42e4-a074-3db7c38ab528": Phase="Pending", Reason="", readiness=false. Elapsed: 5.453935ms
Feb  6 13:16:28.700: INFO: Pod "client-containers-b470f629-d3ae-42e4-a074-3db7c38ab528": Phase="Pending", Reason="", readiness=false. Elapsed: 2.234124553s
Feb  6 13:16:30.708: INFO: Pod "client-containers-b470f629-d3ae-42e4-a074-3db7c38ab528": Phase="Pending", Reason="", readiness=false. Elapsed: 4.242675046s
Feb  6 13:16:32.715: INFO: Pod "client-containers-b470f629-d3ae-42e4-a074-3db7c38ab528": Phase="Pending", Reason="", readiness=false. Elapsed: 6.249644388s
Feb  6 13:16:34.721: INFO: Pod "client-containers-b470f629-d3ae-42e4-a074-3db7c38ab528": Phase="Pending", Reason="", readiness=false. Elapsed: 8.255838185s
Feb  6 13:16:36.728: INFO: Pod "client-containers-b470f629-d3ae-42e4-a074-3db7c38ab528": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.26254432s
STEP: Saw pod success
Feb  6 13:16:36.728: INFO: Pod "client-containers-b470f629-d3ae-42e4-a074-3db7c38ab528" satisfied condition "success or failure"
Feb  6 13:16:36.731: INFO: Trying to get logs from node iruya-node pod client-containers-b470f629-d3ae-42e4-a074-3db7c38ab528 container test-container: 
STEP: delete the pod
Feb  6 13:16:36.777: INFO: Waiting for pod client-containers-b470f629-d3ae-42e4-a074-3db7c38ab528 to disappear
Feb  6 13:16:36.823: INFO: Pod client-containers-b470f629-d3ae-42e4-a074-3db7c38ab528 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 13:16:36.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-9170" for this suite.
Feb  6 13:16:42.862: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:16:42.977: INFO: namespace containers-9170 deletion completed in 6.147401598s

• [SLOW TEST:16.667 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 13:16:42.977: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-277bf134-764a-4419-a9cc-b90679406634
STEP: Creating secret with name s-test-opt-upd-da936ef0-78b1-404f-b1ea-ca12783fda67
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-277bf134-764a-4419-a9cc-b90679406634
STEP: Updating secret s-test-opt-upd-da936ef0-78b1-404f-b1ea-ca12783fda67
STEP: Creating secret with name s-test-opt-create-b5c6966b-d404-4734-9716-aa218d3dcac3
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 13:18:21.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3572" for this suite.
Feb  6 13:18:43.808: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:18:43.929: INFO: namespace secrets-3572 deletion completed in 22.148930038s

• [SLOW TEST:120.952 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 13:18:43.930: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Feb  6 13:18:54.605: INFO: Successfully updated pod "annotationupdatef613e48c-8fd1-4095-ae77-bfa582b5822f"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 13:18:56.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5957" for this suite.
Feb  6 13:19:18.721: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:19:18.900: INFO: namespace downward-api-5957 deletion completed in 22.205570936s

• [SLOW TEST:34.970 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 13:19:18.900: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  6 13:19:19.016: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 13:19:29.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2847" for this suite.
Feb  6 13:20:11.175: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:20:11.290: INFO: namespace pods-2847 deletion completed in 42.197300875s

• [SLOW TEST:52.390 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 13:20:11.291: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 13:20:23.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-4104" for this suite.
Feb  6 13:20:29.468: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:20:29.612: INFO: namespace kubelet-test-4104 deletion completed in 6.174324317s

• [SLOW TEST:18.320 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 13:20:29.612: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-b297ab5e-3f09-4c64-a73e-62b933222a0a
STEP: Creating a pod to test consume configMaps
Feb  6 13:20:29.791: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7e8bee99-115b-4ca1-a052-13420c863c26" in namespace "projected-1912" to be "success or failure"
Feb  6 13:20:29.816: INFO: Pod "pod-projected-configmaps-7e8bee99-115b-4ca1-a052-13420c863c26": Phase="Pending", Reason="", readiness=false. Elapsed: 24.744847ms
Feb  6 13:20:31.824: INFO: Pod "pod-projected-configmaps-7e8bee99-115b-4ca1-a052-13420c863c26": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033396725s
Feb  6 13:20:33.836: INFO: Pod "pod-projected-configmaps-7e8bee99-115b-4ca1-a052-13420c863c26": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045244382s
Feb  6 13:20:35.852: INFO: Pod "pod-projected-configmaps-7e8bee99-115b-4ca1-a052-13420c863c26": Phase="Pending", Reason="", readiness=false. Elapsed: 6.061454821s
Feb  6 13:20:37.864: INFO: Pod "pod-projected-configmaps-7e8bee99-115b-4ca1-a052-13420c863c26": Phase="Pending", Reason="", readiness=false. Elapsed: 8.073291248s
Feb  6 13:20:39.875: INFO: Pod "pod-projected-configmaps-7e8bee99-115b-4ca1-a052-13420c863c26": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.083969815s
STEP: Saw pod success
Feb  6 13:20:39.875: INFO: Pod "pod-projected-configmaps-7e8bee99-115b-4ca1-a052-13420c863c26" satisfied condition "success or failure"
Feb  6 13:20:39.879: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-7e8bee99-115b-4ca1-a052-13420c863c26 container projected-configmap-volume-test: 
STEP: delete the pod
Feb  6 13:20:40.006: INFO: Waiting for pod pod-projected-configmaps-7e8bee99-115b-4ca1-a052-13420c863c26 to disappear
Feb  6 13:20:40.016: INFO: Pod pod-projected-configmaps-7e8bee99-115b-4ca1-a052-13420c863c26 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 13:20:40.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1912" for this suite.
Feb  6 13:20:46.050: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:20:46.128: INFO: namespace projected-1912 deletion completed in 6.103327031s

• [SLOW TEST:16.516 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 13:20:46.128: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  6 13:20:46.208: INFO: (0) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 13.416794ms)
Feb  6 13:20:46.214: INFO: (1) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.238087ms)
Feb  6 13:20:46.222: INFO: (2) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 7.6189ms)
Feb  6 13:20:46.226: INFO: (3) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.47463ms)
Feb  6 13:20:46.232: INFO: (4) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.373207ms)
Feb  6 13:20:46.236: INFO: (5) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.359325ms)
Feb  6 13:20:46.267: INFO: (6) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 30.511203ms)
Feb  6 13:20:46.277: INFO: (7) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 10.749039ms)
Feb  6 13:20:46.284: INFO: (8) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.290125ms)
Feb  6 13:20:46.290: INFO: (9) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.946262ms)
Feb  6 13:20:46.296: INFO: (10) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.422526ms)
Feb  6 13:20:46.303: INFO: (11) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 7.030073ms)
Feb  6 13:20:46.314: INFO: (12) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 10.494495ms)
Feb  6 13:20:46.321: INFO: (13) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 7.047793ms)
Feb  6 13:20:46.330: INFO: (14) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 9.404488ms)
Feb  6 13:20:46.339: INFO: (15) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 8.762235ms)
Feb  6 13:20:46.344: INFO: (16) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.45025ms)
Feb  6 13:20:46.350: INFO: (17) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.842954ms)
Feb  6 13:20:46.358: INFO: (18) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 8.662754ms)
Feb  6 13:20:46.368: INFO: (19) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 9.779016ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 13:20:46.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-6196" for this suite.
Feb  6 13:20:52.395: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:20:52.538: INFO: namespace proxy-6196 deletion completed in 6.165308393s

• [SLOW TEST:6.410 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
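
The twenty numbered requests above all hit the node proxy subresource with an explicit kubelet port ("iruya-node:10250" in the path selects port 10250 rather than the default). A minimal client-go sketch of the same call, assuming a recent client-go (in older releases DoRaw takes no context argument); the kubeconfig path and node name are taken from the log, everything else is illustrative:

// Sketch: fetch a node's kubelet logs through the apiserver's node proxy subresource.
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// GET /api/v1/nodes/iruya-node:10250/proxy/logs/ — the "name:port" form
	// mirrors the URLs in the log lines above.
	body, err := client.CoreV1().RESTClient().Get().
		Resource("nodes").
		Name("iruya-node:10250").
		SubResource("proxy").
		Suffix("logs/").
		DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s\n", body) // the kubelet returns a directory listing of /var/log
}
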
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 13:20:52.539: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-e3d8efd9-e7c6-406a-8cf1-5460167fc815
STEP: Creating a pod to test consume secrets
Feb  6 13:20:52.703: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-204bdce0-92a9-46ca-9fbc-7b687290576d" in namespace "projected-882" to be "success or failure"
Feb  6 13:20:52.730: INFO: Pod "pod-projected-secrets-204bdce0-92a9-46ca-9fbc-7b687290576d": Phase="Pending", Reason="", readiness=false. Elapsed: 27.758749ms
Feb  6 13:20:54.737: INFO: Pod "pod-projected-secrets-204bdce0-92a9-46ca-9fbc-7b687290576d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034689622s
Feb  6 13:20:56.742: INFO: Pod "pod-projected-secrets-204bdce0-92a9-46ca-9fbc-7b687290576d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03906191s
Feb  6 13:20:58.752: INFO: Pod "pod-projected-secrets-204bdce0-92a9-46ca-9fbc-7b687290576d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049803331s
Feb  6 13:21:00.760: INFO: Pod "pod-projected-secrets-204bdce0-92a9-46ca-9fbc-7b687290576d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.057881162s
Feb  6 13:21:02.771: INFO: Pod "pod-projected-secrets-204bdce0-92a9-46ca-9fbc-7b687290576d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.06847404s
STEP: Saw pod success
Feb  6 13:21:02.771: INFO: Pod "pod-projected-secrets-204bdce0-92a9-46ca-9fbc-7b687290576d" satisfied condition "success or failure"
Feb  6 13:21:02.779: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-204bdce0-92a9-46ca-9fbc-7b687290576d container projected-secret-volume-test: 
STEP: delete the pod
Feb  6 13:21:02.868: INFO: Waiting for pod pod-projected-secrets-204bdce0-92a9-46ca-9fbc-7b687290576d to disappear
Feb  6 13:21:02.880: INFO: Pod pod-projected-secrets-204bdce0-92a9-46ca-9fbc-7b687290576d no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 13:21:02.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-882" for this suite.
Feb  6 13:21:09.213: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:21:09.327: INFO: namespace projected-882 deletion completed in 6.44215683s

• [SLOW TEST:16.788 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
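
The spec above mounts a secret through a projected volume and verifies the file permissions that defaultMode assigns. A minimal sketch of an equivalent pod using the k8s.io/api types; all names are illustrative, and the 0400 mode is an assumed example rather than whatever the suite generated:

// Sketch: a projected secret volume with an explicit defaultMode.
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func projectedSecretPod() *corev1.Pod {
	mode := int32(0400) // defaultMode under test; the container then inspects the file
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-secret-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						DefaultMode: &mode,
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test-example"},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "projected-secret-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "ls -l /etc/projected-secret-volume && cat /etc/projected-secret-volume/*"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-secret-volume",
					MountPath: "/etc/projected-secret-volume",
					ReadOnly:  true,
				}},
			}},
		},
	}
}
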
SSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 13:21:09.327: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating secret secrets-5993/secret-test-841cfce9-4094-429b-a003-fcc3201c149d
STEP: Creating a pod to test consume secrets
Feb  6 13:21:09.466: INFO: Waiting up to 5m0s for pod "pod-configmaps-a49aee86-c735-40ed-86d5-f2ae751efbef" in namespace "secrets-5993" to be "success or failure"
Feb  6 13:21:09.494: INFO: Pod "pod-configmaps-a49aee86-c735-40ed-86d5-f2ae751efbef": Phase="Pending", Reason="", readiness=false. Elapsed: 27.022801ms
Feb  6 13:21:11.504: INFO: Pod "pod-configmaps-a49aee86-c735-40ed-86d5-f2ae751efbef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037799183s
Feb  6 13:21:13.516: INFO: Pod "pod-configmaps-a49aee86-c735-40ed-86d5-f2ae751efbef": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049496697s
Feb  6 13:21:15.525: INFO: Pod "pod-configmaps-a49aee86-c735-40ed-86d5-f2ae751efbef": Phase="Pending", Reason="", readiness=false. Elapsed: 6.058358464s
Feb  6 13:21:17.531: INFO: Pod "pod-configmaps-a49aee86-c735-40ed-86d5-f2ae751efbef": Phase="Pending", Reason="", readiness=false. Elapsed: 8.064546509s
Feb  6 13:21:19.541: INFO: Pod "pod-configmaps-a49aee86-c735-40ed-86d5-f2ae751efbef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.074266821s
STEP: Saw pod success
Feb  6 13:21:19.541: INFO: Pod "pod-configmaps-a49aee86-c735-40ed-86d5-f2ae751efbef" satisfied condition "success or failure"
Feb  6 13:21:19.547: INFO: Trying to get logs from node iruya-node pod pod-configmaps-a49aee86-c735-40ed-86d5-f2ae751efbef container env-test: 
STEP: delete the pod
Feb  6 13:21:19.603: INFO: Waiting for pod pod-configmaps-a49aee86-c735-40ed-86d5-f2ae751efbef to disappear
Feb  6 13:21:19.623: INFO: Pod pod-configmaps-a49aee86-c735-40ed-86d5-f2ae751efbef no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 13:21:19.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5993" for this suite.
Feb  6 13:21:25.710: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:21:25.884: INFO: namespace secrets-5993 deletion completed in 6.252678859s

• [SLOW TEST:16.556 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
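
Here the secret is consumed through the environment rather than a volume: the container prints its environment and the test asserts on the pod's logs. A sketch of the wiring, with illustrative names:

// Sketch: inject one secret key as an environment variable.
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func secretEnvPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secret-env-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "env-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env"}, // the test greps this output
				Env: []corev1.EnvVar{{
					Name: "SECRET_DATA",
					ValueFrom: &corev1.EnvVarSource{
						SecretKeyRef: &corev1.SecretKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: "secret-test-example"},
							Key:                  "data-1", // key inside the secret's Data map
						},
					},
				}},
			}},
		},
	}
}
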
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 13:21:25.884: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name secret-emptykey-test-ee6f9d19-9c35-4b39-87fa-eed883c85865
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 13:21:26.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8623" for this suite.
Feb  6 13:21:32.063: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:21:32.158: INFO: namespace secrets-8623 deletion completed in 6.125465286s

• [SLOW TEST:6.274 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
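
This test expects creation to fail rather than succeed: secret keys must be non-empty and consist of alphanumerics, '-', '_' and '.', so the apiserver's validation rejects the object at Create time and the pod phase machinery never runs. A sketch of the kind of invalid object submitted, with an illustrative name:

// Sketch: a Secret the apiserver rejects because of its empty data key.
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func emptyKeySecret() *corev1.Secret {
	return &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-emptykey-test-example"},
		Data: map[string][]byte{
			"": []byte("value-1"), // empty key: fails server-side validation
		},
	}
}
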
SSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 13:21:32.159: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-4f980c1b-db63-4f8b-8df6-da0e4e5b4eea
STEP: Creating a pod to test consume configMaps
Feb  6 13:21:32.264: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-83a86c78-c658-4f9a-ab17-fd5166657219" in namespace "projected-3044" to be "success or failure"
Feb  6 13:21:32.269: INFO: Pod "pod-projected-configmaps-83a86c78-c658-4f9a-ab17-fd5166657219": Phase="Pending", Reason="", readiness=false. Elapsed: 5.511074ms
Feb  6 13:21:34.280: INFO: Pod "pod-projected-configmaps-83a86c78-c658-4f9a-ab17-fd5166657219": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016585304s
Feb  6 13:21:36.290: INFO: Pod "pod-projected-configmaps-83a86c78-c658-4f9a-ab17-fd5166657219": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025996569s
Feb  6 13:21:38.295: INFO: Pod "pod-projected-configmaps-83a86c78-c658-4f9a-ab17-fd5166657219": Phase="Pending", Reason="", readiness=false. Elapsed: 6.031706243s
Feb  6 13:21:40.302: INFO: Pod "pod-projected-configmaps-83a86c78-c658-4f9a-ab17-fd5166657219": Phase="Pending", Reason="", readiness=false. Elapsed: 8.038175145s
Feb  6 13:21:42.316: INFO: Pod "pod-projected-configmaps-83a86c78-c658-4f9a-ab17-fd5166657219": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.052140595s
STEP: Saw pod success
Feb  6 13:21:42.316: INFO: Pod "pod-projected-configmaps-83a86c78-c658-4f9a-ab17-fd5166657219" satisfied condition "success or failure"
Feb  6 13:21:42.320: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-83a86c78-c658-4f9a-ab17-fd5166657219 container projected-configmap-volume-test: 
STEP: delete the pod
Feb  6 13:21:42.361: INFO: Waiting for pod pod-projected-configmaps-83a86c78-c658-4f9a-ab17-fd5166657219 to disappear
Feb  6 13:21:42.367: INFO: Pod pod-projected-configmaps-83a86c78-c658-4f9a-ab17-fd5166657219 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 13:21:42.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3044" for this suite.
Feb  6 13:21:48.407: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:21:48.540: INFO: namespace projected-3044 deletion completed in 6.166367909s

• [SLOW TEST:16.382 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
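
This is the same projection mechanism as the projected-secret test earlier, sourced from a ConfigMap instead. For contrast, a sketch of just the volume, including the optional key-to-path remapping the API allows; the names and the remapped path are illustrative:

// Sketch: a projected configMap volume with a key remapped to a subpath.
package main

import (
	corev1 "k8s.io/api/core/v1"
)

func projectedConfigMapVolume() corev1.Volume {
	return corev1.Volume{
		Name: "projected-configmap-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume-example"},
						// Without Items, every key becomes a file named after the key.
						Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-1"}},
					},
				}},
			},
		},
	}
}
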
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 13:21:48.542: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  6 13:21:48.622: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Feb  6 13:21:48.664: INFO: Pod name sample-pod: Found 0 pods out of 1
Feb  6 13:21:53.673: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb  6 13:21:55.688: INFO: Creating deployment "test-rolling-update-deployment"
Feb  6 13:21:55.701: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Feb  6 13:21:55.716: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Feb  6 13:21:57.733: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Feb  6 13:21:57.737: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716592115, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716592115, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716592115, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716592115, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  6 13:21:59.749: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716592115, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716592115, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716592115, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716592115, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  6 13:22:01.745: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716592115, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716592115, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716592115, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716592115, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  6 13:22:03.750: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716592115, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716592115, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716592115, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716592115, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  6 13:22:05.742: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Feb  6 13:22:05.753: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-8063,SelfLink:/apis/apps/v1/namespaces/deployment-8063/deployments/test-rolling-update-deployment,UID:5b4dfc66-7876-4f8e-82ac-1ae07397b245,ResourceVersion:23317667,Generation:1,CreationTimestamp:2020-02-06 13:21:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-02-06 13:21:55 +0000 UTC 2020-02-06 13:21:55 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-02-06 13:22:04 +0000 UTC 2020-02-06 13:21:55 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Feb  6 13:22:05.759: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-8063,SelfLink:/apis/apps/v1/namespaces/deployment-8063/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:2efb424f-4f19-4e0e-84e6-791efa3e65e8,ResourceVersion:23317656,Generation:1,CreationTimestamp:2020-02-06 13:21:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 5b4dfc66-7876-4f8e-82ac-1ae07397b245 0xc001e3a607 0xc001e3a608}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Feb  6 13:22:05.759: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Feb  6 13:22:05.759: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-8063,SelfLink:/apis/apps/v1/namespaces/deployment-8063/replicasets/test-rolling-update-controller,UID:9d9d40b4-826a-4760-8a17-8c3ff72737a1,ResourceVersion:23317666,Generation:2,CreationTimestamp:2020-02-06 13:21:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 5b4dfc66-7876-4f8e-82ac-1ae07397b245 0xc001e3a437 0xc001e3a438}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb  6 13:22:05.762: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-2sxjm" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-2sxjm,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-8063,SelfLink:/api/v1/namespaces/deployment-8063/pods/test-rolling-update-deployment-79f6b9d75c-2sxjm,UID:32edf7bd-d54b-4375-9853-1383859061dc,ResourceVersion:23317655,Generation:0,CreationTimestamp:2020-02-06 13:21:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c 2efb424f-4f19-4e0e-84e6-791efa3e65e8 0xc001e3b527 0xc001e3b528}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-v8xt7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-v8xt7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-v8xt7 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001e3b620} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001e3b650}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:21:55 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:22:04 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:22:04 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 13:21:55 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-02-06 13:21:55 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-02-06 13:22:03 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://7a3eac6b57e37ded790d406dbc726df4048113e2775b0716bec259030882c0ee}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 13:22:05.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-8063" for this suite.
Feb  6 13:22:15.794: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:22:15.951: INFO: namespace deployment-8063 deletion completed in 10.185361657s

• [SLOW TEST:27.409 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
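
The status dumps above show the adopted replica set ("test-rolling-update-controller") scaling down to zero while the new one scales up, under the 25%/25% RollingUpdate parameters visible in the deployment spec. A sketch of a deployment with that strategy spelled out, using illustrative values; because its selector matches the pre-created replica set's pods, the deployment controller adopts that RS as an old revision, which is exactly what the transitions above exercise:

// Sketch: a Deployment with explicit RollingUpdate parameters.
package main

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func rollingUpdateDeployment() *appsv1.Deployment {
	replicas := int32(1)
	maxUnavailable := intstr.FromString("25%")
	maxSurge := intstr.FromString("25%")
	labels := map[string]string{"name": "sample-pod"}
	return &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-rolling-update-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Strategy: appsv1.DeploymentStrategy{
				Type: appsv1.RollingUpdateDeploymentStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDeployment{
					MaxUnavailable: &maxUnavailable, // at most 25% of pods down at once
					MaxSurge:       &maxSurge,       // at most 25% extra pods during the roll
				},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "redis",
						Image: "gcr.io/kubernetes-e2e-test-images/redis:1.0",
					}},
				},
			},
		},
	}
}
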
SSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 13:22:15.951: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  6 13:22:16.087: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9c9d0afd-3ba7-4428-8e06-f873d19386d8" in namespace "downward-api-168" to be "success or failure"
Feb  6 13:22:16.094: INFO: Pod "downwardapi-volume-9c9d0afd-3ba7-4428-8e06-f873d19386d8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.257387ms
Feb  6 13:22:18.102: INFO: Pod "downwardapi-volume-9c9d0afd-3ba7-4428-8e06-f873d19386d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014629063s
Feb  6 13:22:20.109: INFO: Pod "downwardapi-volume-9c9d0afd-3ba7-4428-8e06-f873d19386d8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021332169s
Feb  6 13:22:22.114: INFO: Pod "downwardapi-volume-9c9d0afd-3ba7-4428-8e06-f873d19386d8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.026862236s
Feb  6 13:22:24.123: INFO: Pod "downwardapi-volume-9c9d0afd-3ba7-4428-8e06-f873d19386d8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.035609304s
Feb  6 13:22:26.134: INFO: Pod "downwardapi-volume-9c9d0afd-3ba7-4428-8e06-f873d19386d8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.046462891s
STEP: Saw pod success
Feb  6 13:22:26.134: INFO: Pod "downwardapi-volume-9c9d0afd-3ba7-4428-8e06-f873d19386d8" satisfied condition "success or failure"
Feb  6 13:22:26.139: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-9c9d0afd-3ba7-4428-8e06-f873d19386d8 container client-container: 
STEP: delete the pod
Feb  6 13:22:26.306: INFO: Waiting for pod downwardapi-volume-9c9d0afd-3ba7-4428-8e06-f873d19386d8 to disappear
Feb  6 13:22:26.316: INFO: Pod downwardapi-volume-9c9d0afd-3ba7-4428-8e06-f873d19386d8 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 13:22:26.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-168" for this suite.
Feb  6 13:22:32.383: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:22:32.518: INFO: namespace downward-api-168 deletion completed in 6.192153183s

• [SLOW TEST:16.567 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
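
The downward API volume here materializes the container's own memory request as a file the container can cat. A sketch of the relevant spec, assuming an illustrative 32Mi request; with no divisor set, the file's content is the value in bytes:

// Sketch: expose requests.memory through a downwardAPI volume file.
package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func downwardAPIMemoryRequestPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/podinfo/memory_request"},
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceMemory: resource.MustParse("32Mi"),
					},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "memory_request",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "requests.memory",
							},
						}},
					},
				},
			}},
		},
	}
}
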
SSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 13:22:32.518: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
STEP: reading a file in the container
Feb  6 13:22:43.186: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4005 pod-service-account-436de8b4-ace4-43d7-b25f-af92be5acc17 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Feb  6 13:22:46.018: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4005 pod-service-account-436de8b4-ace4-43d7-b25f-af92be5acc17 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Feb  6 13:22:46.433: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4005 pod-service-account-436de8b4-ace4-43d7-b25f-af92be5acc17 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 13:22:46.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-4005" for this suite.
Feb  6 13:22:52.974: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:22:53.107: INFO: namespace svcaccounts-4005 deletion completed in 6.192837525s

• [SLOW TEST:20.589 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
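
The three kubectl exec calls above read the files that the ServiceAccount admission controller mounts into every pod from the default token Secret. The same check by hand, assuming any running pod named my-pod:

    # token, CA bundle, and namespace live under a fixed, well-known path
    kubectl exec my-pod -- ls /var/run/secrets/kubernetes.io/serviceaccount
    # expected entries: ca.crt  namespace  token
    kubectl exec my-pod -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace
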
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 13:22:53.107: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Feb  6 13:22:53.243: INFO: Waiting up to 5m0s for pod "pod-38454713-61ec-4109-b03b-69a688c2836f" in namespace "emptydir-3731" to be "success or failure"
Feb  6 13:22:53.251: INFO: Pod "pod-38454713-61ec-4109-b03b-69a688c2836f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.373056ms
Feb  6 13:22:55.260: INFO: Pod "pod-38454713-61ec-4109-b03b-69a688c2836f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017065255s
Feb  6 13:22:57.272: INFO: Pod "pod-38454713-61ec-4109-b03b-69a688c2836f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028628363s
Feb  6 13:22:59.281: INFO: Pod "pod-38454713-61ec-4109-b03b-69a688c2836f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037315287s
Feb  6 13:23:01.300: INFO: Pod "pod-38454713-61ec-4109-b03b-69a688c2836f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.056920476s
Feb  6 13:23:03.311: INFO: Pod "pod-38454713-61ec-4109-b03b-69a688c2836f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.067599685s
STEP: Saw pod success
Feb  6 13:23:03.311: INFO: Pod "pod-38454713-61ec-4109-b03b-69a688c2836f" satisfied condition "success or failure"
Feb  6 13:23:03.314: INFO: Trying to get logs from node iruya-node pod pod-38454713-61ec-4109-b03b-69a688c2836f container test-container: 
STEP: delete the pod
Feb  6 13:23:03.456: INFO: Waiting for pod pod-38454713-61ec-4109-b03b-69a688c2836f to disappear
Feb  6 13:23:03.477: INFO: Pod pod-38454713-61ec-4109-b03b-69a688c2836f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 13:23:03.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3731" for this suite.
Feb  6 13:23:09.520: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:23:09.630: INFO: namespace emptydir-3731 deletion completed in 6.143565221s

• [SLOW TEST:16.523 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
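
"(root,0666,tmpfs)" reads as: run as root, expect file mode 0666, back the emptyDir with memory. A minimal sketch of the idea, assuming busybox and a hand-rolled check rather than the suite's mount-test image:

    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-demo                  # hypothetical name
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: busybox                     # assumption; the suite uses its own image
        command: ["sh", "-c", "echo content > /test-volume/f && chmod 0666 /test-volume/f && stat -c '%a %U' /test-volume/f"]
        volumeMounts:
        - name: test-volume
          mountPath: /test-volume
      volumes:
      - name: test-volume
        emptyDir:
          medium: Memory                   # tmpfs-backed; this is the "tmpfs" in the test name
    EOF
    # expected output: 666 root
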
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 13:23:09.630: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Feb  6 13:23:09.749: INFO: Waiting up to 5m0s for pod "pod-cc8bbe7c-7d56-4e4f-940b-c51ba5de5603" in namespace "emptydir-1794" to be "success or failure"
Feb  6 13:23:09.755: INFO: Pod "pod-cc8bbe7c-7d56-4e4f-940b-c51ba5de5603": Phase="Pending", Reason="", readiness=false. Elapsed: 5.672306ms
Feb  6 13:23:11.766: INFO: Pod "pod-cc8bbe7c-7d56-4e4f-940b-c51ba5de5603": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017115495s
Feb  6 13:23:13.776: INFO: Pod "pod-cc8bbe7c-7d56-4e4f-940b-c51ba5de5603": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027539671s
Feb  6 13:23:15.793: INFO: Pod "pod-cc8bbe7c-7d56-4e4f-940b-c51ba5de5603": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043793625s
Feb  6 13:23:17.806: INFO: Pod "pod-cc8bbe7c-7d56-4e4f-940b-c51ba5de5603": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.05730261s
STEP: Saw pod success
Feb  6 13:23:17.806: INFO: Pod "pod-cc8bbe7c-7d56-4e4f-940b-c51ba5de5603" satisfied condition "success or failure"
Feb  6 13:23:17.823: INFO: Trying to get logs from node iruya-node pod pod-cc8bbe7c-7d56-4e4f-940b-c51ba5de5603 container test-container: 
STEP: delete the pod
Feb  6 13:23:17.889: INFO: Waiting for pod pod-cc8bbe7c-7d56-4e4f-940b-c51ba5de5603 to disappear
Feb  6 13:23:17.912: INFO: Pod pod-cc8bbe7c-7d56-4e4f-940b-c51ba5de5603 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 13:23:17.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1794" for this suite.
Feb  6 13:23:23.963: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:23:24.109: INFO: namespace emptydir-1794 deletion completed in 6.185948835s

• [SLOW TEST:14.479 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
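
The (root,0777,default) variant exercises the same path with a different mode and the node's default (disk-backed) medium; only two things change relative to the sketch above:

    # volumes: drop the medium field to get the node default instead of tmpfs
    #   emptyDir: {}
    # and inside the container, check mode 0777 instead:
    chmod 0777 /test-volume/f && stat -c '%a' /test-volume/f   # expect 777
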
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 13:23:24.109: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  6 13:23:24.206: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ad50312e-125d-436c-8dde-aa5c9be37990" in namespace "projected-1289" to be "success or failure"
Feb  6 13:23:24.215: INFO: Pod "downwardapi-volume-ad50312e-125d-436c-8dde-aa5c9be37990": Phase="Pending", Reason="", readiness=false. Elapsed: 9.610374ms
Feb  6 13:23:26.271: INFO: Pod "downwardapi-volume-ad50312e-125d-436c-8dde-aa5c9be37990": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065300496s
Feb  6 13:23:28.279: INFO: Pod "downwardapi-volume-ad50312e-125d-436c-8dde-aa5c9be37990": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073033968s
Feb  6 13:23:30.294: INFO: Pod "downwardapi-volume-ad50312e-125d-436c-8dde-aa5c9be37990": Phase="Pending", Reason="", readiness=false. Elapsed: 6.088900681s
Feb  6 13:23:32.306: INFO: Pod "downwardapi-volume-ad50312e-125d-436c-8dde-aa5c9be37990": Phase="Pending", Reason="", readiness=false. Elapsed: 8.10086628s
Feb  6 13:23:34.335: INFO: Pod "downwardapi-volume-ad50312e-125d-436c-8dde-aa5c9be37990": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.129809923s
STEP: Saw pod success
Feb  6 13:23:34.335: INFO: Pod "downwardapi-volume-ad50312e-125d-436c-8dde-aa5c9be37990" satisfied condition "success or failure"
Feb  6 13:23:34.345: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-ad50312e-125d-436c-8dde-aa5c9be37990 container client-container: 
STEP: delete the pod
Feb  6 13:23:34.434: INFO: Waiting for pod downwardapi-volume-ad50312e-125d-436c-8dde-aa5c9be37990 to disappear
Feb  6 13:23:34.501: INFO: Pod downwardapi-volume-ad50312e-125d-436c-8dde-aa5c9be37990 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 13:23:34.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1289" for this suite.
Feb  6 13:23:40.536: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:23:40.655: INFO: namespace projected-1289 deletion completed in 6.143893482s

• [SLOW TEST:16.546 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
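
"set mode on item file" asserts that a per-item mode on a projected downwardAPI source is honored on disk. A sketch, assuming mode 0400 for illustration (the suite asserts its own fixed mode):

    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-demo                 # hypothetical name
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox                     # assumption
        command: ["sh", "-c", "stat -Lc '%a' /etc/podinfo/podname"]   # -L follows the kubelet's ..data symlink
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        projected:
          sources:
          - downwardAPI:
              items:
              - path: podname
                mode: 0400                 # the per-item mode under test
                fieldRef:
                  fieldPath: metadata.name
    EOF
    # expected output: 400
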
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 13:23:40.655: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4985.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4985.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb  6 13:23:54.852: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod dns-4985/dns-test-bbec5bdd-c50c-4a82-9319-f35924630fc1: the server could not find the requested resource (get pods dns-test-bbec5bdd-c50c-4a82-9319-f35924630fc1)
Feb  6 13:23:54.867: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-4985/dns-test-bbec5bdd-c50c-4a82-9319-f35924630fc1: the server could not find the requested resource (get pods dns-test-bbec5bdd-c50c-4a82-9319-f35924630fc1)
Feb  6 13:23:54.882: INFO: Unable to read wheezy_udp@PodARecord from pod dns-4985/dns-test-bbec5bdd-c50c-4a82-9319-f35924630fc1: the server could not find the requested resource (get pods dns-test-bbec5bdd-c50c-4a82-9319-f35924630fc1)
Feb  6 13:23:54.886: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-4985/dns-test-bbec5bdd-c50c-4a82-9319-f35924630fc1: the server could not find the requested resource (get pods dns-test-bbec5bdd-c50c-4a82-9319-f35924630fc1)
Feb  6 13:23:54.889: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod dns-4985/dns-test-bbec5bdd-c50c-4a82-9319-f35924630fc1: the server could not find the requested resource (get pods dns-test-bbec5bdd-c50c-4a82-9319-f35924630fc1)
Feb  6 13:23:54.896: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod dns-4985/dns-test-bbec5bdd-c50c-4a82-9319-f35924630fc1: the server could not find the requested resource (get pods dns-test-bbec5bdd-c50c-4a82-9319-f35924630fc1)
Feb  6 13:23:54.902: INFO: Unable to read jessie_udp@PodARecord from pod dns-4985/dns-test-bbec5bdd-c50c-4a82-9319-f35924630fc1: the server could not find the requested resource (get pods dns-test-bbec5bdd-c50c-4a82-9319-f35924630fc1)
Feb  6 13:23:54.907: INFO: Unable to read jessie_tcp@PodARecord from pod dns-4985/dns-test-bbec5bdd-c50c-4a82-9319-f35924630fc1: the server could not find the requested resource (get pods dns-test-bbec5bdd-c50c-4a82-9319-f35924630fc1)
Feb  6 13:23:54.907: INFO: Lookups using dns-4985/dns-test-bbec5bdd-c50c-4a82-9319-f35924630fc1 failed for: [wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord]

Feb  6 13:23:59.993: INFO: DNS probes using dns-4985/dns-test-bbec5bdd-c50c-4a82-9319-f35924630fc1 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 13:24:00.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4985" for this suite.
Feb  6 13:24:06.133: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:24:06.222: INFO: namespace dns-4985 deletion completed in 6.156283321s

• [SLOW TEST:25.566 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
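
The wheezy and jessie probe pods above loop over dig queries for the API server's Service name and for the pod's own A record, writing OK markers that the test then collects. Reduced to a single pass (assuming an image with dig available):

    # UDP and TCP lookups of the well-known kubernetes Service name
    dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A
    dig +tcp   +noall +answer +search kubernetes.default.svc.cluster.local A
    # pod A record: the pod IP, dash-separated, under <namespace>.pod.cluster.local
    dig +notcp +noall +answer 10-44-0-1.dns-4985.pod.cluster.local A   # IP is illustrative
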
SSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 13:24:06.222: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Feb  6 13:24:06.299: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb  6 13:24:06.312: INFO: Waiting for terminating namespaces to be deleted...
Feb  6 13:24:06.321: INFO: 
Logging pods the kubelet thinks are on node iruya-node before test
Feb  6 13:24:06.341: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Feb  6 13:24:06.341: INFO: 	Container weave ready: true, restart count 0
Feb  6 13:24:06.341: INFO: 	Container weave-npc ready: true, restart count 0
Feb  6 13:24:06.341: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container status recorded)
Feb  6 13:24:06.341: INFO: 	Container kube-proxy ready: true, restart count 0
Feb  6 13:24:06.341: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Feb  6 13:24:06.354: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container status recorded)
Feb  6 13:24:06.354: INFO: 	Container kube-apiserver ready: true, restart count 0
Feb  6 13:24:06.354: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container status recorded)
Feb  6 13:24:06.354: INFO: 	Container kube-scheduler ready: true, restart count 13
Feb  6 13:24:06.354: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Feb  6 13:24:06.354: INFO: 	Container coredns ready: true, restart count 0
Feb  6 13:24:06.354: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container status recorded)
Feb  6 13:24:06.354: INFO: 	Container etcd ready: true, restart count 0
Feb  6 13:24:06.354: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Feb  6 13:24:06.355: INFO: 	Container weave ready: true, restart count 0
Feb  6 13:24:06.355: INFO: 	Container weave-npc ready: true, restart count 0
Feb  6 13:24:06.355: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Feb  6 13:24:06.355: INFO: 	Container coredns ready: true, restart count 0
Feb  6 13:24:06.355: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container status recorded)
Feb  6 13:24:06.355: INFO: 	Container kube-controller-manager ready: true, restart count 20
Feb  6 13:24:06.355: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container status recorded)
Feb  6 13:24:06.355: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15f0d2fb67e707c3], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 13:24:07.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-428" for this suite.
Feb  6 13:24:13.469: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:24:13.587: INFO: namespace sched-pred-428 deletion completed in 6.140542302s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:7.365 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
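
This predicate test only submits a pod whose nodeSelector matches no node and waits for the FailedScheduling event quoted above; nothing ever runs. A sketch of reproducing that state, with a hypothetical label:

    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: restricted-pod                 # same basename as the event above; spec is a sketch
    spec:
      nodeSelector:
        nonexistent-label: nonempty-value  # hypothetical; no node carries this label
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1
    EOF
    # the scheduler should report: 0/2 nodes are available: 2 node(s) didn't match node selector.
    kubectl get events --field-selector reason=FailedScheduling
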
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 13:24:13.588: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-1847
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb  6 13:24:13.731: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb  6 13:24:48.021: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.44.0.1&port=8080&tries=1'] Namespace:pod-network-test-1847 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  6 13:24:48.022: INFO: >>> kubeConfig: /root/.kube/config
I0206 13:24:48.090133       8 log.go:172] (0xc0010126e0) (0xc001e60dc0) Create stream
I0206 13:24:48.090160       8 log.go:172] (0xc0010126e0) (0xc001e60dc0) Stream added, broadcasting: 1
I0206 13:24:48.095617       8 log.go:172] (0xc0010126e0) Reply frame received for 1
I0206 13:24:48.095644       8 log.go:172] (0xc0010126e0) (0xc001e60e60) Create stream
I0206 13:24:48.095650       8 log.go:172] (0xc0010126e0) (0xc001e60e60) Stream added, broadcasting: 3
I0206 13:24:48.097683       8 log.go:172] (0xc0010126e0) Reply frame received for 3
I0206 13:24:48.097709       8 log.go:172] (0xc0010126e0) (0xc0011ffd60) Create stream
I0206 13:24:48.097718       8 log.go:172] (0xc0010126e0) (0xc0011ffd60) Stream added, broadcasting: 5
I0206 13:24:48.100841       8 log.go:172] (0xc0010126e0) Reply frame received for 5
I0206 13:24:48.326581       8 log.go:172] (0xc0010126e0) Data frame received for 3
I0206 13:24:48.326659       8 log.go:172] (0xc001e60e60) (3) Data frame handling
I0206 13:24:48.326677       8 log.go:172] (0xc001e60e60) (3) Data frame sent
I0206 13:24:48.499821       8 log.go:172] (0xc0010126e0) (0xc001e60e60) Stream removed, broadcasting: 3
I0206 13:24:48.500098       8 log.go:172] (0xc0010126e0) (0xc0011ffd60) Stream removed, broadcasting: 5
I0206 13:24:48.500133       8 log.go:172] (0xc0010126e0) Data frame received for 1
I0206 13:24:48.500181       8 log.go:172] (0xc001e60dc0) (1) Data frame handling
I0206 13:24:48.500203       8 log.go:172] (0xc001e60dc0) (1) Data frame sent
I0206 13:24:48.500221       8 log.go:172] (0xc0010126e0) (0xc001e60dc0) Stream removed, broadcasting: 1
I0206 13:24:48.500239       8 log.go:172] (0xc0010126e0) Go away received
I0206 13:24:48.500413       8 log.go:172] (0xc0010126e0) (0xc001e60dc0) Stream removed, broadcasting: 1
I0206 13:24:48.500478       8 log.go:172] (0xc0010126e0) (0xc001e60e60) Stream removed, broadcasting: 3
I0206 13:24:48.500492       8 log.go:172] (0xc0010126e0) (0xc0011ffd60) Stream removed, broadcasting: 5
Feb  6 13:24:48.500: INFO: Waiting for endpoints: map[]
Feb  6 13:24:48.513: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:pod-network-test-1847 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  6 13:24:48.513: INFO: >>> kubeConfig: /root/.kube/config
I0206 13:24:48.603743       8 log.go:172] (0xc000826dc0) (0xc000112c80) Create stream
I0206 13:24:48.603861       8 log.go:172] (0xc000826dc0) (0xc000112c80) Stream added, broadcasting: 1
I0206 13:24:48.629472       8 log.go:172] (0xc000826dc0) Reply frame received for 1
I0206 13:24:48.629544       8 log.go:172] (0xc000826dc0) (0xc0012b1040) Create stream
I0206 13:24:48.629566       8 log.go:172] (0xc000826dc0) (0xc0012b1040) Stream added, broadcasting: 3
I0206 13:24:48.632067       8 log.go:172] (0xc000826dc0) Reply frame received for 3
I0206 13:24:48.632107       8 log.go:172] (0xc000826dc0) (0xc001bd72c0) Create stream
I0206 13:24:48.632131       8 log.go:172] (0xc000826dc0) (0xc001bd72c0) Stream added, broadcasting: 5
I0206 13:24:48.638011       8 log.go:172] (0xc000826dc0) Reply frame received for 5
I0206 13:24:48.835809       8 log.go:172] (0xc000826dc0) Data frame received for 3
I0206 13:24:48.835833       8 log.go:172] (0xc0012b1040) (3) Data frame handling
I0206 13:24:48.835847       8 log.go:172] (0xc0012b1040) (3) Data frame sent
I0206 13:24:48.989464       8 log.go:172] (0xc000826dc0) (0xc0012b1040) Stream removed, broadcasting: 3
I0206 13:24:48.989594       8 log.go:172] (0xc000826dc0) Data frame received for 1
I0206 13:24:48.989620       8 log.go:172] (0xc000826dc0) (0xc001bd72c0) Stream removed, broadcasting: 5
I0206 13:24:48.989652       8 log.go:172] (0xc000112c80) (1) Data frame handling
I0206 13:24:48.989730       8 log.go:172] (0xc000112c80) (1) Data frame sent
I0206 13:24:48.989761       8 log.go:172] (0xc000826dc0) (0xc000112c80) Stream removed, broadcasting: 1
I0206 13:24:48.989804       8 log.go:172] (0xc000826dc0) Go away received
I0206 13:24:48.989991       8 log.go:172] (0xc000826dc0) (0xc000112c80) Stream removed, broadcasting: 1
I0206 13:24:48.990026       8 log.go:172] (0xc000826dc0) (0xc0012b1040) Stream removed, broadcasting: 3
I0206 13:24:48.990042       8 log.go:172] (0xc000826dc0) (0xc001bd72c0) Stream removed, broadcasting: 5
Feb  6 13:24:48.990: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 13:24:48.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-1847" for this suite.
Feb  6 13:25:11.067: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:25:11.175: INFO: namespace pod-network-test-1847 deletion completed in 22.174432898s

• [SLOW TEST:57.587 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
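
The ExecWithOptions entries above run curl inside the host-network test pod against each netserver pod's /dial endpoint, which dials the target pod over HTTP and echoes back the hostname it reached. Stripped of the SPDY stream bookkeeping, each check is one command (IPs are the pod IPs recorded above):

    kubectl exec -n pod-network-test-1847 host-test-container-pod -c hostexec -- \
      /bin/sh -c "curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.44.0.1&port=8080&tries=1'"
    # success: a JSON body naming the endpoint that answered
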
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 13:25:11.176: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  6 13:25:11.237: INFO: Waiting up to 5m0s for pod "downwardapi-volume-087358b9-f894-4823-a43c-155678b28720" in namespace "downward-api-4151" to be "success or failure"
Feb  6 13:25:11.295: INFO: Pod "downwardapi-volume-087358b9-f894-4823-a43c-155678b28720": Phase="Pending", Reason="", readiness=false. Elapsed: 57.824993ms
Feb  6 13:25:13.303: INFO: Pod "downwardapi-volume-087358b9-f894-4823-a43c-155678b28720": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065416041s
Feb  6 13:25:15.309: INFO: Pod "downwardapi-volume-087358b9-f894-4823-a43c-155678b28720": Phase="Pending", Reason="", readiness=false. Elapsed: 4.071685254s
Feb  6 13:25:17.375: INFO: Pod "downwardapi-volume-087358b9-f894-4823-a43c-155678b28720": Phase="Pending", Reason="", readiness=false. Elapsed: 6.13769931s
Feb  6 13:25:19.387: INFO: Pod "downwardapi-volume-087358b9-f894-4823-a43c-155678b28720": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.149337218s
STEP: Saw pod success
Feb  6 13:25:19.387: INFO: Pod "downwardapi-volume-087358b9-f894-4823-a43c-155678b28720" satisfied condition "success or failure"
Feb  6 13:25:19.391: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-087358b9-f894-4823-a43c-155678b28720 container client-container: 
STEP: delete the pod
Feb  6 13:25:19.445: INFO: Waiting for pod downwardapi-volume-087358b9-f894-4823-a43c-155678b28720 to disappear
Feb  6 13:25:19.450: INFO: Pod downwardapi-volume-087358b9-f894-4823-a43c-155678b28720 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 13:25:19.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4151" for this suite.
Feb  6 13:25:25.578: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:25:25.716: INFO: namespace downward-api-4151 deletion completed in 6.259790952s

• [SLOW TEST:14.540 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
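
Here the downwardAPI volume asks for limits.cpu on a container that sets no CPU limit, so the kubelet substitutes the node's allocatable CPU, which is what the test asserts. The relevant fragment, following the same sketch conventions as earlier:

    # volume item for a container that deliberately sets no resources.limits.cpu
    #   - path: cpu_limit
    #     resourceFieldRef:
    #       containerName: client-container
    #       resource: limits.cpu
    #       divisor: "1"                   # whole cores; "1" is also the default
    # reading the file then yields node-allocatable CPU, not a pod-level value
    cat /etc/podinfo/cpu_limit
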
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 13:25:25.718: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb  6 13:25:25.902: INFO: Waiting up to 5m0s for pod "downward-api-3c25bb0c-bd71-4fde-b780-def92b77e66a" in namespace "downward-api-9524" to be "success or failure"
Feb  6 13:25:25.919: INFO: Pod "downward-api-3c25bb0c-bd71-4fde-b780-def92b77e66a": Phase="Pending", Reason="", readiness=false. Elapsed: 16.843848ms
Feb  6 13:25:28.023: INFO: Pod "downward-api-3c25bb0c-bd71-4fde-b780-def92b77e66a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.121760027s
Feb  6 13:25:30.032: INFO: Pod "downward-api-3c25bb0c-bd71-4fde-b780-def92b77e66a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.130662785s
Feb  6 13:25:32.041: INFO: Pod "downward-api-3c25bb0c-bd71-4fde-b780-def92b77e66a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.139005082s
Feb  6 13:25:34.048: INFO: Pod "downward-api-3c25bb0c-bd71-4fde-b780-def92b77e66a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.146040576s
Feb  6 13:25:36.057: INFO: Pod "downward-api-3c25bb0c-bd71-4fde-b780-def92b77e66a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.155073495s
STEP: Saw pod success
Feb  6 13:25:36.057: INFO: Pod "downward-api-3c25bb0c-bd71-4fde-b780-def92b77e66a" satisfied condition "success or failure"
Feb  6 13:25:36.067: INFO: Trying to get logs from node iruya-node pod downward-api-3c25bb0c-bd71-4fde-b780-def92b77e66a container dapi-container: 
STEP: delete the pod
Feb  6 13:25:36.145: INFO: Waiting for pod downward-api-3c25bb0c-bd71-4fde-b780-def92b77e66a to disappear
Feb  6 13:25:36.214: INFO: Pod downward-api-3c25bb0c-bd71-4fde-b780-def92b77e66a no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 13:25:36.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9524" for this suite.
Feb  6 13:25:42.243: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:25:42.353: INFO: namespace downward-api-9524 deletion completed in 6.131822227s

• [SLOW TEST:16.635 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
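
The [sig-node] flavor does the same fallback through environment variables: valueFrom.resourceFieldRef with no limits set resolves to node allocatable. A sketch with hypothetical names:

    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: downward-env-demo              # hypothetical name
    spec:
      restartPolicy: Never
      containers:
      - name: dapi-container
        image: busybox                     # assumption
        command: ["sh", "-c", "env | grep -E '^(CPU|MEMORY)_LIMIT='"]
        env:
        - name: CPU_LIMIT
          valueFrom:
            resourceFieldRef:
              resource: limits.cpu         # containerName defaults to this container for env vars
        - name: MEMORY_LIMIT
          valueFrom:
            resourceFieldRef:
              resource: limits.memory
    EOF
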
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 13:25:42.354: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0206 13:25:52.483510       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  6 13:25:52.483: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 13:25:52.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5942" for this suite.
Feb  6 13:25:59.743: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:26:00.053: INFO: namespace gc-5942 deletion completed in 7.564582646s

• [SLOW TEST:17.699 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
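
"not orphaning" means the rc is deleted with a propagation policy that lets the garbage collector remove its pods, which is exactly what "wait for all pods to be garbage collected" verifies. A sketch with a hypothetical rc name (the suite drives this through the API rather than kubectl):

    kubectl delete rc my-rc                # hypothetical name; cascading deletion is the kubectl default
    # the equivalent API call states the policy explicitly:
    #   DELETE /api/v1/namespaces/gc-5942/replicationcontrollers/my-rc
    #   {"propagationPolicy": "Background"}
    kubectl get pods -l name=my-rc         # hypothetical selector; should drain to nothing
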
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 13:26:00.054: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb  6 13:26:08.486: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 13:26:08.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6882" for this suite.
Feb  6 13:26:14.687: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:26:14.828: INFO: namespace container-runtime-6882 deletion completed in 6.18507854s

• [SLOW TEST:14.773 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
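
With terminationMessagePolicy: FallbackToLogsOnError, a container that fails without writing /dev/termination-log gets the tail of its log as the termination message, which is how the "DONE" comparison above is satisfied. A sketch:

    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: termmsg-demo                   # hypothetical name
    spec:
      restartPolicy: Never
      containers:
      - name: termmsg
        image: busybox                     # assumption
        command: ["sh", "-c", "echo DONE; exit 1"]   # fails, writes only to stdout
        terminationMessagePolicy: FallbackToLogsOnError
    EOF
    # once the pod is Failed, the message surfaces in status:
    kubectl get pod termmsg-demo \
      -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'
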
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 13:26:14.828: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-5378
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb  6 13:26:14.929: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb  6 13:26:55.138: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.1:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-5378 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  6 13:26:55.138: INFO: >>> kubeConfig: /root/.kube/config
I0206 13:26:55.221095       8 log.go:172] (0xc00139be40) (0xc002f7b0e0) Create stream
I0206 13:26:55.221248       8 log.go:172] (0xc00139be40) (0xc002f7b0e0) Stream added, broadcasting: 1
I0206 13:26:55.229914       8 log.go:172] (0xc00139be40) Reply frame received for 1
I0206 13:26:55.229953       8 log.go:172] (0xc00139be40) (0xc002494000) Create stream
I0206 13:26:55.229965       8 log.go:172] (0xc00139be40) (0xc002494000) Stream added, broadcasting: 3
I0206 13:26:55.233595       8 log.go:172] (0xc00139be40) Reply frame received for 3
I0206 13:26:55.233644       8 log.go:172] (0xc00139be40) (0xc0024940a0) Create stream
I0206 13:26:55.233655       8 log.go:172] (0xc00139be40) (0xc0024940a0) Stream added, broadcasting: 5
I0206 13:26:55.236020       8 log.go:172] (0xc00139be40) Reply frame received for 5
I0206 13:26:55.394826       8 log.go:172] (0xc00139be40) Data frame received for 3
I0206 13:26:55.394872       8 log.go:172] (0xc002494000) (3) Data frame handling
I0206 13:26:55.394900       8 log.go:172] (0xc002494000) (3) Data frame sent
I0206 13:26:55.564994       8 log.go:172] (0xc00139be40) (0xc002494000) Stream removed, broadcasting: 3
I0206 13:26:55.565140       8 log.go:172] (0xc00139be40) Data frame received for 1
I0206 13:26:55.565189       8 log.go:172] (0xc002f7b0e0) (1) Data frame handling
I0206 13:26:55.565235       8 log.go:172] (0xc002f7b0e0) (1) Data frame sent
I0206 13:26:55.565258       8 log.go:172] (0xc00139be40) (0xc0024940a0) Stream removed, broadcasting: 5
I0206 13:26:55.565293       8 log.go:172] (0xc00139be40) (0xc002f7b0e0) Stream removed, broadcasting: 1
I0206 13:26:55.565322       8 log.go:172] (0xc00139be40) Go away received
I0206 13:26:55.565422       8 log.go:172] (0xc00139be40) (0xc002f7b0e0) Stream removed, broadcasting: 1
I0206 13:26:55.565442       8 log.go:172] (0xc00139be40) (0xc002494000) Stream removed, broadcasting: 3
I0206 13:26:55.565458       8 log.go:172] (0xc00139be40) (0xc0024940a0) Stream removed, broadcasting: 5
Feb  6 13:26:55.565: INFO: Found all expected endpoints: [netserver-0]
Feb  6 13:26:55.572: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-5378 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  6 13:26:55.572: INFO: >>> kubeConfig: /root/.kube/config
I0206 13:26:55.644565       8 log.go:172] (0xc0019700b0) (0xc001eb06e0) Create stream
I0206 13:26:55.644759       8 log.go:172] (0xc0019700b0) (0xc001eb06e0) Stream added, broadcasting: 1
I0206 13:26:55.651857       8 log.go:172] (0xc0019700b0) Reply frame received for 1
I0206 13:26:55.651915       8 log.go:172] (0xc0019700b0) (0xc00201c000) Create stream
I0206 13:26:55.651934       8 log.go:172] (0xc0019700b0) (0xc00201c000) Stream added, broadcasting: 3
I0206 13:26:55.654180       8 log.go:172] (0xc0019700b0) Reply frame received for 3
I0206 13:26:55.654291       8 log.go:172] (0xc0019700b0) (0xc002f7b360) Create stream
I0206 13:26:55.654330       8 log.go:172] (0xc0019700b0) (0xc002f7b360) Stream added, broadcasting: 5
I0206 13:26:55.656516       8 log.go:172] (0xc0019700b0) Reply frame received for 5
I0206 13:26:55.765396       8 log.go:172] (0xc0019700b0) Data frame received for 3
I0206 13:26:55.765471       8 log.go:172] (0xc00201c000) (3) Data frame handling
I0206 13:26:55.765481       8 log.go:172] (0xc00201c000) (3) Data frame sent
I0206 13:26:55.901217       8 log.go:172] (0xc0019700b0) Data frame received for 1
I0206 13:26:55.901418       8 log.go:172] (0xc0019700b0) (0xc00201c000) Stream removed, broadcasting: 3
I0206 13:26:55.901493       8 log.go:172] (0xc001eb06e0) (1) Data frame handling
I0206 13:26:55.901600       8 log.go:172] (0xc001eb06e0) (1) Data frame sent
I0206 13:26:55.901653       8 log.go:172] (0xc0019700b0) (0xc002f7b360) Stream removed, broadcasting: 5
I0206 13:26:55.901696       8 log.go:172] (0xc0019700b0) (0xc001eb06e0) Stream removed, broadcasting: 1
I0206 13:26:55.901715       8 log.go:172] (0xc0019700b0) Go away received
I0206 13:26:55.902210       8 log.go:172] (0xc0019700b0) (0xc001eb06e0) Stream removed, broadcasting: 1
I0206 13:26:55.902374       8 log.go:172] (0xc0019700b0) (0xc00201c000) Stream removed, broadcasting: 3
I0206 13:26:55.902390       8 log.go:172] (0xc0019700b0) (0xc002f7b360) Stream removed, broadcasting: 5
Feb  6 13:26:55.902: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 13:26:55.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-5378" for this suite.
Feb  6 13:27:19.956: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:27:20.055: INFO: namespace pod-network-test-5378 deletion completed in 24.133231078s

• [SLOW TEST:65.227 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
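
The node-to-pod variant skips the /dial indirection and curls each netserver's /hostName endpoint directly; the expected body is the serving pod's hostname (netserver-0 and netserver-1 above). One check, as the log runs it:

    kubectl exec -n pod-network-test-5378 host-test-container-pod -c hostexec -- \
      /bin/sh -c "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.1:8080/hostName | grep -v '^\s*$'"
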
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 13:27:20.055: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  6 13:27:20.246: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-4030'
Feb  6 13:27:20.397: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb  6 13:27:20.397: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617
Feb  6 13:27:20.412: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-4030'
Feb  6 13:27:20.569: INFO: stderr: ""
Feb  6 13:27:20.569: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 13:27:20.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4030" for this suite.
Feb  6 13:27:42.656: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:27:42.752: INFO: namespace kubectl-4030 deletion completed in 22.173662101s

• [SLOW TEST:22.697 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
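
As the deprecation warning above notes, kubectl run --generator=job/v1 was already on its way out in v1.15; kubectl create job is the replacement. Both forms, for comparison:

    # deprecated form exercised by the test:
    kubectl run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 \
      --image=docker.io/library/nginx:1.14-alpine
    # current equivalent:
    kubectl create job e2e-test-nginx-job --image=docker.io/library/nginx:1.14-alpine
    kubectl get jobs e2e-test-nginx-job
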
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 13:27:42.752: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9545.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-9545.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9545.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-9545.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb  6 13:27:56.921: INFO: File wheezy_udp@dns-test-service-3.dns-9545.svc.cluster.local from pod  dns-9545/dns-test-e6d08550-cade-4c89-9703-5628efb6a396 contains '' instead of 'foo.example.com.'
Feb  6 13:27:56.933: INFO: File jessie_udp@dns-test-service-3.dns-9545.svc.cluster.local from pod  dns-9545/dns-test-e6d08550-cade-4c89-9703-5628efb6a396 contains '' instead of 'foo.example.com.'
Feb  6 13:27:56.933: INFO: Lookups using dns-9545/dns-test-e6d08550-cade-4c89-9703-5628efb6a396 failed for: [wheezy_udp@dns-test-service-3.dns-9545.svc.cluster.local jessie_udp@dns-test-service-3.dns-9545.svc.cluster.local]

Feb  6 13:28:01.955: INFO: DNS probes using dns-test-e6d08550-cade-4c89-9703-5628efb6a396 succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9545.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-9545.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9545.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-9545.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb  6 13:28:18.155: INFO: File wheezy_udp@dns-test-service-3.dns-9545.svc.cluster.local from pod  dns-9545/dns-test-af3cd0be-1926-4a8c-8f59-8947e8d414ff contains '' instead of 'bar.example.com.'
Feb  6 13:28:18.169: INFO: File jessie_udp@dns-test-service-3.dns-9545.svc.cluster.local from pod  dns-9545/dns-test-af3cd0be-1926-4a8c-8f59-8947e8d414ff contains '' instead of 'bar.example.com.'
Feb  6 13:28:18.169: INFO: Lookups using dns-9545/dns-test-af3cd0be-1926-4a8c-8f59-8947e8d414ff failed for: [wheezy_udp@dns-test-service-3.dns-9545.svc.cluster.local jessie_udp@dns-test-service-3.dns-9545.svc.cluster.local]

Feb  6 13:28:23.190: INFO: File wheezy_udp@dns-test-service-3.dns-9545.svc.cluster.local from pod  dns-9545/dns-test-af3cd0be-1926-4a8c-8f59-8947e8d414ff contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb  6 13:28:23.209: INFO: File jessie_udp@dns-test-service-3.dns-9545.svc.cluster.local from pod  dns-9545/dns-test-af3cd0be-1926-4a8c-8f59-8947e8d414ff contains '' instead of 'bar.example.com.'
Feb  6 13:28:23.209: INFO: Lookups using dns-9545/dns-test-af3cd0be-1926-4a8c-8f59-8947e8d414ff failed for: [wheezy_udp@dns-test-service-3.dns-9545.svc.cluster.local jessie_udp@dns-test-service-3.dns-9545.svc.cluster.local]

Feb  6 13:28:28.184: INFO: File wheezy_udp@dns-test-service-3.dns-9545.svc.cluster.local from pod  dns-9545/dns-test-af3cd0be-1926-4a8c-8f59-8947e8d414ff contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb  6 13:28:28.190: INFO: File jessie_udp@dns-test-service-3.dns-9545.svc.cluster.local from pod  dns-9545/dns-test-af3cd0be-1926-4a8c-8f59-8947e8d414ff contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb  6 13:28:28.190: INFO: Lookups using dns-9545/dns-test-af3cd0be-1926-4a8c-8f59-8947e8d414ff failed for: [wheezy_udp@dns-test-service-3.dns-9545.svc.cluster.local jessie_udp@dns-test-service-3.dns-9545.svc.cluster.local]

Feb  6 13:28:33.215: INFO: DNS probes using dns-test-af3cd0be-1926-4a8c-8f59-8947e8d414ff succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9545.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-9545.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9545.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-9545.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb  6 13:28:49.598: INFO: File wheezy_udp@dns-test-service-3.dns-9545.svc.cluster.local from pod  dns-9545/dns-test-e23bd659-ea2a-44b5-8c2c-b473588f5050 contains '' instead of '10.96.108.215'
Feb  6 13:28:49.612: INFO: File jessie_udp@dns-test-service-3.dns-9545.svc.cluster.local from pod  dns-9545/dns-test-e23bd659-ea2a-44b5-8c2c-b473588f5050 contains '' instead of '10.96.108.215'
Feb  6 13:28:49.612: INFO: Lookups using dns-9545/dns-test-e23bd659-ea2a-44b5-8c2c-b473588f5050 failed for: [wheezy_udp@dns-test-service-3.dns-9545.svc.cluster.local jessie_udp@dns-test-service-3.dns-9545.svc.cluster.local]

Feb  6 13:28:54.630: INFO: DNS probes using dns-test-e23bd659-ea2a-44b5-8c2c-b473588f5050 succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 13:28:54.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9545" for this suite.
Feb  6 13:29:00.785: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:29:00.935: INFO: namespace dns-9545 deletion completed in 6.177661213s

• [SLOW TEST:78.182 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
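
Everything the two probe pods did above reduces to plain dig against the cluster DNS: an ExternalName service publishes a CNAME, so "dig +short <name> A" returns the target of the chain (first foo.example.com., then bar.example.com. once the service is re-pointed), and switching the service to type=ClusterIP makes the same name resolve to a cluster IP instead (10.96.108.215 in this run). A minimal hand-run sketch of the same probe, assuming a throwaway namespace and an image that ships dig (both the dns-demo namespace and the tutum/dnsutils image below are illustrative, not taken from this run):

kubectl create namespace dns-demo
kubectl create service externalname dns-test-service-3 \
    --external-name foo.example.com -n dns-demo
# The same loop the wheezy/jessie pods ran, pointed at the new namespace:
kubectl run dns-probe -n dns-demo --rm -i --restart=Never \
    --image=tutum/dnsutils -- \
    sh -c 'for i in $(seq 1 30); do dig +short dns-test-service-3.dns-demo.svc.cluster.local A; sleep 1; done'
# Re-pointing the ExternalName is what flips the expected answer:
kubectl patch service dns-test-service-3 -n dns-demo \
    -p '{"spec":{"externalName":"bar.example.com"}}'
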
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 13:29:00.935: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting the proxy server
Feb  6 13:29:01.016: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 13:29:01.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5228" for this suite.
Feb  6 13:29:07.115: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:29:07.239: INFO: namespace kubectl-5228 deletion completed in 6.144027255s

• [SLOW TEST:6.304 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
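
The --port 0 case passes because port 0 asks the kernel for an ephemeral port: the proxy binds whatever it is given, announces it on stdout, and the suite parses that port and curls /api/ through it. A hand-run sketch of the same check (the temp-file path is illustrative):

kubectl proxy --port=0 --disable-filter=true > /tmp/proxy.out 2>&1 &
PROXY_PID=$!
sleep 1
# The proxy announces its port, e.g. "Starting to serve on 127.0.0.1:41203":
PORT=$(sed -n 's/.*127\.0\.0\.1:\([0-9]\{1,\}\).*/\1/p' /tmp/proxy.out | head -1)
curl -s "http://127.0.0.1:${PORT}/api/"   # should print the APIVersions JSON
kill "$PROXY_PID"
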
SSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 13:29:07.239: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-39b069b8-6fcb-4b3a-9f0c-71356d3491b3
STEP: Creating a pod to test consume secrets
Feb  6 13:29:07.417: INFO: Waiting up to 5m0s for pod "pod-secrets-c33f3e74-7a28-4a72-989f-c2733861522e" in namespace "secrets-3778" to be "success or failure"
Feb  6 13:29:07.455: INFO: Pod "pod-secrets-c33f3e74-7a28-4a72-989f-c2733861522e": Phase="Pending", Reason="", readiness=false. Elapsed: 37.973372ms
Feb  6 13:29:09.464: INFO: Pod "pod-secrets-c33f3e74-7a28-4a72-989f-c2733861522e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046585013s
Feb  6 13:29:11.473: INFO: Pod "pod-secrets-c33f3e74-7a28-4a72-989f-c2733861522e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055768719s
Feb  6 13:29:13.481: INFO: Pod "pod-secrets-c33f3e74-7a28-4a72-989f-c2733861522e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063620581s
Feb  6 13:29:15.489: INFO: Pod "pod-secrets-c33f3e74-7a28-4a72-989f-c2733861522e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.072112503s
Feb  6 13:29:17.534: INFO: Pod "pod-secrets-c33f3e74-7a28-4a72-989f-c2733861522e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.117240819s
STEP: Saw pod success
Feb  6 13:29:17.535: INFO: Pod "pod-secrets-c33f3e74-7a28-4a72-989f-c2733861522e" satisfied condition "success or failure"
Feb  6 13:29:17.541: INFO: Trying to get logs from node iruya-node pod pod-secrets-c33f3e74-7a28-4a72-989f-c2733861522e container secret-volume-test: 
STEP: delete the pod
Feb  6 13:29:17.621: INFO: Waiting for pod pod-secrets-c33f3e74-7a28-4a72-989f-c2733861522e to disappear
Feb  6 13:29:17.727: INFO: Pod pod-secrets-c33f3e74-7a28-4a72-989f-c2733861522e no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 13:29:17.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3778" for this suite.
Feb  6 13:29:23.779: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:29:23.930: INFO: namespace secrets-3778 deletion completed in 6.187988714s

• [SLOW TEST:16.691 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
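
The pod above exercises the standard secret-volume path: each key of the Secret is mounted as a read-only file, the test container reads the file and exits 0, and the suite checks that the pod reached Succeeded and that its logs carry the secret's value. A minimal sketch with illustrative names (the suite uses a dedicated mount-test image; busybox stands in here):

kubectl create secret generic demo-secret --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: demo-secret
EOF
kubectl logs pod-secrets-demo   # prints value-1 once the pod has run
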
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 13:29:23.930: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Feb  6 13:29:34.650: INFO: Successfully updated pod "labelsupdate3f13edf0-1e90-4b5c-8668-f0895b330b2a"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 13:29:36.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6204" for this suite.
Feb  6 13:29:58.799: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:29:58.924: INFO: namespace projected-6204 deletion completed in 22.147574264s

• [SLOW TEST:34.994 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
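
This test leans on a property of downwardAPI volumes that environment variables do not have: the kubelet rewrites the projected file when pod metadata changes, with no restart. That is why the suite can relabel the running pod ("Successfully updated pod ...") and simply re-read the file. A sketch of the same behavior, with illustrative names:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate-demo
  labels:
    key1: value1
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; echo; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels
EOF
# Relabel the live pod; the file follows after the next kubelet sync:
kubectl label pod labelsupdate-demo key1=value2 --overwrite
kubectl logs -f labelsupdate-demo   # key1="value1", later key1="value2"
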
SSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 13:29:58.924: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-dca2e788-4e0b-471c-9d8c-cfff7bc2e8f6
STEP: Creating a pod to test consume secrets
Feb  6 13:29:59.049: INFO: Waiting up to 5m0s for pod "pod-secrets-ca270efd-b2e6-43e9-9d14-eb307e66eeef" in namespace "secrets-3269" to be "success or failure"
Feb  6 13:29:59.058: INFO: Pod "pod-secrets-ca270efd-b2e6-43e9-9d14-eb307e66eeef": Phase="Pending", Reason="", readiness=false. Elapsed: 8.636454ms
Feb  6 13:30:01.067: INFO: Pod "pod-secrets-ca270efd-b2e6-43e9-9d14-eb307e66eeef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018007748s
Feb  6 13:30:03.071: INFO: Pod "pod-secrets-ca270efd-b2e6-43e9-9d14-eb307e66eeef": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022104596s
Feb  6 13:30:05.080: INFO: Pod "pod-secrets-ca270efd-b2e6-43e9-9d14-eb307e66eeef": Phase="Pending", Reason="", readiness=false. Elapsed: 6.030930388s
Feb  6 13:30:07.089: INFO: Pod "pod-secrets-ca270efd-b2e6-43e9-9d14-eb307e66eeef": Phase="Pending", Reason="", readiness=false. Elapsed: 8.040147332s
Feb  6 13:30:09.099: INFO: Pod "pod-secrets-ca270efd-b2e6-43e9-9d14-eb307e66eeef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.049635775s
STEP: Saw pod success
Feb  6 13:30:09.099: INFO: Pod "pod-secrets-ca270efd-b2e6-43e9-9d14-eb307e66eeef" satisfied condition "success or failure"
Feb  6 13:30:09.103: INFO: Trying to get logs from node iruya-node pod pod-secrets-ca270efd-b2e6-43e9-9d14-eb307e66eeef container secret-volume-test: 
STEP: delete the pod
Feb  6 13:30:09.261: INFO: Waiting for pod pod-secrets-ca270efd-b2e6-43e9-9d14-eb307e66eeef to disappear
Feb  6 13:30:09.274: INFO: Pod pod-secrets-ca270efd-b2e6-43e9-9d14-eb307e66eeef no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 13:30:09.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3269" for this suite.
Feb  6 13:30:15.309: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:30:15.409: INFO: namespace secrets-3269 deletion completed in 6.131257924s

• [SLOW TEST:16.485 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
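
"With mappings" means the secret volume uses items to project a key under a chosen path rather than the default file named after the key, so the container reads the mapped path. A sketch, again with illustrative names:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: secret-map-demo
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-map-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-map-demo
      items:
      - key: data-1
        path: new-path-data-1
EOF
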
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 13:30:15.409: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  6 13:30:15.532: INFO: Waiting up to 5m0s for pod "downwardapi-volume-58d97d03-09ba-41ad-9596-49aab9098774" in namespace "projected-484" to be "success or failure"
Feb  6 13:30:15.538: INFO: Pod "downwardapi-volume-58d97d03-09ba-41ad-9596-49aab9098774": Phase="Pending", Reason="", readiness=false. Elapsed: 6.196782ms
Feb  6 13:30:17.545: INFO: Pod "downwardapi-volume-58d97d03-09ba-41ad-9596-49aab9098774": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012736909s
Feb  6 13:30:19.558: INFO: Pod "downwardapi-volume-58d97d03-09ba-41ad-9596-49aab9098774": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025519983s
Feb  6 13:30:21.565: INFO: Pod "downwardapi-volume-58d97d03-09ba-41ad-9596-49aab9098774": Phase="Pending", Reason="", readiness=false. Elapsed: 6.033184983s
Feb  6 13:30:23.577: INFO: Pod "downwardapi-volume-58d97d03-09ba-41ad-9596-49aab9098774": Phase="Pending", Reason="", readiness=false. Elapsed: 8.044797541s
Feb  6 13:30:25.584: INFO: Pod "downwardapi-volume-58d97d03-09ba-41ad-9596-49aab9098774": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.051420136s
STEP: Saw pod success
Feb  6 13:30:25.584: INFO: Pod "downwardapi-volume-58d97d03-09ba-41ad-9596-49aab9098774" satisfied condition "success or failure"
Feb  6 13:30:25.587: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-58d97d03-09ba-41ad-9596-49aab9098774 container client-container: 
STEP: delete the pod
Feb  6 13:30:25.652: INFO: Waiting for pod downwardapi-volume-58d97d03-09ba-41ad-9596-49aab9098774 to disappear
Feb  6 13:30:25.671: INFO: Pod downwardapi-volume-58d97d03-09ba-41ad-9596-49aab9098774 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 13:30:25.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-484" for this suite.
Feb  6 13:30:31.801: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:30:31.961: INFO: namespace projected-484 deletion completed in 6.284817931s

• [SLOW TEST:16.552 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
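
DefaultMode here is the projected volume's defaultMode field: every projected file gets that permission unless an individual item overrides it, and the container simply stats the file. A sketch with illustrative names; 0400 is an assumed value for the demonstration (note it must be written in octal, which the YAML parser turns into the integer 256 the API expects):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "ls -l /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      defaultMode: 0400   # assumed mode for this sketch; files show -r--------
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
EOF
kubectl logs downwardapi-mode-demo   # expect -r-------- on the file
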
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 13:30:31.962: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-xk4z6 in namespace proxy-3514
I0206 13:30:32.138362       8 runners.go:180] Created replication controller with name: proxy-service-xk4z6, namespace: proxy-3514, replica count: 1
I0206 13:30:33.188958       8 runners.go:180] proxy-service-xk4z6 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0206 13:30:34.189199       8 runners.go:180] proxy-service-xk4z6 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0206 13:30:35.189407       8 runners.go:180] proxy-service-xk4z6 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0206 13:30:36.189664       8 runners.go:180] proxy-service-xk4z6 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0206 13:30:37.189990       8 runners.go:180] proxy-service-xk4z6 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0206 13:30:38.190188       8 runners.go:180] proxy-service-xk4z6 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0206 13:30:39.190414       8 runners.go:180] proxy-service-xk4z6 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0206 13:30:40.190711       8 runners.go:180] proxy-service-xk4z6 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0206 13:30:41.190984       8 runners.go:180] proxy-service-xk4z6 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0206 13:30:42.191218       8 runners.go:180] proxy-service-xk4z6 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0206 13:30:43.191523       8 runners.go:180] proxy-service-xk4z6 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0206 13:30:44.191769       8 runners.go:180] proxy-service-xk4z6 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0206 13:30:45.191979       8 runners.go:180] proxy-service-xk4z6 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0206 13:30:46.192256       8 runners.go:180] proxy-service-xk4z6 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0206 13:30:47.192522       8 runners.go:180] proxy-service-xk4z6 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0206 13:30:48.192796       8 runners.go:180] proxy-service-xk4z6 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0206 13:30:49.193034       8 runners.go:180] proxy-service-xk4z6 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb  6 13:30:49.196: INFO: setup took 17.135648687s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Feb  6 13:30:49.217: INFO: (0) /api/v1/namespaces/proxy-3514/pods/proxy-service-xk4z6-pbgkb:160/proxy/: foo (200; 20.825019ms)
Feb  6 13:30:49.219: INFO: (0) /api/v1/namespaces/proxy-3514/pods/proxy-service-xk4z6-pbgkb:1080/proxy/: test<... (200; 23.21523ms)
Feb  6 13:30:49.221: INFO: (0) /api/v1/namespaces/proxy-3514/services/http:proxy-service-xk4z6:portname1/proxy/: foo (200; 24.760678ms)
Feb  6 13:30:49.221: INFO: (0) /api/v1/namespaces/proxy-3514/pods/http:proxy-service-xk4z6-pbgkb:162/proxy/: bar (200; 24.969873ms)
Feb  6 13:30:49.221: INFO: (0) /api/v1/namespaces/proxy-3514/pods/proxy-service-xk4z6-pbgkb:162/proxy/: bar (200; 25.122873ms)
Feb  6 13:30:49.221: INFO: (0) /api/v1/namespaces/proxy-3514/pods/proxy-service-xk4z6-pbgkb/proxy/: test (200; 25.236642ms)
Feb  6 13:30:49.221: INFO: (0) /api/v1/namespaces/proxy-3514/pods/http:proxy-service-xk4z6-pbgkb:160/proxy/: foo (200; 25.306745ms)
Feb  6 13:30:49.222: INFO: (0) /api/v1/namespaces/proxy-3514/pods/http:proxy-service-xk4z6-pbgkb:1080/proxy/: ... (200; 25.364641ms)
Feb  6 13:30:49.222: INFO: (0) /api/v1/namespaces/proxy-3514/services/proxy-service-xk4z6:portname1/proxy/: foo (200; 25.473861ms)
Feb  6 13:30:49.225: INFO: (0) /api/v1/namespaces/proxy-3514/services/proxy-service-xk4z6:portname2/proxy/: bar (200; 29.123122ms)
Feb  6 13:30:49.225: INFO: (0) /api/v1/namespaces/proxy-3514/services/http:proxy-service-xk4z6:portname2/proxy/: bar (200; 29.092904ms)
Feb  6 13:30:49.228: INFO: (0) /api/v1/namespaces/proxy-3514/pods/https:proxy-service-xk4z6-pbgkb:443/proxy/: ... (200; 10.602642ms)
Feb  6 13:30:49.249: INFO: (1) /api/v1/namespaces/proxy-3514/pods/https:proxy-service-xk4z6-pbgkb:443/proxy/: test (200; 12.20734ms)
Feb  6 13:30:49.250: INFO: (1) /api/v1/namespaces/proxy-3514/pods/proxy-service-xk4z6-pbgkb:1080/proxy/: test<... (200; 13.042897ms)
Feb  6 13:30:49.250: INFO: (1) /api/v1/namespaces/proxy-3514/pods/http:proxy-service-xk4z6-pbgkb:160/proxy/: foo (200; 13.097212ms)
Feb  6 13:30:49.251: INFO: (1) /api/v1/namespaces/proxy-3514/services/http:proxy-service-xk4z6:portname1/proxy/: foo (200; 13.651815ms)
Feb  6 13:30:49.251: INFO: (1) /api/v1/namespaces/proxy-3514/pods/https:proxy-service-xk4z6-pbgkb:460/proxy/: tls baz (200; 13.79878ms)
Feb  6 13:30:49.252: INFO: (1) /api/v1/namespaces/proxy-3514/services/proxy-service-xk4z6:portname2/proxy/: bar (200; 14.848871ms)
Feb  6 13:30:49.252: INFO: (1) /api/v1/namespaces/proxy-3514/pods/proxy-service-xk4z6-pbgkb:162/proxy/: bar (200; 15.166389ms)
Feb  6 13:30:49.252: INFO: (1) /api/v1/namespaces/proxy-3514/pods/https:proxy-service-xk4z6-pbgkb:462/proxy/: tls qux (200; 15.079399ms)
Feb  6 13:30:49.252: INFO: (1) /api/v1/namespaces/proxy-3514/services/proxy-service-xk4z6:portname1/proxy/: foo (200; 15.021277ms)
Feb  6 13:30:49.253: INFO: (1) /api/v1/namespaces/proxy-3514/services/http:proxy-service-xk4z6:portname2/proxy/: bar (200; 16.158037ms)
Feb  6 13:30:49.259: INFO: (2) /api/v1/namespaces/proxy-3514/pods/proxy-service-xk4z6-pbgkb:162/proxy/: bar (200; 5.985553ms)
Feb  6 13:30:49.262: INFO: (2) /api/v1/namespaces/proxy-3514/pods/https:proxy-service-xk4z6-pbgkb:462/proxy/: tls qux (200; 8.382007ms)
Feb  6 13:30:49.262: INFO: (2) /api/v1/namespaces/proxy-3514/pods/https:proxy-service-xk4z6-pbgkb:443/proxy/: test<... (200; 9.025005ms)
Feb  6 13:30:49.263: INFO: (2) /api/v1/namespaces/proxy-3514/pods/http:proxy-service-xk4z6-pbgkb:162/proxy/: bar (200; 9.229016ms)
Feb  6 13:30:49.263: INFO: (2) /api/v1/namespaces/proxy-3514/pods/http:proxy-service-xk4z6-pbgkb:1080/proxy/: ... (200; 9.864841ms)
Feb  6 13:30:49.264: INFO: (2) /api/v1/namespaces/proxy-3514/pods/http:proxy-service-xk4z6-pbgkb:160/proxy/: foo (200; 10.623234ms)
Feb  6 13:30:49.264: INFO: (2) /api/v1/namespaces/proxy-3514/services/http:proxy-service-xk4z6:portname2/proxy/: bar (200; 10.97087ms)
Feb  6 13:30:49.265: INFO: (2) /api/v1/namespaces/proxy-3514/services/proxy-service-xk4z6:portname1/proxy/: foo (200; 11.447987ms)
Feb  6 13:30:49.265: INFO: (2) /api/v1/namespaces/proxy-3514/pods/proxy-service-xk4z6-pbgkb/proxy/: test (200; 11.330696ms)
Feb  6 13:30:49.265: INFO: (2) /api/v1/namespaces/proxy-3514/services/https:proxy-service-xk4z6:tlsportname1/proxy/: tls baz (200; 11.662792ms)
Feb  6 13:30:49.265: INFO: (2) /api/v1/namespaces/proxy-3514/pods/proxy-service-xk4z6-pbgkb:160/proxy/: foo (200; 11.734318ms)
Feb  6 13:30:49.265: INFO: (2) /api/v1/namespaces/proxy-3514/pods/https:proxy-service-xk4z6-pbgkb:460/proxy/: tls baz (200; 11.91675ms)
Feb  6 13:30:49.265: INFO: (2) /api/v1/namespaces/proxy-3514/services/http:proxy-service-xk4z6:portname1/proxy/: foo (200; 11.853113ms)
Feb  6 13:30:49.266: INFO: (2) /api/v1/namespaces/proxy-3514/services/proxy-service-xk4z6:portname2/proxy/: bar (200; 12.679238ms)
Feb  6 13:30:49.266: INFO: (2) /api/v1/namespaces/proxy-3514/services/https:proxy-service-xk4z6:tlsportname2/proxy/: tls qux (200; 12.790687ms)
Feb  6 13:30:49.273: INFO: (3) /api/v1/namespaces/proxy-3514/pods/https:proxy-service-xk4z6-pbgkb:460/proxy/: tls baz (200; 6.37521ms)
Feb  6 13:30:49.274: INFO: (3) /api/v1/namespaces/proxy-3514/pods/proxy-service-xk4z6-pbgkb:160/proxy/: foo (200; 7.992088ms)
Feb  6 13:30:49.274: INFO: (3) /api/v1/namespaces/proxy-3514/pods/proxy-service-xk4z6-pbgkb/proxy/: test (200; 8.215205ms)
Feb  6 13:30:49.275: INFO: (3) /api/v1/namespaces/proxy-3514/pods/proxy-service-xk4z6-pbgkb:1080/proxy/: test<... (200; 8.251401ms)
Feb  6 13:30:49.275: INFO: (3) /api/v1/namespaces/proxy-3514/pods/proxy-service-xk4z6-pbgkb:162/proxy/: bar (200; 8.363467ms)
Feb  6 13:30:49.275: INFO: (3) /api/v1/namespaces/proxy-3514/pods/https:proxy-service-xk4z6-pbgkb:443/proxy/: ... (200; 9.104493ms)
Feb  6 13:30:49.275: INFO: (3) /api/v1/namespaces/proxy-3514/pods/http:proxy-service-xk4z6-pbgkb:160/proxy/: foo (200; 9.122508ms)
Feb  6 13:30:49.275: INFO: (3) /api/v1/namespaces/proxy-3514/pods/https:proxy-service-xk4z6-pbgkb:462/proxy/: tls qux (200; 9.206918ms)
Feb  6 13:30:49.275: INFO: (3) /api/v1/namespaces/proxy-3514/pods/http:proxy-service-xk4z6-pbgkb:162/proxy/: bar (200; 9.276491ms)
Feb  6 13:30:49.278: INFO: (3) /api/v1/namespaces/proxy-3514/services/https:proxy-service-xk4z6:tlsportname2/proxy/: tls qux (200; 11.546908ms)
Feb  6 13:30:49.278: INFO: (3) /api/v1/namespaces/proxy-3514/services/https:proxy-service-xk4z6:tlsportname1/proxy/: tls baz (200; 11.5486ms)
Feb  6 13:30:49.278: INFO: (3) /api/v1/namespaces/proxy-3514/services/http:proxy-service-xk4z6:portname1/proxy/: foo (200; 12.002483ms)
Feb  6 13:30:49.279: INFO: (3) /api/v1/namespaces/proxy-3514/services/proxy-service-xk4z6:portname1/proxy/: foo (200; 12.893827ms)
Feb  6 13:30:49.279: INFO: (3) /api/v1/namespaces/proxy-3514/services/proxy-service-xk4z6:portname2/proxy/: bar (200; 13.139539ms)
Feb  6 13:30:49.286: INFO: (4) /api/v1/namespaces/proxy-3514/pods/http:proxy-service-xk4z6-pbgkb:160/proxy/: foo (200; 6.803283ms)
Feb  6 13:30:49.286: INFO: (4) /api/v1/namespaces/proxy-3514/pods/proxy-service-xk4z6-pbgkb/proxy/: test (200; 6.661892ms)
Feb  6 13:30:49.287: INFO: (4) /api/v1/namespaces/proxy-3514/pods/http:proxy-service-xk4z6-pbgkb:1080/proxy/: ... (200; 7.475975ms)
Feb  6 13:30:49.287: INFO: (4) /api/v1/namespaces/proxy-3514/pods/proxy-service-xk4z6-pbgkb:1080/proxy/: test<... (200; 7.579029ms)
Feb  6 13:30:49.288: INFO: (4) /api/v1/namespaces/proxy-3514/pods/https:proxy-service-xk4z6-pbgkb:462/proxy/: tls qux (200; 8.017503ms)
Feb  6 13:30:49.288: INFO: (4) /api/v1/namespaces/proxy-3514/pods/https:proxy-service-xk4z6-pbgkb:460/proxy/: tls baz (200; 8.068342ms)
Feb  6 13:30:49.288: INFO: (4) /api/v1/namespaces/proxy-3514/services/https:proxy-service-xk4z6:tlsportname2/proxy/: tls qux (200; 8.092988ms)
Feb  6 13:30:49.288: INFO: (4) /api/v1/namespaces/proxy-3514/pods/http:proxy-service-xk4z6-pbgkb:162/proxy/: bar (200; 8.203572ms)
Feb  6 13:30:49.288: INFO: (4) /api/v1/namespaces/proxy-3514/pods/proxy-service-xk4z6-pbgkb:162/proxy/: bar (200; 8.23038ms)
Feb  6 13:30:49.288: INFO: (4) /api/v1/namespaces/proxy-3514/pods/https:proxy-service-xk4z6-pbgkb:443/proxy/: ... (200; 8.313411ms)
Feb  6 13:30:49.299: INFO: (5) /api/v1/namespaces/proxy-3514/pods/proxy-service-xk4z6-pbgkb:1080/proxy/: test<... (200; 8.740091ms)
Feb  6 13:30:49.299: INFO: (5) /api/v1/namespaces/proxy-3514/pods/proxy-service-xk4z6-pbgkb:162/proxy/: bar (200; 9.004016ms)
Feb  6 13:30:49.299: INFO: (5) /api/v1/namespaces/proxy-3514/pods/https:proxy-service-xk4z6-pbgkb:443/proxy/: test (200; 10.67535ms)
Feb  6 13:30:49.301: INFO: (5) /api/v1/namespaces/proxy-3514/pods/http:proxy-service-xk4z6-pbgkb:162/proxy/: bar (200; 10.923432ms)
Feb  6 13:30:49.301: INFO: (5) /api/v1/namespaces/proxy-3514/pods/proxy-service-xk4z6-pbgkb:160/proxy/: foo (200; 11.117566ms)
Feb  6 13:30:49.301: INFO: (5) /api/v1/namespaces/proxy-3514/pods/https:proxy-service-xk4z6-pbgkb:460/proxy/: tls baz (200; 11.196277ms)
Feb  6 13:30:49.301: INFO: (5) /api/v1/namespaces/proxy-3514/pods/http:proxy-service-xk4z6-pbgkb:160/proxy/: foo (200; 11.171816ms)
Feb  6 13:30:49.302: INFO: (5) /api/v1/namespaces/proxy-3514/services/https:proxy-service-xk4z6:tlsportname2/proxy/: tls qux (200; 11.968609ms)
Feb  6 13:30:49.307: INFO: (6) /api/v1/namespaces/proxy-3514/pods/http:proxy-service-xk4z6-pbgkb:162/proxy/: bar (200; 4.516041ms)
Feb  6 13:30:49.308: INFO: (6) /api/v1/namespaces/proxy-3514/pods/https:proxy-service-xk4z6-pbgkb:460/proxy/: tls baz (200; 5.703282ms)
Feb  6 13:30:49.309: INFO: (6) /api/v1/namespaces/proxy-3514/pods/https:proxy-service-xk4z6-pbgkb:462/proxy/: tls qux (200; 6.700896ms)
Feb  6 13:30:49.309: INFO: (6) /api/v1/namespaces/proxy-3514/pods/http:proxy-service-xk4z6-pbgkb:1080/proxy/: ... (200; 6.94539ms)
Feb  6 13:30:49.310: INFO: (6) /api/v1/namespaces/proxy-3514/pods/http:proxy-service-xk4z6-pbgkb:160/proxy/: foo (200; 7.32678ms)
Feb  6 13:30:49.310: INFO: (6) /api/v1/namespaces/proxy-3514/pods/proxy-service-xk4z6-pbgkb:1080/proxy/: test<... (200; 7.57353ms)
Feb  6 13:30:49.310: INFO: (6) /api/v1/namespaces/proxy-3514/pods/https:proxy-service-xk4z6-pbgkb:443/proxy/: test (200; 8.813497ms)
Feb  6 13:30:49.311: INFO: (6) /api/v1/namespaces/proxy-3514/pods/proxy-service-xk4z6-pbgkb:160/proxy/: foo (200; 9.252376ms)
Feb  6 13:30:49.312: INFO: (6) /api/v1/namespaces/proxy-3514/pods/proxy-service-xk4z6-pbgkb:162/proxy/: bar (200; 10.183522ms)
Feb  6 13:30:49.322: INFO: (6) /api/v1/namespaces/proxy-3514/services/http:proxy-service-xk4z6:portname2/proxy/: bar (200; 19.549829ms)
Feb  6 13:30:49.322: INFO: (6) /api/v1/namespaces/proxy-3514/services/proxy-service-xk4z6:portname1/proxy/: foo (200; 19.586494ms)
Feb  6 13:30:49.322: INFO: (6) /api/v1/namespaces/proxy-3514/services/https:proxy-service-xk4z6:tlsportname2/proxy/: tls qux (200; 19.695624ms)
Feb  6 13:30:49.322: INFO: (6) /api/v1/namespaces/proxy-3514/services/http:proxy-service-xk4z6:portname1/proxy/: foo (200; 19.636305ms)
Feb  6 13:30:49.322: INFO: (6) /api/v1/namespaces/proxy-3514/services/https:proxy-service-xk4z6:tlsportname1/proxy/: tls baz (200; 19.726606ms)
Feb  6 13:30:49.323: INFO: (6) /api/v1/namespaces/proxy-3514/services/proxy-service-xk4z6:portname2/proxy/: bar (200; 20.717869ms)
Feb  6 13:30:49.329: INFO: (7) /api/v1/namespaces/proxy-3514/pods/http:proxy-service-xk4z6-pbgkb:162/proxy/: bar (200; 5.770752ms)
Feb  6 13:30:49.329: INFO: (7) /api/v1/namespaces/proxy-3514/pods/proxy-service-xk4z6-pbgkb:1080/proxy/: test<... (200; 6.39302ms)
Feb  6 13:30:49.329: INFO: (7) /api/v1/namespaces/proxy-3514/pods/https:proxy-service-xk4z6-pbgkb:462/proxy/: tls qux (200; 6.281077ms)
Feb  6 13:30:49.330: INFO: (7) /api/v1/namespaces/proxy-3514/pods/http:proxy-service-xk4z6-pbgkb:1080/proxy/: ... (200; 6.66495ms)
Feb  6 13:30:49.330: INFO: (7) /api/v1/namespaces/proxy-3514/pods/https:proxy-service-xk4z6-pbgkb:443/proxy/: test (200; 8.482364ms)
Feb  6 13:30:49.332: INFO: (7) /api/v1/namespaces/proxy-3514/pods/http:proxy-service-xk4z6-pbgkb:160/proxy/: foo (200; 8.551957ms)
Feb  6 13:30:49.333: INFO: (7) /api/v1/namespaces/proxy-3514/services/http:proxy-service-xk4z6:portname1/proxy/: foo (200; 9.924969ms)
Feb  6 13:30:49.333: INFO: (7) /api/v1/namespaces/proxy-3514/services/https:proxy-service-xk4z6:tlsportname1/proxy/: tls baz (200; 9.988602ms)
Feb  6 13:30:49.333: INFO: (7) /api/v1/namespaces/proxy-3514/services/http:proxy-service-xk4z6:portname2/proxy/: bar (200; 10.241852ms)
Feb  6 13:30:49.335: INFO: (7) /api/v1/namespaces/proxy-3514/services/proxy-service-xk4z6:portname2/proxy/: bar (200; 12.127802ms)
Feb  6 13:30:49.335: INFO: (7) /api/v1/namespaces/proxy-3514/services/proxy-service-xk4z6:portname1/proxy/: foo (200; 12.261242ms)
Feb  6 13:30:49.336: INFO: (7) /api/v1/namespaces/proxy-3514/services/https:proxy-service-xk4z6:tlsportname2/proxy/: tls qux (200; 12.82277ms)
Feb  6 13:30:49.343: INFO: (8) /api/v1/namespaces/proxy-3514/pods/proxy-service-xk4z6-pbgkb:160/proxy/: foo (200; 6.557777ms)
Feb  6 13:30:49.343: INFO: (8) /api/v1/namespaces/proxy-3514/services/proxy-service-xk4z6:portname1/proxy/: foo (200; 6.861995ms)
Feb  6 13:30:49.344: INFO: (8) /api/v1/namespaces/proxy-3514/pods/proxy-service-xk4z6-pbgkb:1080/proxy/: test<... (200; 7.500202ms)
Feb  6 13:30:49.344: INFO: (8) /api/v1/namespaces/proxy-3514/pods/http:proxy-service-xk4z6-pbgkb:162/proxy/: bar (200; 8.384214ms)
Feb  6 13:30:49.345: INFO: (8) /api/v1/namespaces/proxy-3514/pods/http:proxy-service-xk4z6-pbgkb:1080/proxy/: ... (200; 8.966628ms)
Feb  6 13:30:49.345: INFO: (8) /api/v1/namespaces/proxy-3514/pods/https:proxy-service-xk4z6-pbgkb:443/proxy/: test (200; 9.789523ms)
Feb  6 13:30:49.346: INFO: (8) /api/v1/namespaces/proxy-3514/pods/http:proxy-service-xk4z6-pbgkb:160/proxy/: foo (200; 10.543653ms)
Feb  6 13:30:49.346: INFO: (8) /api/v1/namespaces/proxy-3514/services/proxy-service-xk4z6:portname2/proxy/: bar (200; 10.519262ms)
Feb  6 13:30:49.347: INFO: (8) /api/v1/namespaces/proxy-3514/pods/https:proxy-service-xk4z6-pbgkb:462/proxy/: tls qux (200; 10.53782ms)
Feb  6 13:30:49.347: INFO: (8) /api/v1/namespaces/proxy-3514/pods/proxy-service-xk4z6-pbgkb:162/proxy/: bar (200; 10.564589ms)
Feb  6 13:30:49.347: INFO: (8) /api/v1/namespaces/proxy-3514/pods/https:proxy-service-xk4z6-pbgkb:460/proxy/: tls baz (200; 10.909812ms)
Feb  6 13:30:49.348: INFO: (8) /api/v1/namespaces/proxy-3514/services/https:proxy-service-xk4z6:tlsportname1/proxy/: tls baz (200; 11.660468ms)
Feb  6 13:30:49.350: INFO: (8) /api/v1/namespaces/proxy-3514/services/https:proxy-service-xk4z6:tlsportname2/proxy/: tls qux (200; 13.610561ms)
Feb  6 13:30:49.350: INFO: (8) /api/v1/namespaces/proxy-3514/services/http:proxy-service-xk4z6:portname1/proxy/: foo (200; 14.016352ms)
Feb  6 13:30:49.350: INFO: (8) /api/v1/namespaces/proxy-3514/services/http:proxy-service-xk4z6:portname2/proxy/: bar (200; 14.212092ms)
Feb  6 13:30:49.355: INFO: (9) /api/v1/namespaces/proxy-3514/pods/https:proxy-service-xk4z6-pbgkb:462/proxy/: tls qux (200; 4.749181ms)
Feb  6 13:30:49.355: INFO: (9) /api/v1/namespaces/proxy-3514/pods/proxy-service-xk4z6-pbgkb/proxy/: test (200; 4.88951ms)
Feb  6 13:30:49.355: INFO: (9) /api/v1/namespaces/proxy-3514/pods/https:proxy-service-xk4z6-pbgkb:443/proxy/: test<... (200; 6.719231ms)
Feb  6 13:30:49.357: INFO: (9) /api/v1/namespaces/proxy-3514/pods/http:proxy-service-xk4z6-pbgkb:1080/proxy/: ... (200; 7.125865ms)
Feb  6 13:30:49.357: INFO: (9) /api/v1/namespaces/proxy-3514/pods/http:proxy-service-xk4z6-pbgkb:160/proxy/: foo (200; 7.068123ms)
Feb  6 13:30:49.357: INFO: (9) /api/v1/namespaces/proxy-3514/pods/proxy-service-xk4z6-pbgkb:162/proxy/: bar (200; 7.150714ms)
Feb  6 13:30:49.364: INFO: (9) /api/v1/namespaces/proxy-3514/services/proxy-service-xk4z6:portname1/proxy/: foo (200; 13.956202ms)
Feb  6 13:30:49.365: INFO: (9) /api/v1/namespaces/proxy-3514/pods/https:proxy-service-xk4z6-pbgkb:460/proxy/: tls baz (200; 14.446315ms)
Feb  6 13:30:49.365: INFO: (9) /api/v1/namespaces/proxy-3514/services/proxy-service-xk4z6:portname2/proxy/: bar (200; 14.6268ms)
Feb  6 13:30:49.365: INFO: (9) /api/v1/namespaces/proxy-3514/services/https:proxy-service-xk4z6:tlsportname2/proxy/: tls qux (200; 14.964026ms)
Feb  6 13:30:49.365: INFO: (9) /api/v1/namespaces/proxy-3514/services/http:proxy-service-xk4z6:portname2/proxy/: bar (200; 15.057879ms)
Feb  6 13:30:49.366: INFO: (9) /api/v1/namespaces/proxy-3514/pods/http:proxy-service-xk4z6-pbgkb:162/proxy/: bar (200; 15.309598ms)
Feb  6 13:30:49.366: INFO: (9) /api/v1/namespaces/proxy-3514/services/http:proxy-service-xk4z6:portname1/proxy/: foo (200; 15.308197ms)
Feb  6 13:30:49.366: INFO: (9) /api/v1/namespaces/proxy-3514/services/https:proxy-service-xk4z6:tlsportname1/proxy/: tls baz (200; 15.96201ms)
Feb  6 13:30:49.375: INFO: (10) /api/v1/namespaces/proxy-3514/pods/proxy-service-xk4z6-pbgkb/proxy/: test (200; 8.642732ms)
Feb  6 13:30:49.375: INFO: (10) /api/v1/namespaces/proxy-3514/pods/proxy-service-xk4z6-pbgkb:1080/proxy/: test<... (200; 8.999763ms)
Feb  6 13:30:49.376: INFO: (10) /api/v1/namespaces/proxy-3514/pods/http:proxy-service-xk4z6-pbgkb:162/proxy/: bar (200; 9.490351ms)
Feb  6 13:30:49.376: INFO: (10) /api/v1/namespaces/proxy-3514/pods/https:proxy-service-xk4z6-pbgkb:460/proxy/: tls baz (200; 9.501445ms)
Feb  6 13:30:49.376: INFO: (10) /api/v1/namespaces/proxy-3514/pods/http:proxy-service-xk4z6-pbgkb:1080/proxy/: ... (200; 9.740335ms)
Feb  6 13:30:49.379: INFO: (10) /api/v1/namespaces/proxy-3514/services/proxy-service-xk4z6:portname1/proxy/: foo (200; 12.696491ms)
Feb  6 13:30:49.380: INFO: (10) /api/v1/namespaces/proxy-3514/pods/https:proxy-service-xk4z6-pbgkb:462/proxy/: tls qux (200; 13.523049ms)
Feb  6 13:30:49.380: INFO: (10) /api/v1/namespaces/proxy-3514/pods/proxy-service-xk4z6-pbgkb:162/proxy/: bar (200; 13.54433ms)
Feb  6 13:30:49.380: INFO: (10) /api/v1/namespaces/proxy-3514/pods/https:proxy-service-xk4z6-pbgkb:443/proxy/: test (200; 8.014093ms)
Feb  6 13:30:49.392: INFO: (11) /api/v1/namespaces/proxy-3514/pods/http:proxy-service-xk4z6-pbgkb:160/proxy/: foo (200; 8.666685ms)
Feb  6 13:30:49.393: INFO: (11) /api/v1/namespaces/proxy-3514/pods/https:proxy-service-xk4z6-pbgkb:460/proxy/: tls baz (200; 9.049761ms)
Feb  6 13:30:49.393: INFO: (11) /api/v1/namespaces/proxy-3514/pods/proxy-service-xk4z6-pbgkb:160/proxy/: foo (200; 8.966242ms)
Feb  6 13:30:49.393: INFO: (11) /api/v1/namespaces/proxy-3514/services/proxy-service-xk4z6:portname2/proxy/: bar (200; 9.067176ms)
Feb  6 13:30:49.393: INFO: (11) /api/v1/namespaces/proxy-3514/pods/https:proxy-service-xk4z6-pbgkb:443/proxy/: test<... (200; 8.999836ms)
Feb  6 13:30:49.393: INFO: (11) /api/v1/namespaces/proxy-3514/pods/http:proxy-service-xk4z6-pbgkb:1080/proxy/: ... (200; 9.038192ms)
Feb  6 13:30:49.394: INFO: (11) /api/v1/namespaces/proxy-3514/pods/proxy-service-xk4z6-pbgkb:162/proxy/: bar (200; 10.407171ms)
Feb  6 13:30:49.396: INFO: (11) /api/v1/namespaces/proxy-3514/services/proxy-service-xk4z6:portname1/proxy/: foo (200; 11.848179ms)
Feb  6 13:30:49.396: INFO: (11) /api/v1/namespaces/proxy-3514/services/https:proxy-service-xk4z6:tlsportname2/proxy/: tls qux (200; 12.244085ms)
Feb  6 13:30:49.398: INFO: (11) /api/v1/namespaces/proxy-3514/services/https:proxy-service-xk4z6:tlsportname1/proxy/: tls baz (200; 14.05422ms)
Feb  6 13:30:49.398: INFO: (11) /api/v1/namespaces/proxy-3514/services/http:proxy-service-xk4z6:portname1/proxy/: foo (200; 14.091112ms)
Feb  6 13:30:49.398: INFO: (11) /api/v1/namespaces/proxy-3514/services/http:proxy-service-xk4z6:portname2/proxy/: bar (200; 14.159739ms)
Feb  6 13:30:49.399: INFO: (11) /api/v1/namespaces/proxy-3514/pods/https:proxy-service-xk4z6-pbgkb:462/proxy/: tls qux (200; 15.138443ms)
Feb  6 13:30:49.406: INFO: (12) /api/v1/namespaces/proxy-3514/pods/http:proxy-service-xk4z6-pbgkb:162/proxy/: bar (200; 7.475032ms)
Feb  6 13:30:49.407: INFO: (12) /api/v1/namespaces/proxy-3514/services/proxy-service-xk4z6:portname1/proxy/: foo (200; 7.756571ms)
Feb  6 13:30:49.407: INFO: (12) /api/v1/namespaces/proxy-3514/pods/https:proxy-service-xk4z6-pbgkb:460/proxy/: tls baz (200; 7.865184ms)
Feb  6 13:30:49.407: INFO: (12) /api/v1/namespaces/proxy-3514/services/proxy-service-xk4z6:portname2/proxy/: bar (200; 7.791031ms)
Feb  6 13:30:49.407: INFO: (12) /api/v1/namespaces/proxy-3514/services/https:proxy-service-xk4z6:tlsportname1/proxy/: tls baz (200; 7.837056ms)
Feb  6 13:30:49.408: INFO: (12) /api/v1/namespaces/proxy-3514/services/https:proxy-service-xk4z6:tlsportname2/proxy/: tls qux (200; 8.881046ms)
Feb  6 13:30:49.408: INFO: (12) /api/v1/namespaces/proxy-3514/pods/proxy-service-xk4z6-pbgkb:1080/proxy/: test<... (200; 8.96651ms)
Feb  6 13:30:49.408: INFO: (12) /api/v1/namespaces/proxy-3514/pods/https:proxy-service-xk4z6-pbgkb:443/proxy/: test (200; 9.642152ms)
Feb  6 13:30:49.409: INFO: (12) /api/v1/namespaces/proxy-3514/pods/proxy-service-xk4z6-pbgkb:160/proxy/: foo (200; 9.77663ms)
Feb  6 13:30:49.409: INFO: (12) /api/v1/namespaces/proxy-3514/pods/http:proxy-service-xk4z6-pbgkb:1080/proxy/: ... (200; 9.844504ms)
Feb  6 13:30:49.410: INFO: (12) /api/v1/namespaces/proxy-3514/pods/proxy-service-xk4z6-pbgkb:162/proxy/: bar (200; 11.097879ms)
Feb  6 13:30:49.411: INFO: (12) /api/v1/namespaces/proxy-3514/services/http:proxy-service-xk4z6:portname1/proxy/: foo (200; 11.9999ms)
Feb  6 13:30:49.413: INFO: (12) /api/v1/namespaces/proxy-3514/services/http:proxy-service-xk4z6:portname2/proxy/: bar (200; 13.955008ms)
Feb  6 13:30:49.423: INFO: (13) /api/v1/namespaces/proxy-3514/pods/proxy-service-xk4z6-pbgkb:1080/proxy/: test<... (200; 9.547692ms)
Feb  6 13:30:49.423: INFO: (13) /api/v1/namespaces/proxy-3514/pods/http:proxy-service-xk4z6-pbgkb:160/proxy/: foo (200; 9.809484ms)
Feb  6 13:30:49.423: INFO: (13) /api/v1/namespaces/proxy-3514/pods/https:proxy-service-xk4z6-pbgkb:462/proxy/: tls qux (200; 10.339185ms)
Feb  6 13:30:49.423: INFO: (13) /api/v1/namespaces/proxy-3514/pods/https:proxy-service-xk4z6-pbgkb:443/proxy/: ... (200; 12.374807ms)
Feb  6 13:30:49.425: INFO: (13) /api/v1/namespaces/proxy-3514/pods/proxy-service-xk4z6-pbgkb:162/proxy/: bar (200; 12.305518ms)
Feb  6 13:30:49.425: INFO: (13) /api/v1/namespaces/proxy-3514/services/proxy-service-xk4z6:portname1/proxy/: foo (200; 12.328973ms)
Feb  6 13:30:49.426: INFO: (13) /api/v1/namespaces/proxy-3514/services/http:proxy-service-xk4z6:portname2/proxy/: bar (200; 12.514069ms)
Feb  6 13:30:49.426: INFO: (13) /api/v1/namespaces/proxy-3514/pods/proxy-service-xk4z6-pbgkb/proxy/: test (200; 12.900773ms)
Feb  6 13:30:49.426: INFO: (13) /api/v1/namespaces/proxy-3514/pods/https:proxy-service-xk4z6-pbgkb:460/proxy/: tls baz (200; 12.778034ms)
Feb  6 13:30:49.429: INFO: (14) /api/v1/namespaces/proxy-3514/pods/https:proxy-service-xk4z6-pbgkb:460/proxy/: tls baz (200; 3.401057ms)
Feb  6 13:30:49.431: INFO: (14) /api/v1/namespaces/proxy-3514/pods/http:proxy-service-xk4z6-pbgkb:162/proxy/: bar (200; 4.891489ms)
Feb  6 13:30:49.431: INFO: (14) /api/v1/namespaces/proxy-3514/pods/proxy-service-xk4z6-pbgkb:162/proxy/: bar (200; 4.862688ms)
Feb  6 13:30:49.431: INFO: (14) /api/v1/namespaces/proxy-3514/pods/proxy-service-xk4z6-pbgkb:1080/proxy/: test<... (200; 5.190719ms)
Feb  6 13:30:49.431: INFO: (14) /api/v1/namespaces/proxy-3514/pods/https:proxy-service-xk4z6-pbgkb:443/proxy/: ... (200; 6.356828ms)
Feb  6 13:30:49.433: INFO: (14) /api/v1/namespaces/proxy-3514/pods/proxy-service-xk4z6-pbgkb:160/proxy/: foo (200; 6.672294ms)
Feb  6 13:30:49.433: INFO: (14) /api/v1/namespaces/proxy-3514/services/https:proxy-service-xk4z6:tlsportname1/proxy/: tls baz (200; 6.833746ms)
Feb  6 13:30:49.433: INFO: (14) /api/v1/namespaces/proxy-3514/pods/proxy-service-xk4z6-pbgkb/proxy/: test (200; 6.871229ms)
Feb  6 13:30:49.433: INFO: (14) /api/v1/namespaces/proxy-3514/pods/https:proxy-service-xk4z6-pbgkb:462/proxy/: tls qux (200; 6.904738ms)
Feb  6 13:30:49.433: INFO: (14) /api/v1/namespaces/proxy-3514/services/http:proxy-service-xk4z6:portname1/proxy/: foo (200; 7.085324ms)
Feb  6 13:30:49.434: INFO: (14) /api/v1/namespaces/proxy-3514/services/http:proxy-service-xk4z6:portname2/proxy/: bar (200; 7.622068ms)
Feb  6 13:30:49.434: INFO: (14) /api/v1/namespaces/proxy-3514/services/https:proxy-service-xk4z6:tlsportname2/proxy/: tls qux (200; 8.471023ms)
Feb  6 13:30:49.436: INFO: (14) /api/v1/namespaces/proxy-3514/pods/http:proxy-service-xk4z6-pbgkb:160/proxy/: foo (200; 9.80797ms)
Feb  6 13:30:49.436: INFO: (14) /api/v1/namespaces/proxy-3514/services/proxy-service-xk4z6:portname1/proxy/: foo (200; 10.143386ms)
Feb  6 13:30:49.446: INFO: (15) /api/v1/namespaces/proxy-3514/pods/proxy-service-xk4z6-pbgkb:1080/proxy/: test<... (200; 9.473307ms)
Feb  6 13:30:49.446: INFO: (15) /api/v1/namespaces/proxy-3514/pods/https:proxy-service-xk4z6-pbgkb:443/proxy/: ... (200; 18.747193ms)
Feb  6 13:30:49.455: INFO: (15) /api/v1/namespaces/proxy-3514/pods/proxy-service-xk4z6-pbgkb/proxy/: test (200; 18.863503ms)
Feb  6 13:30:49.455: INFO: (15) /api/v1/namespaces/proxy-3514/services/https:proxy-service-xk4z6:tlsportname2/proxy/: tls qux (200; 18.796594ms)
Feb  6 13:30:49.455: INFO: (15) /api/v1/namespaces/proxy-3514/services/http:proxy-service-xk4z6:portname2/proxy/: bar (200; 19.133016ms)
Feb  6 13:30:49.455: INFO: (15) /api/v1/namespaces/proxy-3514/pods/https:proxy-service-xk4z6-pbgkb:462/proxy/: tls qux (200; 19.08348ms)
Feb  6 13:30:49.456: INFO: (15) /api/v1/namespaces/proxy-3514/pods/proxy-service-xk4z6-pbgkb:160/proxy/: foo (200; 19.526851ms)
Feb  6 13:30:49.456: INFO: (15) /api/v1/namespaces/proxy-3514/services/https:proxy-service-xk4z6:tlsportname1/proxy/: tls baz (200; 19.491406ms)
Feb  6 13:30:49.456: INFO: (15) /api/v1/namespaces/proxy-3514/services/proxy-service-xk4z6:portname2/proxy/: bar (200; 19.514936ms)
Feb  6 13:30:49.456: INFO: (15) /api/v1/namespaces/proxy-3514/services/http:proxy-service-xk4z6:portname1/proxy/: foo (200; 19.541084ms)
Feb  6 13:30:49.456: INFO: (15) /api/v1/namespaces/proxy-3514/pods/https:proxy-service-xk4z6-pbgkb:460/proxy/: tls baz (200; 19.621745ms)
Feb  6 13:30:49.464: INFO: (16) /api/v1/namespaces/proxy-3514/services/proxy-service-xk4z6:portname2/proxy/: bar (200; 7.914144ms)
Feb  6 13:30:49.464: INFO: (16) /api/v1/namespaces/proxy-3514/pods/proxy-service-xk4z6-pbgkb:1080/proxy/: test<... (200; 8.106602ms)
Feb  6 13:30:49.466: INFO: (16) /api/v1/namespaces/proxy-3514/pods/http:proxy-service-xk4z6-pbgkb:1080/proxy/: ... (200; 9.815877ms)
Feb  6 13:30:49.466: INFO: (16) /api/v1/namespaces/proxy-3514/pods/https:proxy-service-xk4z6-pbgkb:443/proxy/: test (200; 10.349693ms)
Feb  6 13:30:49.467: INFO: (16) /api/v1/namespaces/proxy-3514/pods/https:proxy-service-xk4z6-pbgkb:462/proxy/: tls qux (200; 10.362755ms)
Feb  6 13:30:49.467: INFO: (16) /api/v1/namespaces/proxy-3514/pods/proxy-service-xk4z6-pbgkb:162/proxy/: bar (200; 10.413084ms)
Feb  6 13:30:49.467: INFO: (16) /api/v1/namespaces/proxy-3514/pods/http:proxy-service-xk4z6-pbgkb:160/proxy/: foo (200; 10.44989ms)
Feb  6 13:30:49.467: INFO: (16) /api/v1/namespaces/proxy-3514/pods/http:proxy-service-xk4z6-pbgkb:162/proxy/: bar (200; 10.540894ms)
Feb  6 13:30:49.468: INFO: (16) /api/v1/namespaces/proxy-3514/services/proxy-service-xk4z6:portname1/proxy/: foo (200; 11.870275ms)
Feb  6 13:30:49.468: INFO: (16) /api/v1/namespaces/proxy-3514/services/https:proxy-service-xk4z6:tlsportname2/proxy/: tls qux (200; 12.164633ms)
Feb  6 13:30:49.469: INFO: (16) /api/v1/namespaces/proxy-3514/services/http:proxy-service-xk4z6:portname1/proxy/: foo (200; 12.381692ms)
Feb  6 13:30:49.469: INFO: (16) /api/v1/namespaces/proxy-3514/services/https:proxy-service-xk4z6:tlsportname1/proxy/: tls baz (200; 12.438882ms)
Feb  6 13:30:49.469: INFO: (16) /api/v1/namespaces/proxy-3514/services/http:proxy-service-xk4z6:portname2/proxy/: bar (200; 12.476615ms)
Feb  6 13:30:49.474: INFO: (17) /api/v1/namespaces/proxy-3514/pods/proxy-service-xk4z6-pbgkb:160/proxy/: foo (200; 5.749255ms)
Feb  6 13:30:49.478: INFO: (17) /api/v1/namespaces/proxy-3514/pods/proxy-service-xk4z6-pbgkb:1080/proxy/: test<... (200; 9.422178ms)
Feb  6 13:30:49.478: INFO: (17) /api/v1/namespaces/proxy-3514/pods/http:proxy-service-xk4z6-pbgkb:162/proxy/: bar (200; 9.775199ms)
Feb  6 13:30:49.479: INFO: (17) /api/v1/namespaces/proxy-3514/pods/http:proxy-service-xk4z6-pbgkb:160/proxy/: foo (200; 10.040643ms)
Feb  6 13:30:49.479: INFO: (17) /api/v1/namespaces/proxy-3514/pods/https:proxy-service-xk4z6-pbgkb:443/proxy/: ... (200; 10.982766ms)
Feb  6 13:30:49.480: INFO: (17) /api/v1/namespaces/proxy-3514/pods/https:proxy-service-xk4z6-pbgkb:460/proxy/: tls baz (200; 11.226435ms)
Feb  6 13:30:49.480: INFO: (17) /api/v1/namespaces/proxy-3514/pods/proxy-service-xk4z6-pbgkb:162/proxy/: bar (200; 11.287007ms)
Feb  6 13:30:49.480: INFO: (17) /api/v1/namespaces/proxy-3514/pods/proxy-service-xk4z6-pbgkb/proxy/: test (200; 11.336372ms)
Feb  6 13:30:49.480: INFO: (17) /api/v1/namespaces/proxy-3514/services/http:proxy-service-xk4z6:portname1/proxy/: foo (200; 11.478218ms)
Feb  6 13:30:49.480: INFO: (17) /api/v1/namespaces/proxy-3514/services/proxy-service-xk4z6:portname1/proxy/: foo (200; 11.510252ms)
Feb  6 13:30:49.481: INFO: (17) /api/v1/namespaces/proxy-3514/services/proxy-service-xk4z6:portname2/proxy/: bar (200; 11.848025ms)
Feb  6 13:30:49.481: INFO: (17) /api/v1/namespaces/proxy-3514/services/https:proxy-service-xk4z6:tlsportname2/proxy/: tls qux (200; 12.102798ms)
Feb  6 13:30:49.481: INFO: (17) /api/v1/namespaces/proxy-3514/pods/https:proxy-service-xk4z6-pbgkb:462/proxy/: tls qux (200; 12.17063ms)
Feb  6 13:30:49.482: INFO: (17) /api/v1/namespaces/proxy-3514/services/https:proxy-service-xk4z6:tlsportname1/proxy/: tls baz (200; 13.023493ms)
Feb  6 13:30:49.489: INFO: (18) /api/v1/namespaces/proxy-3514/pods/proxy-service-xk4z6-pbgkb:1080/proxy/: test<... (200; 7.052725ms)
Feb  6 13:30:49.489: INFO: (18) /api/v1/namespaces/proxy-3514/pods/https:proxy-service-xk4z6-pbgkb:443/proxy/: ... (200; 7.77698ms)
Feb  6 13:30:49.490: INFO: (18) /api/v1/namespaces/proxy-3514/pods/proxy-service-xk4z6-pbgkb/proxy/: test (200; 8.066121ms)
Feb  6 13:30:49.490: INFO: (18) /api/v1/namespaces/proxy-3514/services/proxy-service-xk4z6:portname1/proxy/: foo (200; 8.050594ms)
Feb  6 13:30:49.490: INFO: (18) /api/v1/namespaces/proxy-3514/pods/proxy-service-xk4z6-pbgkb:162/proxy/: bar (200; 8.322348ms)
Feb  6 13:30:49.490: INFO: (18) /api/v1/namespaces/proxy-3514/services/http:proxy-service-xk4z6:portname2/proxy/: bar (200; 8.381029ms)
Feb  6 13:30:49.491: INFO: (18) /api/v1/namespaces/proxy-3514/services/http:proxy-service-xk4z6:portname1/proxy/: foo (200; 8.508766ms)
Feb  6 13:30:49.491: INFO: (18) /api/v1/namespaces/proxy-3514/pods/http:proxy-service-xk4z6-pbgkb:160/proxy/: foo (200; 8.572044ms)
Feb  6 13:30:49.491: INFO: (18) /api/v1/namespaces/proxy-3514/pods/http:proxy-service-xk4z6-pbgkb:162/proxy/: bar (200; 8.890699ms)
Feb  6 13:30:49.491: INFO: (18) /api/v1/namespaces/proxy-3514/services/proxy-service-xk4z6:portname2/proxy/: bar (200; 9.040397ms)
Feb  6 13:30:49.491: INFO: (18) /api/v1/namespaces/proxy-3514/pods/https:proxy-service-xk4z6-pbgkb:462/proxy/: tls qux (200; 9.51066ms)
Feb  6 13:30:49.492: INFO: (18) /api/v1/namespaces/proxy-3514/pods/https:proxy-service-xk4z6-pbgkb:460/proxy/: tls baz (200; 9.641304ms)
Feb  6 13:30:49.492: INFO: (18) /api/v1/namespaces/proxy-3514/pods/proxy-service-xk4z6-pbgkb:160/proxy/: foo (200; 9.829219ms)
Feb  6 13:30:49.492: INFO: (18) /api/v1/namespaces/proxy-3514/services/https:proxy-service-xk4z6:tlsportname2/proxy/: tls qux (200; 9.738737ms)
Feb  6 13:30:49.493: INFO: (18) /api/v1/namespaces/proxy-3514/services/https:proxy-service-xk4z6:tlsportname1/proxy/: tls baz (200; 10.778439ms)
Feb  6 13:30:49.502: INFO: (19) /api/v1/namespaces/proxy-3514/pods/proxy-service-xk4z6-pbgkb:1080/proxy/: test<... (200; 9.565348ms)
Feb  6 13:30:49.502: INFO: (19) /api/v1/namespaces/proxy-3514/pods/proxy-service-xk4z6-pbgkb:162/proxy/: bar (200; 9.540653ms)
Feb  6 13:30:49.502: INFO: (19) /api/v1/namespaces/proxy-3514/pods/https:proxy-service-xk4z6-pbgkb:443/proxy/: ... (200; 9.560952ms)
Feb  6 13:30:49.502: INFO: (19) /api/v1/namespaces/proxy-3514/pods/http:proxy-service-xk4z6-pbgkb:160/proxy/: foo (200; 9.593586ms)
Feb  6 13:30:49.502: INFO: (19) /api/v1/namespaces/proxy-3514/pods/http:proxy-service-xk4z6-pbgkb:162/proxy/: bar (200; 9.623262ms)
Feb  6 13:30:49.502: INFO: (19) /api/v1/namespaces/proxy-3514/pods/https:proxy-service-xk4z6-pbgkb:460/proxy/: tls baz (200; 9.651013ms)
Feb  6 13:30:49.503: INFO: (19) /api/v1/namespaces/proxy-3514/pods/proxy-service-xk4z6-pbgkb:160/proxy/: foo (200; 9.663881ms)
Feb  6 13:30:49.503: INFO: (19) /api/v1/namespaces/proxy-3514/pods/https:proxy-service-xk4z6-pbgkb:462/proxy/: tls qux (200; 9.84621ms)
Feb  6 13:30:49.503: INFO: (19) /api/v1/namespaces/proxy-3514/pods/proxy-service-xk4z6-pbgkb/proxy/: test (200; 9.896109ms)
Feb  6 13:30:49.505: INFO: (19) /api/v1/namespaces/proxy-3514/services/proxy-service-xk4z6:portname1/proxy/: foo (200; 11.99431ms)
Feb  6 13:30:49.505: INFO: (19) /api/v1/namespaces/proxy-3514/services/proxy-service-xk4z6:portname2/proxy/: bar (200; 12.001938ms)
Feb  6 13:30:49.505: INFO: (19) /api/v1/namespaces/proxy-3514/services/http:proxy-service-xk4z6:portname1/proxy/: foo (200; 12.146029ms)
Feb  6 13:30:49.505: INFO: (19) /api/v1/namespaces/proxy-3514/services/https:proxy-service-xk4z6:tlsportname1/proxy/: tls baz (200; 12.336621ms)
Feb  6 13:30:49.505: INFO: (19) /api/v1/namespaces/proxy-3514/services/https:proxy-service-xk4z6:tlsportname2/proxy/: tls qux (200; 12.278097ms)
Feb  6 13:30:49.505: INFO: (19) /api/v1/namespaces/proxy-3514/services/http:proxy-service-xk4z6:portname2/proxy/: bar (200; 12.277436ms)
STEP: deleting ReplicationController proxy-service-xk4z6 in namespace proxy-3514, will wait for the garbage collector to delete the pods
Feb  6 13:30:49.563: INFO: Deleting ReplicationController proxy-service-xk4z6 took: 5.545896ms
Feb  6 13:30:49.864: INFO: Terminating ReplicationController proxy-service-xk4z6 pods took: 300.548159ms
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 13:30:55.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-3514" for this suite.
Feb  6 13:31:01.095: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:31:01.206: INFO: namespace proxy-3514 deletion completed in 6.134524502s

• [SLOW TEST:29.244 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy through a service and a pod  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
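
Every one of the 320 URLs above follows the apiserver proxy scheme: /api/v1/namespaces/<ns>/pods/[<scheme>:]<pod>[:<port>]/proxy/<path>, and the same under /services/ with a port name or number in place of the numeric pod port. So "https:proxy-service-xk4z6-pbgkb:443" means "proxy to that pod over HTTPS on port 443", while "proxy-service-xk4z6:portname1" addresses the service by named port. The requests can be replayed by hand through a local kubectl proxy; the namespace and pod names below are the ones from this run, which were destroyed at teardown, so substitute live ones:

kubectl proxy --port=8001 &
BASE=http://127.0.0.1:8001/api/v1/namespaces/proxy-3514
# Pod proxy, plain HTTP on a numbered port:
curl -s "$BASE/pods/proxy-service-xk4z6-pbgkb:160/proxy/"       # -> foo
# Pod proxy over HTTPS; the scheme rides inside the path segment:
curl -s "$BASE/pods/https:proxy-service-xk4z6-pbgkb:443/proxy/"
# Service proxy addressed by port name instead of number:
curl -s "$BASE/services/proxy-service-xk4z6:portname1/proxy/"   # -> foo
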
SSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 13:31:01.206: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  6 13:31:01.327: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Feb  6 13:31:01.342: INFO: Number of nodes with available pods: 0
Feb  6 13:31:01.342: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Feb  6 13:31:01.374: INFO: Number of nodes with available pods: 0
Feb  6 13:31:01.374: INFO: Node iruya-node is running more than one daemon pod
Feb  6 13:31:02.384: INFO: Number of nodes with available pods: 0
Feb  6 13:31:02.384: INFO: Node iruya-node is running more than one daemon pod
Feb  6 13:31:03.381: INFO: Number of nodes with available pods: 0
Feb  6 13:31:03.381: INFO: Node iruya-node is running more than one daemon pod
Feb  6 13:31:04.385: INFO: Number of nodes with available pods: 0
Feb  6 13:31:04.385: INFO: Node iruya-node is running more than one daemon pod
Feb  6 13:31:05.382: INFO: Number of nodes with available pods: 0
Feb  6 13:31:05.382: INFO: Node iruya-node is running more than one daemon pod
Feb  6 13:31:06.381: INFO: Number of nodes with available pods: 0
Feb  6 13:31:06.381: INFO: Node iruya-node is running more than one daemon pod
Feb  6 13:31:07.392: INFO: Number of nodes with available pods: 0
Feb  6 13:31:07.392: INFO: Node iruya-node is running more than one daemon pod
Feb  6 13:31:08.429: INFO: Number of nodes with available pods: 0
Feb  6 13:31:08.429: INFO: Node iruya-node is running more than one daemon pod
Feb  6 13:31:09.382: INFO: Number of nodes with available pods: 1
Feb  6 13:31:09.382: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Feb  6 13:31:09.426: INFO: Number of nodes with available pods: 1
Feb  6 13:31:09.426: INFO: Number of running nodes: 0, number of available pods: 1
Feb  6 13:31:10.432: INFO: Number of nodes with available pods: 0
Feb  6 13:31:10.432: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Feb  6 13:31:10.502: INFO: Number of nodes with available pods: 0
Feb  6 13:31:10.502: INFO: Node iruya-node is running more than one daemon pod
Feb  6 13:31:11.513: INFO: Number of nodes with available pods: 0
Feb  6 13:31:11.513: INFO: Node iruya-node is running more than one daemon pod
Feb  6 13:31:12.515: INFO: Number of nodes with available pods: 0
Feb  6 13:31:12.515: INFO: Node iruya-node is running more than one daemon pod
Feb  6 13:31:14.422: INFO: Number of nodes with available pods: 0
Feb  6 13:31:14.422: INFO: Node iruya-node is running more than one daemon pod
Feb  6 13:31:14.510: INFO: Number of nodes with available pods: 0
Feb  6 13:31:14.510: INFO: Node iruya-node is running more than one daemon pod
Feb  6 13:31:15.511: INFO: Number of nodes with available pods: 0
Feb  6 13:31:15.511: INFO: Node iruya-node is running more than one daemon pod
Feb  6 13:31:16.511: INFO: Number of nodes with available pods: 0
Feb  6 13:31:16.511: INFO: Node iruya-node is running more than one daemon pod
Feb  6 13:31:17.576: INFO: Number of nodes with available pods: 0
Feb  6 13:31:17.576: INFO: Node iruya-node is running more than one daemon pod
Feb  6 13:31:18.512: INFO: Number of nodes with available pods: 0
Feb  6 13:31:18.512: INFO: Node iruya-node is running more than one daemon pod
Feb  6 13:31:19.511: INFO: Number of nodes with available pods: 0
Feb  6 13:31:19.511: INFO: Node iruya-node is running more than one daemon pod
Feb  6 13:31:20.524: INFO: Number of nodes with available pods: 0
Feb  6 13:31:20.524: INFO: Node iruya-node is running more than one daemon pod
Feb  6 13:31:21.513: INFO: Number of nodes with available pods: 0
Feb  6 13:31:21.513: INFO: Node iruya-node is running more than one daemon pod
Feb  6 13:31:22.530: INFO: Number of nodes with available pods: 0
Feb  6 13:31:22.531: INFO: Node iruya-node is running more than one daemon pod
Feb  6 13:31:23.511: INFO: Number of nodes with available pods: 0
Feb  6 13:31:23.512: INFO: Node iruya-node is running more than one daemon pod
Feb  6 13:31:24.522: INFO: Number of nodes with available pods: 0
Feb  6 13:31:24.522: INFO: Node iruya-node is running more than one daemon pod
Feb  6 13:31:25.512: INFO: Number of nodes with available pods: 0
Feb  6 13:31:25.512: INFO: Node iruya-node is running more than one daemon pod
Feb  6 13:31:26.513: INFO: Number of nodes with available pods: 0
Feb  6 13:31:26.513: INFO: Node iruya-node is running more than one daemon pod
Feb  6 13:31:27.514: INFO: Number of nodes with available pods: 0
Feb  6 13:31:27.514: INFO: Node iruya-node is running more than one daemon pod
Feb  6 13:31:28.512: INFO: Number of nodes with available pods: 0
Feb  6 13:31:28.512: INFO: Node iruya-node is running more than one daemon pod
Feb  6 13:31:29.509: INFO: Number of nodes with available pods: 0
Feb  6 13:31:29.509: INFO: Node iruya-node is running more than one daemon pod
Feb  6 13:31:30.514: INFO: Number of nodes with available pods: 0
Feb  6 13:31:30.514: INFO: Node iruya-node is running more than one daemon pod
Feb  6 13:31:31.511: INFO: Number of nodes with available pods: 0
Feb  6 13:31:31.511: INFO: Node iruya-node is running more than one daemon pod
Feb  6 13:31:32.530: INFO: Number of nodes with available pods: 0
Feb  6 13:31:32.530: INFO: Node iruya-node is running more than one daemon pod
Feb  6 13:31:33.547: INFO: Number of nodes with available pods: 0
Feb  6 13:31:33.547: INFO: Node iruya-node is running more than one daemon pod
Feb  6 13:31:34.521: INFO: Number of nodes with available pods: 1
Feb  6 13:31:34.521: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3140, will wait for the garbage collector to delete the pods
Feb  6 13:31:34.641: INFO: Deleting DaemonSet.extensions daemon-set took: 26.720338ms
Feb  6 13:31:34.942: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.317838ms
Feb  6 13:31:41.449: INFO: Number of nodes with available pods: 0
Feb  6 13:31:41.449: INFO: Number of running nodes: 0, number of available pods: 0
Feb  6 13:31:41.455: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3140/daemonsets","resourceVersion":"23319205"},"items":null}

Feb  6 13:31:41.458: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3140/pods","resourceVersion":"23319205"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 13:31:41.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3140" for this suite.
Feb  6 13:31:47.541: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:31:47.659: INFO: namespace daemonsets-3140 deletion completed in 6.162721828s

• [SLOW TEST:46.453 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
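As a reference for what "complex daemon" means here: the test creates a DaemonSet whose pod template carries a node selector, relabels a node from blue to green to unschedule the daemon pod, then updates the selector and switches the update strategy to RollingUpdate. A minimal sketch of such an object using the k8s.io/api types; the label key, container name, image, and args are assumptions, while the blue/green values mirror the STEP lines above.

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"daemonset-name": "daemon-set"} // illustrative label key
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			// The test flips the strategy to RollingUpdate mid-run.
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
				Type: appsv1.RollingUpdateDaemonSetStrategyType,
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					// Pods land only on nodes labeled color=blue; relabeling a
					// node to green unschedules them until the selector is
					// updated to match, as the STEP lines above show.
					NodeSelector: map[string]string{"color": "blue"},
					Containers: []corev1.Container{{
						Name:  "app", // illustrative
						Image: "docker.io/library/busybox:1.29",
						Args:  []string{"sleep", "3600"},
					}},
				},
			},
		},
	}
	out, err := json.MarshalIndent(ds, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
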
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 13:31:47.660: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Feb  6 13:32:05.910: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb  6 13:32:05.947: INFO: Pod pod-with-poststart-http-hook still exists
Feb  6 13:32:07.948: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb  6 13:32:07.973: INFO: Pod pod-with-poststart-http-hook still exists
Feb  6 13:32:09.948: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb  6 13:32:09.960: INFO: Pod pod-with-poststart-http-hook still exists
Feb  6 13:32:11.948: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb  6 13:32:11.964: INFO: Pod pod-with-poststart-http-hook still exists
Feb  6 13:32:13.948: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb  6 13:32:13.963: INFO: Pod pod-with-poststart-http-hook still exists
Feb  6 13:32:15.948: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb  6 13:32:15.956: INFO: Pod pod-with-poststart-http-hook still exists
Feb  6 13:32:17.948: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb  6 13:32:17.963: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 13:32:17.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-7420" for this suite.
Feb  6 13:32:39.997: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:32:40.099: INFO: namespace container-lifecycle-hook-7420 deletion completed in 22.130877842s

• [SLOW TEST:52.439 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
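For reference, the "check poststart hook" STEP above verifies that the kubelet issued an HTTP GET to the handler pod created in BeforeEach, right after the hooked container started. A minimal sketch of such a pod, assuming the image, path, port, and handler IP (in this suite's v1.15 API the hook handler type is corev1.Handler; newer releases renamed it LifecycleHandler).

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-http-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pod-with-poststart-http-hook",
				Image: "k8s.gcr.io/pause:3.1", // illustrative image
				Lifecycle: &corev1.Lifecycle{
					// Fired right after the container starts; the kubelet GETs
					// the handler pod, which records the request.
					PostStart: &corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/echo?msg=poststart",
							Host: "10.32.0.5", // hypothetical handler pod IP
							Port: intstr.FromInt(8080),
						},
					},
				},
			}},
		},
	}
	out, err := json.MarshalIndent(pod, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
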
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 13:32:40.099: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: executing a command with run --rm and attach with stdin
Feb  6 13:32:40.195: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9473 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Feb  6 13:32:53.492: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0206 13:32:48.389052     851 log.go:172] (0xc000a66160) (0xc0005d01e0) Create stream\nI0206 13:32:48.389218     851 log.go:172] (0xc000a66160) (0xc0005d01e0) Stream added, broadcasting: 1\nI0206 13:32:48.395306     851 log.go:172] (0xc000a66160) Reply frame received for 1\nI0206 13:32:48.395342     851 log.go:172] (0xc000a66160) (0xc000316000) Create stream\nI0206 13:32:48.395350     851 log.go:172] (0xc000a66160) (0xc000316000) Stream added, broadcasting: 3\nI0206 13:32:48.397240     851 log.go:172] (0xc000a66160) Reply frame received for 3\nI0206 13:32:48.397287     851 log.go:172] (0xc000a66160) (0xc0005d0280) Create stream\nI0206 13:32:48.397300     851 log.go:172] (0xc000a66160) (0xc0005d0280) Stream added, broadcasting: 5\nI0206 13:32:48.400925     851 log.go:172] (0xc000a66160) Reply frame received for 5\nI0206 13:32:48.401012     851 log.go:172] (0xc000a66160) (0xc000338000) Create stream\nI0206 13:32:48.401032     851 log.go:172] (0xc000a66160) (0xc000338000) Stream added, broadcasting: 7\nI0206 13:32:48.403080     851 log.go:172] (0xc000a66160) Reply frame received for 7\nI0206 13:32:48.403808     851 log.go:172] (0xc000316000) (3) Writing data frame\nI0206 13:32:48.404045     851 log.go:172] (0xc000316000) (3) Writing data frame\nI0206 13:32:48.412579     851 log.go:172] (0xc000a66160) Data frame received for 5\nI0206 13:32:48.412622     851 log.go:172] (0xc0005d0280) (5) Data frame handling\nI0206 13:32:48.412640     851 log.go:172] (0xc0005d0280) (5) Data frame sent\nI0206 13:32:48.417047     851 log.go:172] (0xc000a66160) Data frame received for 5\nI0206 13:32:48.417076     851 log.go:172] (0xc0005d0280) (5) Data frame handling\nI0206 13:32:48.417094     851 log.go:172] (0xc0005d0280) (5) Data frame sent\nI0206 13:32:49.835181     851 log.go:172] (0xc000a66160) (0xc000316000) Stream removed, broadcasting: 3\nI0206 13:32:49.835328     851 log.go:172] (0xc000a66160) Data frame received for 1\nI0206 13:32:49.835367     851 log.go:172] (0xc0005d01e0) (1) Data frame handling\nI0206 13:32:49.835394     851 log.go:172] (0xc0005d01e0) (1) Data frame sent\nI0206 13:32:49.835458     851 log.go:172] (0xc000a66160) (0xc000338000) Stream removed, broadcasting: 7\nI0206 13:32:49.835534     851 log.go:172] (0xc000a66160) (0xc0005d0280) Stream removed, broadcasting: 5\nI0206 13:32:49.835584     851 log.go:172] (0xc000a66160) (0xc0005d01e0) Stream removed, broadcasting: 1\nI0206 13:32:49.835606     851 log.go:172] (0xc000a66160) Go away received\nI0206 13:32:49.835953     851 log.go:172] (0xc000a66160) (0xc0005d01e0) Stream removed, broadcasting: 1\nI0206 13:32:49.835980     851 log.go:172] (0xc000a66160) (0xc000316000) Stream removed, broadcasting: 3\nI0206 13:32:49.835997     851 log.go:172] (0xc000a66160) (0xc0005d0280) Stream removed, broadcasting: 5\nI0206 13:32:49.836009     851 log.go:172] (0xc000a66160) (0xc000338000) Stream removed, broadcasting: 7\n"
Feb  6 13:32:53.492: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 13:32:55.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9473" for this suite.
Feb  6 13:33:01.556: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:33:01.743: INFO: namespace kubectl-9473 deletion completed in 6.226427143s

• [SLOW TEST:21.644 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
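The --generator=job/v1 invocation logged above amounts to creating a batch/v1 Job like the sketch below, mirroring the flags on the Running line; the attach/--rm plumbing (streaming "abcd1234" in, then deleting the Job) is kubectl client-side behavior, not part of the spec. The Stdin/StdinOnce fields are what permit the single attach.

package main

import (
	"encoding/json"
	"fmt"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	job := &batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-rm-busybox-job"},
		Spec: batchv1.JobSpec{
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					RestartPolicy: corev1.RestartPolicyOnFailure, // --restart=OnFailure
					Containers: []corev1.Container{{
						Name:      "e2e-test-rm-busybox-job",
						Image:     "docker.io/library/busybox:1.29",
						Command:   []string{"sh", "-c", "cat && echo 'stdin closed'"},
						Stdin:     true, // lets kubectl attach stream stdin in
						StdinOnce: true, // close stdin after the first attach
					}},
				},
			},
		},
	}
	out, err := json.MarshalIndent(job, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
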
S
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 13:33:01.743: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 13:33:09.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2772" for this suite.
Feb  6 13:34:01.894: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:34:02.046: INFO: namespace kubelet-test-2772 deletion completed in 52.178684681s

• [SLOW TEST:60.303 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
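The pod this test schedules runs busybox with readOnlyRootFilesystem set in the container security context, so any write to the root filesystem fails. A minimal sketch; the pod name and command string are assumptions, not the suite's fixtures.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	readOnly := true
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-readonly-fs"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "busybox-readonly-fs",
				Image: "docker.io/library/busybox:1.29",
				// The write to / should fail; the sleep just keeps the
				// container alive long enough to inspect it.
				Command: []string{"sh", "-c", "echo test > /file; sleep 240"},
				SecurityContext: &corev1.SecurityContext{
					ReadOnlyRootFilesystem: &readOnly,
				},
			}},
		},
	}
	out, err := json.MarshalIndent(pod, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
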
SSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 13:34:02.047: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-7b09ab1e-f449-417f-8a20-278bb84bfc1e
STEP: Creating a pod to test consume configMaps
Feb  6 13:34:02.136: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c4cf0e89-2a3e-4bae-8fb4-6986f7914aab" in namespace "projected-9267" to be "success or failure"
Feb  6 13:34:02.140: INFO: Pod "pod-projected-configmaps-c4cf0e89-2a3e-4bae-8fb4-6986f7914aab": Phase="Pending", Reason="", readiness=false. Elapsed: 4.463199ms
Feb  6 13:34:04.155: INFO: Pod "pod-projected-configmaps-c4cf0e89-2a3e-4bae-8fb4-6986f7914aab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019354914s
Feb  6 13:34:06.161: INFO: Pod "pod-projected-configmaps-c4cf0e89-2a3e-4bae-8fb4-6986f7914aab": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024948169s
Feb  6 13:34:08.169: INFO: Pod "pod-projected-configmaps-c4cf0e89-2a3e-4bae-8fb4-6986f7914aab": Phase="Pending", Reason="", readiness=false. Elapsed: 6.033099866s
Feb  6 13:34:10.187: INFO: Pod "pod-projected-configmaps-c4cf0e89-2a3e-4bae-8fb4-6986f7914aab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.051605071s
STEP: Saw pod success
Feb  6 13:34:10.188: INFO: Pod "pod-projected-configmaps-c4cf0e89-2a3e-4bae-8fb4-6986f7914aab" satisfied condition "success or failure"
Feb  6 13:34:10.193: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-c4cf0e89-2a3e-4bae-8fb4-6986f7914aab container projected-configmap-volume-test: 
STEP: delete the pod
Feb  6 13:34:10.270: INFO: Waiting for pod pod-projected-configmaps-c4cf0e89-2a3e-4bae-8fb4-6986f7914aab to disappear
Feb  6 13:34:10.278: INFO: Pod pod-projected-configmaps-c4cf0e89-2a3e-4bae-8fb4-6986f7914aab no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 13:34:10.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9267" for this suite.
Feb  6 13:34:16.345: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:34:16.500: INFO: namespace projected-9267 deletion completed in 6.217605432s

• [SLOW TEST:14.453 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
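The test name's "mappings and Item mode" refers to KeyToPath entries in a projected ConfigMap volume source: a key is remapped to a custom path and given an explicit file mode. A sketch of such a pod, assuming the key, path, mode value, and image (the suite's own fixtures use the generated UID-suffixed names shown in the log above, and its own mount-test image).

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0400) // the "Item mode" the test name refers to
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps"}, // UID suffix omitted
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{
									Name: "projected-configmap-test-volume-map", // UID suffix omitted
								},
								// The "mapping": key data-1 surfaces at path/to/data-2.
								Items: []corev1.KeyToPath{{
									Key:  "data-1",         // illustrative key
									Path: "path/to/data-2", // illustrative path
									Mode: &mode,
								}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "projected-configmap-volume-test",
				Image: "docker.io/library/busybox:1.29", // illustrative image
				Command: []string{"sh", "-c",
					"cat /etc/projected-configmap-volume/path/to/data-2"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-configmap-volume",
					MountPath: "/etc/projected-configmap-volume",
					ReadOnly:  true,
				}},
			}},
		},
	}
	out, err := json.MarshalIndent(pod, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
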
SSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 13:34:16.500: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Feb  6 13:34:34.849: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  6 13:34:34.858: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  6 13:34:36.859: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  6 13:34:36.891: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  6 13:34:38.859: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  6 13:34:38.872: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  6 13:34:40.859: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  6 13:34:40.875: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  6 13:34:42.859: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  6 13:34:42.873: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  6 13:34:44.859: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  6 13:34:44.876: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  6 13:34:46.859: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  6 13:34:46.870: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  6 13:34:48.859: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  6 13:34:48.869: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  6 13:34:50.859: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  6 13:34:50.898: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  6 13:34:52.859: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  6 13:34:52.890: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  6 13:34:54.859: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  6 13:34:54.903: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  6 13:34:56.859: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  6 13:34:56.876: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  6 13:34:58.859: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  6 13:34:58.898: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  6 13:35:00.859: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  6 13:35:00.871: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  6 13:35:02.859: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  6 13:35:02.877: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  6 13:35:04.859: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  6 13:35:04.870: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  6 13:35:06.859: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  6 13:35:06.872: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 13:35:06.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-1803" for this suite.
Feb  6 13:35:28.965: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:35:29.078: INFO: namespace container-lifecycle-hook-1803 deletion completed in 22.135751885s

• [SLOW TEST:72.578 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
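This is the mirror image of the postStart test earlier: the hook is a preStop exec, run inside the container once deletion begins and before the container is killed, which is why the pod lingers through the polling above until the hook completes. A minimal sketch; the image, sleep duration, and the handler address in the command are assumptions (corev1.Handler is again the v1.15 type name).

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-exec-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pod-with-prestop-exec-hook",
				Image: "docker.io/library/busybox:1.29", // illustrative image
				Args:  []string{"sleep", "600"},
				Lifecycle: &corev1.Lifecycle{
					// Runs inside the container when deletion starts, before
					// SIGTERM; the "check prestop hook" STEP then verifies the
					// handler pod saw the request.
					PreStop: &corev1.Handler{
						Exec: &corev1.ExecAction{
							// Hypothetical handler-pod address.
							Command: []string{"wget", "-qO-", "http://10.32.0.5:8080/echo?msg=prestop"},
						},
					},
				},
			}},
		},
	}
	out, err := json.MarshalIndent(pod, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
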
SSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 13:35:29.078: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating replication controller svc-latency-rc in namespace svc-latency-8484
I0206 13:35:29.136732       8 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-8484, replica count: 1
I0206 13:35:30.187287       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0206 13:35:31.187615       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0206 13:35:32.187946       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0206 13:35:33.188317       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0206 13:35:34.188583       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0206 13:35:35.188975       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0206 13:35:36.189275       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0206 13:35:37.189621       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb  6 13:35:37.349: INFO: Created: latency-svc-zncmw
Feb  6 13:35:37.420: INFO: Got endpoints: latency-svc-zncmw [130.840524ms]
Feb  6 13:35:37.514: INFO: Created: latency-svc-jc2gk
Feb  6 13:35:37.639: INFO: Created: latency-svc-xqdxq
Feb  6 13:35:37.650: INFO: Got endpoints: latency-svc-jc2gk [229.261845ms]
Feb  6 13:35:37.684: INFO: Got endpoints: latency-svc-xqdxq [263.550011ms]
Feb  6 13:35:37.828: INFO: Created: latency-svc-zd995
Feb  6 13:35:37.834: INFO: Got endpoints: latency-svc-zd995 [411.820341ms]
Feb  6 13:35:37.874: INFO: Created: latency-svc-d4pz4
Feb  6 13:35:37.890: INFO: Got endpoints: latency-svc-d4pz4 [468.496787ms]
Feb  6 13:35:37.923: INFO: Created: latency-svc-kcqtz
Feb  6 13:35:38.031: INFO: Got endpoints: latency-svc-kcqtz [610.031903ms]
Feb  6 13:35:38.039: INFO: Created: latency-svc-rz6pt
Feb  6 13:35:38.062: INFO: Got endpoints: latency-svc-rz6pt [641.089349ms]
Feb  6 13:35:38.101: INFO: Created: latency-svc-6knlg
Feb  6 13:35:38.101: INFO: Got endpoints: latency-svc-6knlg [679.360839ms]
Feb  6 13:35:38.194: INFO: Created: latency-svc-5hkrh
Feb  6 13:35:38.209: INFO: Got endpoints: latency-svc-5hkrh [787.401412ms]
Feb  6 13:35:38.238: INFO: Created: latency-svc-v6559
Feb  6 13:35:38.246: INFO: Got endpoints: latency-svc-v6559 [824.245544ms]
Feb  6 13:35:38.279: INFO: Created: latency-svc-sqn2k
Feb  6 13:35:38.283: INFO: Got endpoints: latency-svc-sqn2k [861.996086ms]
Feb  6 13:35:38.407: INFO: Created: latency-svc-ldr7c
Feb  6 13:35:38.429: INFO: Got endpoints: latency-svc-ldr7c [1.006987108s]
Feb  6 13:35:38.512: INFO: Created: latency-svc-6kbms
Feb  6 13:35:38.564: INFO: Got endpoints: latency-svc-6kbms [1.142894185s]
Feb  6 13:35:38.602: INFO: Created: latency-svc-bjcj5
Feb  6 13:35:38.613: INFO: Got endpoints: latency-svc-bjcj5 [1.190948884s]
Feb  6 13:35:38.662: INFO: Created: latency-svc-stbjd
Feb  6 13:35:38.736: INFO: Got endpoints: latency-svc-stbjd [1.314307009s]
Feb  6 13:35:38.772: INFO: Created: latency-svc-4ttrq
Feb  6 13:35:38.774: INFO: Got endpoints: latency-svc-4ttrq [1.352670076s]
Feb  6 13:35:38.962: INFO: Created: latency-svc-74bjm
Feb  6 13:35:38.973: INFO: Got endpoints: latency-svc-74bjm [1.322761342s]
Feb  6 13:35:39.019: INFO: Created: latency-svc-gpsrt
Feb  6 13:35:39.029: INFO: Got endpoints: latency-svc-gpsrt [1.34525304s]
Feb  6 13:35:39.181: INFO: Created: latency-svc-2vdhk
Feb  6 13:35:39.186: INFO: Got endpoints: latency-svc-2vdhk [212.327771ms]
Feb  6 13:35:39.233: INFO: Created: latency-svc-4q45g
Feb  6 13:35:39.241: INFO: Got endpoints: latency-svc-4q45g [1.406748194s]
Feb  6 13:35:39.349: INFO: Created: latency-svc-w4qkc
Feb  6 13:35:39.357: INFO: Got endpoints: latency-svc-w4qkc [1.466948241s]
Feb  6 13:35:39.407: INFO: Created: latency-svc-7rwjl
Feb  6 13:35:39.420: INFO: Got endpoints: latency-svc-7rwjl [1.388523779s]
Feb  6 13:35:39.554: INFO: Created: latency-svc-29hqh
Feb  6 13:35:39.560: INFO: Got endpoints: latency-svc-29hqh [1.497834679s]
Feb  6 13:35:39.596: INFO: Created: latency-svc-6qn96
Feb  6 13:35:39.608: INFO: Got endpoints: latency-svc-6qn96 [1.506963468s]
Feb  6 13:35:39.771: INFO: Created: latency-svc-cxxns
Feb  6 13:35:39.792: INFO: Got endpoints: latency-svc-cxxns [1.582765485s]
Feb  6 13:35:39.937: INFO: Created: latency-svc-58r4b
Feb  6 13:35:39.950: INFO: Got endpoints: latency-svc-58r4b [1.704292739s]
Feb  6 13:35:40.014: INFO: Created: latency-svc-5x46q
Feb  6 13:35:40.156: INFO: Got endpoints: latency-svc-5x46q [1.872694383s]
Feb  6 13:35:40.217: INFO: Created: latency-svc-mxnpr
Feb  6 13:35:40.227: INFO: Got endpoints: latency-svc-mxnpr [1.79813937s]
Feb  6 13:35:40.362: INFO: Created: latency-svc-prvls
Feb  6 13:35:40.376: INFO: Got endpoints: latency-svc-prvls [1.811852496s]
Feb  6 13:35:40.447: INFO: Created: latency-svc-92mqc
Feb  6 13:35:40.584: INFO: Got endpoints: latency-svc-92mqc [1.970770866s]
Feb  6 13:35:40.654: INFO: Created: latency-svc-d9lhs
Feb  6 13:35:40.655: INFO: Got endpoints: latency-svc-d9lhs [1.919184122s]
Feb  6 13:35:40.751: INFO: Created: latency-svc-nsh2l
Feb  6 13:35:40.763: INFO: Got endpoints: latency-svc-nsh2l [1.988607646s]
Feb  6 13:35:40.830: INFO: Created: latency-svc-79b5z
Feb  6 13:35:40.971: INFO: Got endpoints: latency-svc-79b5z [1.941170405s]
Feb  6 13:35:41.003: INFO: Created: latency-svc-j4v5t
Feb  6 13:35:41.142: INFO: Got endpoints: latency-svc-j4v5t [1.95584937s]
Feb  6 13:35:41.144: INFO: Created: latency-svc-t6j8g
Feb  6 13:35:41.153: INFO: Got endpoints: latency-svc-t6j8g [1.911885133s]
Feb  6 13:35:41.214: INFO: Created: latency-svc-l44l9
Feb  6 13:35:41.221: INFO: Got endpoints: latency-svc-l44l9 [1.864121285s]
Feb  6 13:35:41.346: INFO: Created: latency-svc-2tljr
Feb  6 13:35:41.347: INFO: Got endpoints: latency-svc-2tljr [1.926751907s]
Feb  6 13:35:41.377: INFO: Created: latency-svc-b8hfk
Feb  6 13:35:41.405: INFO: Got endpoints: latency-svc-b8hfk [1.844909906s]
Feb  6 13:35:41.423: INFO: Created: latency-svc-6lt6r
Feb  6 13:35:41.474: INFO: Got endpoints: latency-svc-6lt6r [1.866608881s]
Feb  6 13:35:41.489: INFO: Created: latency-svc-fb624
Feb  6 13:35:41.497: INFO: Got endpoints: latency-svc-fb624 [1.705086038s]
Feb  6 13:35:41.526: INFO: Created: latency-svc-sjzbl
Feb  6 13:35:41.531: INFO: Got endpoints: latency-svc-sjzbl [1.580988603s]
Feb  6 13:35:41.560: INFO: Created: latency-svc-sx5gc
Feb  6 13:35:41.641: INFO: Got endpoints: latency-svc-sx5gc [1.484100901s]
Feb  6 13:35:41.649: INFO: Created: latency-svc-9dmzp
Feb  6 13:35:41.654: INFO: Got endpoints: latency-svc-9dmzp [1.426625612s]
Feb  6 13:35:41.688: INFO: Created: latency-svc-qk5kb
Feb  6 13:35:41.727: INFO: Created: latency-svc-r4n5n
Feb  6 13:35:41.735: INFO: Got endpoints: latency-svc-qk5kb [1.358819301s]
Feb  6 13:35:41.757: INFO: Got endpoints: latency-svc-r4n5n [1.172946469s]
Feb  6 13:35:41.885: INFO: Created: latency-svc-jgglj
Feb  6 13:35:41.888: INFO: Got endpoints: latency-svc-jgglj [1.232666216s]
Feb  6 13:35:41.937: INFO: Created: latency-svc-flds2
Feb  6 13:35:41.946: INFO: Got endpoints: latency-svc-flds2 [1.183332215s]
Feb  6 13:35:42.010: INFO: Created: latency-svc-28zqk
Feb  6 13:35:42.033: INFO: Got endpoints: latency-svc-28zqk [1.062120963s]
Feb  6 13:35:42.056: INFO: Created: latency-svc-7nhvf
Feb  6 13:35:42.068: INFO: Got endpoints: latency-svc-7nhvf [926.53878ms]
Feb  6 13:35:42.152: INFO: Created: latency-svc-zs5s5
Feb  6 13:35:42.163: INFO: Got endpoints: latency-svc-zs5s5 [1.010210364s]
Feb  6 13:35:42.201: INFO: Created: latency-svc-mxp6z
Feb  6 13:35:42.205: INFO: Got endpoints: latency-svc-mxp6z [983.361753ms]
Feb  6 13:35:42.234: INFO: Created: latency-svc-g7l5h
Feb  6 13:35:42.298: INFO: Got endpoints: latency-svc-g7l5h [951.236805ms]
Feb  6 13:35:42.323: INFO: Created: latency-svc-qczdm
Feb  6 13:35:42.333: INFO: Got endpoints: latency-svc-qczdm [927.819346ms]
Feb  6 13:35:42.376: INFO: Created: latency-svc-vkflh
Feb  6 13:35:42.393: INFO: Got endpoints: latency-svc-vkflh [917.869386ms]
Feb  6 13:35:42.489: INFO: Created: latency-svc-pl4qc
Feb  6 13:35:42.521: INFO: Created: latency-svc-mwmhf
Feb  6 13:35:42.521: INFO: Got endpoints: latency-svc-pl4qc [1.024195982s]
Feb  6 13:35:42.532: INFO: Got endpoints: latency-svc-mwmhf [1.000231209s]
Feb  6 13:35:42.630: INFO: Created: latency-svc-7rqjw
Feb  6 13:35:42.630: INFO: Got endpoints: latency-svc-7rqjw [989.831689ms]
Feb  6 13:35:42.667: INFO: Created: latency-svc-xxszh
Feb  6 13:35:42.681: INFO: Got endpoints: latency-svc-xxszh [1.027304071s]
Feb  6 13:35:42.768: INFO: Created: latency-svc-kzllm
Feb  6 13:35:42.775: INFO: Got endpoints: latency-svc-kzllm [1.039615678s]
Feb  6 13:35:42.810: INFO: Created: latency-svc-qnnn9
Feb  6 13:35:42.829: INFO: Got endpoints: latency-svc-qnnn9 [1.071316161s]
Feb  6 13:35:42.912: INFO: Created: latency-svc-gqplv
Feb  6 13:35:42.925: INFO: Got endpoints: latency-svc-gqplv [1.03634664s]
Feb  6 13:35:42.959: INFO: Created: latency-svc-tgbch
Feb  6 13:35:42.970: INFO: Got endpoints: latency-svc-tgbch [1.022969009s]
Feb  6 13:35:43.106: INFO: Created: latency-svc-tjpv6
Feb  6 13:35:43.158: INFO: Got endpoints: latency-svc-tjpv6 [1.124915201s]
Feb  6 13:35:43.199: INFO: Created: latency-svc-5xglq
Feb  6 13:35:43.319: INFO: Got endpoints: latency-svc-5xglq [1.250228918s]
Feb  6 13:35:43.432: INFO: Created: latency-svc-th7kf
Feb  6 13:35:43.538: INFO: Got endpoints: latency-svc-th7kf [1.374636018s]
Feb  6 13:35:43.602: INFO: Created: latency-svc-9wwnw
Feb  6 13:35:43.618: INFO: Got endpoints: latency-svc-9wwnw [1.413054353s]
Feb  6 13:35:43.717: INFO: Created: latency-svc-zxkbm
Feb  6 13:35:43.717: INFO: Got endpoints: latency-svc-zxkbm [1.418240807s]
Feb  6 13:35:43.771: INFO: Created: latency-svc-wzphf
Feb  6 13:35:43.873: INFO: Got endpoints: latency-svc-wzphf [1.540607842s]
Feb  6 13:35:43.874: INFO: Created: latency-svc-tnsrs
Feb  6 13:35:43.938: INFO: Created: latency-svc-cfx54
Feb  6 13:35:43.939: INFO: Got endpoints: latency-svc-tnsrs [1.545776398s]
Feb  6 13:35:43.953: INFO: Got endpoints: latency-svc-cfx54 [1.431206556s]
Feb  6 13:35:44.077: INFO: Created: latency-svc-gsc24
Feb  6 13:35:44.097: INFO: Got endpoints: latency-svc-gsc24 [1.565072844s]
Feb  6 13:35:44.147: INFO: Created: latency-svc-q8nz4
Feb  6 13:35:44.241: INFO: Created: latency-svc-dgcgk
Feb  6 13:35:44.242: INFO: Got endpoints: latency-svc-q8nz4 [1.611663139s]
Feb  6 13:35:44.267: INFO: Got endpoints: latency-svc-dgcgk [1.585698583s]
Feb  6 13:35:44.315: INFO: Created: latency-svc-zbjsm
Feb  6 13:35:44.333: INFO: Got endpoints: latency-svc-zbjsm [1.55812191s]
Feb  6 13:35:44.422: INFO: Created: latency-svc-lbrdx
Feb  6 13:35:44.465: INFO: Got endpoints: latency-svc-lbrdx [1.636328537s]
Feb  6 13:35:44.498: INFO: Created: latency-svc-8xclm
Feb  6 13:35:44.499: INFO: Got endpoints: latency-svc-8xclm [1.573798254s]
Feb  6 13:35:44.595: INFO: Created: latency-svc-fhgzk
Feb  6 13:35:44.645: INFO: Got endpoints: latency-svc-fhgzk [1.675550364s]
Feb  6 13:35:44.656: INFO: Created: latency-svc-rhdbl
Feb  6 13:35:44.741: INFO: Got endpoints: latency-svc-rhdbl [1.582596888s]
Feb  6 13:35:44.786: INFO: Created: latency-svc-5fvz9
Feb  6 13:35:44.824: INFO: Created: latency-svc-2dcnp
Feb  6 13:35:44.824: INFO: Got endpoints: latency-svc-5fvz9 [1.504812236s]
Feb  6 13:35:44.836: INFO: Got endpoints: latency-svc-2dcnp [1.298241985s]
Feb  6 13:35:44.950: INFO: Created: latency-svc-vsc4v
Feb  6 13:35:44.998: INFO: Got endpoints: latency-svc-vsc4v [1.380170513s]
Feb  6 13:35:45.008: INFO: Created: latency-svc-phx9v
Feb  6 13:35:45.023: INFO: Got endpoints: latency-svc-phx9v [1.306839848s]
Feb  6 13:35:45.136: INFO: Created: latency-svc-gwpmr
Feb  6 13:35:45.178: INFO: Got endpoints: latency-svc-gwpmr [1.303656509s]
Feb  6 13:35:45.212: INFO: Created: latency-svc-lxf42
Feb  6 13:35:45.217: INFO: Got endpoints: latency-svc-lxf42 [1.278303501s]
Feb  6 13:35:45.315: INFO: Created: latency-svc-zdl87
Feb  6 13:35:45.326: INFO: Got endpoints: latency-svc-zdl87 [1.372425785s]
Feb  6 13:35:45.573: INFO: Created: latency-svc-7t8s2
Feb  6 13:35:45.619: INFO: Got endpoints: latency-svc-7t8s2 [1.521460418s]
Feb  6 13:35:45.620: INFO: Created: latency-svc-7n6n7
Feb  6 13:35:45.621: INFO: Got endpoints: latency-svc-7n6n7 [1.378962538s]
Feb  6 13:35:45.744: INFO: Created: latency-svc-gttsd
Feb  6 13:35:45.751: INFO: Got endpoints: latency-svc-gttsd [1.483646439s]
Feb  6 13:35:45.803: INFO: Created: latency-svc-rp9m4
Feb  6 13:35:45.808: INFO: Got endpoints: latency-svc-rp9m4 [1.474570522s]
Feb  6 13:35:45.974: INFO: Created: latency-svc-5mhvn
Feb  6 13:35:45.996: INFO: Got endpoints: latency-svc-5mhvn [1.530447076s]
Feb  6 13:35:46.040: INFO: Created: latency-svc-zxj92
Feb  6 13:35:46.049: INFO: Got endpoints: latency-svc-zxj92 [1.550760374s]
Feb  6 13:35:46.163: INFO: Created: latency-svc-jb86b
Feb  6 13:35:46.171: INFO: Got endpoints: latency-svc-jb86b [1.525078618s]
Feb  6 13:35:46.207: INFO: Created: latency-svc-2kdfb
Feb  6 13:35:46.215: INFO: Got endpoints: latency-svc-2kdfb [1.473738103s]
Feb  6 13:35:46.332: INFO: Created: latency-svc-llb6n
Feb  6 13:35:46.340: INFO: Got endpoints: latency-svc-llb6n [1.516021464s]
Feb  6 13:35:46.382: INFO: Created: latency-svc-sd7vh
Feb  6 13:35:46.386: INFO: Got endpoints: latency-svc-sd7vh [1.550087667s]
Feb  6 13:35:46.434: INFO: Created: latency-svc-dskrr
Feb  6 13:35:46.444: INFO: Got endpoints: latency-svc-dskrr [1.445473878s]
Feb  6 13:35:46.583: INFO: Created: latency-svc-dbqxs
Feb  6 13:35:46.602: INFO: Got endpoints: latency-svc-dbqxs [1.578303853s]
Feb  6 13:35:46.763: INFO: Created: latency-svc-9v5sm
Feb  6 13:35:46.772: INFO: Got endpoints: latency-svc-9v5sm [1.593527433s]
Feb  6 13:35:46.821: INFO: Created: latency-svc-95vqz
Feb  6 13:35:46.831: INFO: Got endpoints: latency-svc-95vqz [1.613395765s]
Feb  6 13:35:46.920: INFO: Created: latency-svc-ql6dq
Feb  6 13:35:46.966: INFO: Got endpoints: latency-svc-ql6dq [1.640494135s]
Feb  6 13:35:46.971: INFO: Created: latency-svc-vprpf
Feb  6 13:35:46.977: INFO: Got endpoints: latency-svc-vprpf [1.358598221s]
Feb  6 13:35:47.020: INFO: Created: latency-svc-8njrw
Feb  6 13:35:47.106: INFO: Got endpoints: latency-svc-8njrw [1.484614985s]
Feb  6 13:35:47.178: INFO: Created: latency-svc-5qxns
Feb  6 13:35:47.194: INFO: Got endpoints: latency-svc-5qxns [1.442928701s]
Feb  6 13:35:47.288: INFO: Created: latency-svc-ln5r5
Feb  6 13:35:47.350: INFO: Got endpoints: latency-svc-ln5r5 [1.541903682s]
Feb  6 13:35:47.541: INFO: Created: latency-svc-86w8l
Feb  6 13:35:47.554: INFO: Got endpoints: latency-svc-86w8l [1.558099279s]
Feb  6 13:35:47.624: INFO: Created: latency-svc-qrwhx
Feb  6 13:35:47.624: INFO: Got endpoints: latency-svc-qrwhx [1.574643566s]
Feb  6 13:35:47.724: INFO: Created: latency-svc-jntj6
Feb  6 13:35:47.738: INFO: Got endpoints: latency-svc-jntj6 [1.567028843s]
Feb  6 13:35:47.801: INFO: Created: latency-svc-2hm85
Feb  6 13:35:47.801: INFO: Got endpoints: latency-svc-2hm85 [1.586004587s]
Feb  6 13:35:47.946: INFO: Created: latency-svc-mqjhb
Feb  6 13:35:47.968: INFO: Got endpoints: latency-svc-mqjhb [1.628580313s]
Feb  6 13:35:48.037: INFO: Created: latency-svc-2pldm
Feb  6 13:35:48.118: INFO: Got endpoints: latency-svc-2pldm [1.731954238s]
Feb  6 13:35:48.202: INFO: Created: latency-svc-8wz4j
Feb  6 13:35:48.206: INFO: Got endpoints: latency-svc-8wz4j [1.762166637s]
Feb  6 13:35:48.304: INFO: Created: latency-svc-sdkw7
Feb  6 13:35:48.358: INFO: Got endpoints: latency-svc-sdkw7 [1.755841783s]
Feb  6 13:35:48.370: INFO: Created: latency-svc-4vdct
Feb  6 13:35:48.376: INFO: Got endpoints: latency-svc-4vdct [1.604472283s]
Feb  6 13:35:48.514: INFO: Created: latency-svc-gp2qt
Feb  6 13:35:48.546: INFO: Got endpoints: latency-svc-gp2qt [1.715069894s]
Feb  6 13:35:48.597: INFO: Created: latency-svc-qbqjr
Feb  6 13:35:48.670: INFO: Got endpoints: latency-svc-qbqjr [1.70341264s]
Feb  6 13:35:48.705: INFO: Created: latency-svc-jbptf
Feb  6 13:35:48.724: INFO: Got endpoints: latency-svc-jbptf [1.746237009s]
Feb  6 13:35:48.784: INFO: Created: latency-svc-mvk7r
Feb  6 13:35:48.892: INFO: Got endpoints: latency-svc-mvk7r [1.786117771s]
Feb  6 13:35:48.907: INFO: Created: latency-svc-nvx6f
Feb  6 13:35:48.917: INFO: Got endpoints: latency-svc-nvx6f [1.722709708s]
Feb  6 13:35:48.968: INFO: Created: latency-svc-d9x2z
Feb  6 13:35:48.983: INFO: Got endpoints: latency-svc-d9x2z [1.632795218s]
Feb  6 13:35:49.195: INFO: Created: latency-svc-4c4zp
Feb  6 13:35:49.206: INFO: Got endpoints: latency-svc-4c4zp [1.651496417s]
Feb  6 13:35:49.341: INFO: Created: latency-svc-jlflg
Feb  6 13:35:49.358: INFO: Got endpoints: latency-svc-jlflg [1.73345618s]
Feb  6 13:35:49.409: INFO: Created: latency-svc-jl4w2
Feb  6 13:35:49.419: INFO: Got endpoints: latency-svc-jl4w2 [1.680807173s]
Feb  6 13:35:49.558: INFO: Created: latency-svc-bdqzs
Feb  6 13:35:49.572: INFO: Got endpoints: latency-svc-bdqzs [1.77148601s]
Feb  6 13:35:49.681: INFO: Created: latency-svc-w9xp8
Feb  6 13:35:49.691: INFO: Got endpoints: latency-svc-w9xp8 [1.72277592s]
Feb  6 13:35:49.746: INFO: Created: latency-svc-gfx7v
Feb  6 13:35:49.768: INFO: Got endpoints: latency-svc-gfx7v [1.649369976s]
Feb  6 13:35:49.877: INFO: Created: latency-svc-4xbhg
Feb  6 13:35:49.878: INFO: Got endpoints: latency-svc-4xbhg [1.6718693s]
Feb  6 13:35:50.052: INFO: Created: latency-svc-jvsml
Feb  6 13:35:50.125: INFO: Got endpoints: latency-svc-jvsml [1.766750572s]
Feb  6 13:35:50.130: INFO: Created: latency-svc-cfdrf
Feb  6 13:35:50.134: INFO: Got endpoints: latency-svc-cfdrf [1.758019154s]
Feb  6 13:35:50.296: INFO: Created: latency-svc-9t5n8
Feb  6 13:35:50.342: INFO: Got endpoints: latency-svc-9t5n8 [1.795235599s]
Feb  6 13:35:50.392: INFO: Created: latency-svc-c4snt
Feb  6 13:35:50.486: INFO: Got endpoints: latency-svc-c4snt [1.816199152s]
Feb  6 13:35:50.507: INFO: Created: latency-svc-lpjln
Feb  6 13:35:50.534: INFO: Got endpoints: latency-svc-lpjln [1.810067682s]
Feb  6 13:35:50.669: INFO: Created: latency-svc-2ld2w
Feb  6 13:35:50.676: INFO: Got endpoints: latency-svc-2ld2w [1.784001297s]
Feb  6 13:35:50.723: INFO: Created: latency-svc-7pnp6
Feb  6 13:35:50.724: INFO: Got endpoints: latency-svc-7pnp6 [1.806875526s]
Feb  6 13:35:50.928: INFO: Created: latency-svc-b6ldb
Feb  6 13:35:50.944: INFO: Got endpoints: latency-svc-b6ldb [1.960711498s]
Feb  6 13:35:51.017: INFO: Created: latency-svc-v8fq2
Feb  6 13:35:51.282: INFO: Got endpoints: latency-svc-v8fq2 [2.076123771s]
Feb  6 13:35:51.324: INFO: Created: latency-svc-4jf5j
Feb  6 13:35:51.335: INFO: Got endpoints: latency-svc-4jf5j [1.977295077s]
Feb  6 13:35:51.493: INFO: Created: latency-svc-8gc4j
Feb  6 13:35:51.499: INFO: Got endpoints: latency-svc-8gc4j [2.080486594s]
Feb  6 13:35:51.657: INFO: Created: latency-svc-dttxd
Feb  6 13:35:51.874: INFO: Got endpoints: latency-svc-dttxd [2.300960809s]
Feb  6 13:35:51.894: INFO: Created: latency-svc-bljss
Feb  6 13:35:51.898: INFO: Got endpoints: latency-svc-bljss [2.206784604s]
Feb  6 13:35:52.065: INFO: Created: latency-svc-4bd4x
Feb  6 13:35:52.071: INFO: Got endpoints: latency-svc-4bd4x [2.302890277s]
Feb  6 13:35:52.157: INFO: Created: latency-svc-zvwqz
Feb  6 13:35:52.239: INFO: Got endpoints: latency-svc-zvwqz [2.361059505s]
Feb  6 13:35:52.251: INFO: Created: latency-svc-52mlh
Feb  6 13:35:52.262: INFO: Got endpoints: latency-svc-52mlh [2.136874214s]
Feb  6 13:35:52.438: INFO: Created: latency-svc-qksll
Feb  6 13:35:52.467: INFO: Got endpoints: latency-svc-qksll [2.332444897s]
Feb  6 13:35:52.467: INFO: Created: latency-svc-xmhmw
Feb  6 13:35:52.481: INFO: Got endpoints: latency-svc-xmhmw [2.138839968s]
Feb  6 13:35:52.588: INFO: Created: latency-svc-kpkzd
Feb  6 13:35:52.599: INFO: Got endpoints: latency-svc-kpkzd [2.112023315s]
Feb  6 13:35:52.727: INFO: Created: latency-svc-jshw9
Feb  6 13:35:52.731: INFO: Got endpoints: latency-svc-jshw9 [2.196950333s]
Feb  6 13:35:52.782: INFO: Created: latency-svc-vgcfn
Feb  6 13:35:52.792: INFO: Got endpoints: latency-svc-vgcfn [2.115340607s]
Feb  6 13:35:52.910: INFO: Created: latency-svc-wq2j6
Feb  6 13:35:52.997: INFO: Got endpoints: latency-svc-wq2j6 [2.27289224s]
Feb  6 13:35:52.998: INFO: Created: latency-svc-qcwrl
Feb  6 13:35:53.002: INFO: Got endpoints: latency-svc-qcwrl [2.057378562s]
Feb  6 13:35:53.119: INFO: Created: latency-svc-c4ndf
Feb  6 13:35:53.138: INFO: Got endpoints: latency-svc-c4ndf [1.855224313s]
Feb  6 13:35:53.175: INFO: Created: latency-svc-whnhq
Feb  6 13:35:53.180: INFO: Got endpoints: latency-svc-whnhq [1.845197625s]
Feb  6 13:35:53.292: INFO: Created: latency-svc-smgkz
Feb  6 13:35:53.338: INFO: Got endpoints: latency-svc-smgkz [1.838416551s]
Feb  6 13:35:53.345: INFO: Created: latency-svc-wg62s
Feb  6 13:35:53.349: INFO: Got endpoints: latency-svc-wg62s [1.47491442s]
Feb  6 13:35:53.478: INFO: Created: latency-svc-tckp5
Feb  6 13:35:53.481: INFO: Got endpoints: latency-svc-tckp5 [1.583233756s]
Feb  6 13:35:53.541: INFO: Created: latency-svc-hd62r
Feb  6 13:35:53.542: INFO: Got endpoints: latency-svc-hd62r [1.471274199s]
Feb  6 13:35:53.675: INFO: Created: latency-svc-2p586
Feb  6 13:35:53.697: INFO: Got endpoints: latency-svc-2p586 [1.457420077s]
Feb  6 13:35:53.733: INFO: Created: latency-svc-v98ff
Feb  6 13:35:53.748: INFO: Got endpoints: latency-svc-v98ff [1.485835725s]
Feb  6 13:35:53.894: INFO: Created: latency-svc-dljl2
Feb  6 13:35:53.925: INFO: Got endpoints: latency-svc-dljl2 [1.458323295s]
Feb  6 13:35:53.933: INFO: Created: latency-svc-jxmvb
Feb  6 13:35:53.941: INFO: Got endpoints: latency-svc-jxmvb [1.459759502s]
Feb  6 13:35:54.698: INFO: Created: latency-svc-7mc88
Feb  6 13:35:54.717: INFO: Got endpoints: latency-svc-7mc88 [2.118164334s]
Feb  6 13:35:54.757: INFO: Created: latency-svc-xkn6n
Feb  6 13:35:54.767: INFO: Got endpoints: latency-svc-xkn6n [2.036071742s]
Feb  6 13:35:54.796: INFO: Created: latency-svc-x62xq
Feb  6 13:35:54.899: INFO: Got endpoints: latency-svc-x62xq [2.107250397s]
Feb  6 13:35:54.924: INFO: Created: latency-svc-b2nwc
Feb  6 13:35:54.930: INFO: Got endpoints: latency-svc-b2nwc [1.932527578s]
Feb  6 13:35:55.132: INFO: Created: latency-svc-9zshl
Feb  6 13:35:55.142: INFO: Got endpoints: latency-svc-9zshl [2.140498673s]
Feb  6 13:35:55.407: INFO: Created: latency-svc-9jb57
Feb  6 13:35:55.416: INFO: Got endpoints: latency-svc-9jb57 [2.278520449s]
Feb  6 13:35:55.952: INFO: Created: latency-svc-87kzn
Feb  6 13:35:55.974: INFO: Got endpoints: latency-svc-87kzn [2.793672402s]
Feb  6 13:35:56.257: INFO: Created: latency-svc-qs7l8
Feb  6 13:35:56.262: INFO: Got endpoints: latency-svc-qs7l8 [2.924094516s]
Feb  6 13:35:56.305: INFO: Created: latency-svc-mgrhf
Feb  6 13:35:56.313: INFO: Got endpoints: latency-svc-mgrhf [2.964695515s]
Feb  6 13:35:56.454: INFO: Created: latency-svc-zmtgz
Feb  6 13:35:56.487: INFO: Got endpoints: latency-svc-zmtgz [3.005246933s]
Feb  6 13:35:56.541: INFO: Created: latency-svc-6v7b5
Feb  6 13:35:56.641: INFO: Got endpoints: latency-svc-6v7b5 [3.098941772s]
Feb  6 13:35:56.664: INFO: Created: latency-svc-5rxjf
Feb  6 13:35:56.679: INFO: Got endpoints: latency-svc-5rxjf [2.981763279s]
Feb  6 13:35:56.716: INFO: Created: latency-svc-r8cm9
Feb  6 13:35:56.886: INFO: Got endpoints: latency-svc-r8cm9 [3.137474998s]
Feb  6 13:35:56.905: INFO: Created: latency-svc-tc82s
Feb  6 13:35:56.917: INFO: Got endpoints: latency-svc-tc82s [2.992055124s]
Feb  6 13:35:57.046: INFO: Created: latency-svc-hgw78
Feb  6 13:35:57.076: INFO: Got endpoints: latency-svc-hgw78 [3.135007564s]
Feb  6 13:35:57.139: INFO: Created: latency-svc-pj2db
Feb  6 13:35:57.214: INFO: Got endpoints: latency-svc-pj2db [2.496536995s]
Feb  6 13:35:57.261: INFO: Created: latency-svc-4d2dx
Feb  6 13:35:57.264: INFO: Got endpoints: latency-svc-4d2dx [2.496176553s]
Feb  6 13:35:57.299: INFO: Created: latency-svc-d2mb6
Feb  6 13:35:57.300: INFO: Got endpoints: latency-svc-d2mb6 [2.400244429s]
Feb  6 13:35:57.402: INFO: Created: latency-svc-q96dz
Feb  6 13:35:57.453: INFO: Created: latency-svc-wndtm
Feb  6 13:35:57.461: INFO: Got endpoints: latency-svc-q96dz [2.531692321s]
Feb  6 13:35:57.478: INFO: Got endpoints: latency-svc-wndtm [2.335744602s]
Feb  6 13:35:57.605: INFO: Created: latency-svc-5lfdb
Feb  6 13:35:57.611: INFO: Got endpoints: latency-svc-5lfdb [2.195086154s]
Feb  6 13:35:57.668: INFO: Created: latency-svc-zgdzn
Feb  6 13:35:57.669: INFO: Got endpoints: latency-svc-zgdzn [1.694649629s]
Feb  6 13:35:57.769: INFO: Created: latency-svc-grm5z
Feb  6 13:35:57.770: INFO: Got endpoints: latency-svc-grm5z [1.507393275s]
Feb  6 13:35:57.814: INFO: Created: latency-svc-9mwmd
Feb  6 13:35:57.823: INFO: Got endpoints: latency-svc-9mwmd [1.508863768s]
Feb  6 13:35:57.852: INFO: Created: latency-svc-89f6m
Feb  6 13:35:57.931: INFO: Got endpoints: latency-svc-89f6m [1.443848729s]
Feb  6 13:35:57.962: INFO: Created: latency-svc-vxkhr
Feb  6 13:35:57.971: INFO: Got endpoints: latency-svc-vxkhr [1.328889877s]
Feb  6 13:35:58.017: INFO: Created: latency-svc-qc8hl
Feb  6 13:35:58.167: INFO: Got endpoints: latency-svc-qc8hl [1.488232195s]
Feb  6 13:35:58.248: INFO: Created: latency-svc-4vzx9
Feb  6 13:35:58.254: INFO: Got endpoints: latency-svc-4vzx9 [1.368468436s]
Feb  6 13:35:58.370: INFO: Created: latency-svc-rrvtv
Feb  6 13:35:58.378: INFO: Got endpoints: latency-svc-rrvtv [1.460248585s]
Feb  6 13:35:58.415: INFO: Created: latency-svc-926tg
Feb  6 13:35:58.421: INFO: Got endpoints: latency-svc-926tg [1.345188766s]
Feb  6 13:35:58.572: INFO: Created: latency-svc-6ms7x
Feb  6 13:35:58.594: INFO: Got endpoints: latency-svc-6ms7x [1.379768828s]
Feb  6 13:35:58.669: INFO: Created: latency-svc-9g4xq
Feb  6 13:35:58.734: INFO: Got endpoints: latency-svc-9g4xq [1.470120816s]
Feb  6 13:35:58.764: INFO: Created: latency-svc-98p5h
Feb  6 13:35:58.771: INFO: Got endpoints: latency-svc-98p5h [1.471187835s]
Feb  6 13:35:58.810: INFO: Created: latency-svc-t6zwf
Feb  6 13:35:58.818: INFO: Got endpoints: latency-svc-t6zwf [1.356718289s]
Feb  6 13:35:58.950: INFO: Created: latency-svc-xw87k
Feb  6 13:35:58.958: INFO: Got endpoints: latency-svc-xw87k [1.479232533s]
Feb  6 13:35:59.002: INFO: Created: latency-svc-8lclr
Feb  6 13:35:59.113: INFO: Got endpoints: latency-svc-8lclr [1.501450915s]
Feb  6 13:35:59.187: INFO: Created: latency-svc-sktgv
Feb  6 13:35:59.202: INFO: Got endpoints: latency-svc-sktgv [1.533095364s]
Feb  6 13:35:59.304: INFO: Created: latency-svc-wlfmh
Feb  6 13:35:59.321: INFO: Got endpoints: latency-svc-wlfmh [1.551099458s]
Feb  6 13:35:59.371: INFO: Created: latency-svc-b6xnc
Feb  6 13:35:59.478: INFO: Got endpoints: latency-svc-b6xnc [1.655169036s]
Feb  6 13:35:59.496: INFO: Created: latency-svc-s4c25
Feb  6 13:35:59.513: INFO: Got endpoints: latency-svc-s4c25 [1.582361088s]
Feb  6 13:35:59.546: INFO: Created: latency-svc-shnnh
Feb  6 13:35:59.561: INFO: Got endpoints: latency-svc-shnnh [1.589978033s]
Feb  6 13:35:59.650: INFO: Created: latency-svc-l457b
Feb  6 13:35:59.670: INFO: Got endpoints: latency-svc-l457b [1.502087175s]
Feb  6 13:35:59.670: INFO: Latencies: [212.327771ms 229.261845ms 263.550011ms 411.820341ms 468.496787ms 610.031903ms 641.089349ms 679.360839ms 787.401412ms 824.245544ms 861.996086ms 917.869386ms 926.53878ms 927.819346ms 951.236805ms 983.361753ms 989.831689ms 1.000231209s 1.006987108s 1.010210364s 1.022969009s 1.024195982s 1.027304071s 1.03634664s 1.039615678s 1.062120963s 1.071316161s 1.124915201s 1.142894185s 1.172946469s 1.183332215s 1.190948884s 1.232666216s 1.250228918s 1.278303501s 1.298241985s 1.303656509s 1.306839848s 1.314307009s 1.322761342s 1.328889877s 1.345188766s 1.34525304s 1.352670076s 1.356718289s 1.358598221s 1.358819301s 1.368468436s 1.372425785s 1.374636018s 1.378962538s 1.379768828s 1.380170513s 1.388523779s 1.406748194s 1.413054353s 1.418240807s 1.426625612s 1.431206556s 1.442928701s 1.443848729s 1.445473878s 1.457420077s 1.458323295s 1.459759502s 1.460248585s 1.466948241s 1.470120816s 1.471187835s 1.471274199s 1.473738103s 1.474570522s 1.47491442s 1.479232533s 1.483646439s 1.484100901s 1.484614985s 1.485835725s 1.488232195s 1.497834679s 1.501450915s 1.502087175s 1.504812236s 1.506963468s 1.507393275s 1.508863768s 1.516021464s 1.521460418s 1.525078618s 1.530447076s 1.533095364s 1.540607842s 1.541903682s 1.545776398s 1.550087667s 1.550760374s 1.551099458s 1.558099279s 1.55812191s 1.565072844s 1.567028843s 1.573798254s 1.574643566s 1.578303853s 1.580988603s 1.582361088s 1.582596888s 1.582765485s 1.583233756s 1.585698583s 1.586004587s 1.589978033s 1.593527433s 1.604472283s 1.611663139s 1.613395765s 1.628580313s 1.632795218s 1.636328537s 1.640494135s 1.649369976s 1.651496417s 1.655169036s 1.6718693s 1.675550364s 1.680807173s 1.694649629s 1.70341264s 1.704292739s 1.705086038s 1.715069894s 1.722709708s 1.72277592s 1.731954238s 1.73345618s 1.746237009s 1.755841783s 1.758019154s 1.762166637s 1.766750572s 1.77148601s 1.784001297s 1.786117771s 1.795235599s 1.79813937s 1.806875526s 1.810067682s 1.811852496s 1.816199152s 1.838416551s 1.844909906s 1.845197625s 1.855224313s 1.864121285s 1.866608881s 1.872694383s 1.911885133s 1.919184122s 1.926751907s 1.932527578s 1.941170405s 1.95584937s 1.960711498s 1.970770866s 1.977295077s 1.988607646s 2.036071742s 2.057378562s 2.076123771s 2.080486594s 2.107250397s 2.112023315s 2.115340607s 2.118164334s 2.136874214s 2.138839968s 2.140498673s 2.195086154s 2.196950333s 2.206784604s 2.27289224s 2.278520449s 2.300960809s 2.302890277s 2.332444897s 2.335744602s 2.361059505s 2.400244429s 2.496176553s 2.496536995s 2.531692321s 2.793672402s 2.924094516s 2.964695515s 2.981763279s 2.992055124s 3.005246933s 3.098941772s 3.135007564s 3.137474998s]
Feb  6 13:35:59.670: INFO: 50 %ile: 1.567028843s
Feb  6 13:35:59.670: INFO: 90 %ile: 2.27289224s
Feb  6 13:35:59.670: INFO: 99 %ile: 3.135007564s
Feb  6 13:35:59.670: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 13:35:59.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-8484" for this suite.
Feb  6 13:36:37.788: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:36:37.996: INFO: namespace svc-latency-8484 deletion completed in 38.318052455s

• [SLOW TEST:68.917 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
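Each Created/Got endpoints pair above is one latency sample: the suite creates a Service selecting the svc-latency-rc pods and times how long until that Service's Endpoints object reports a ready address, then summarizes the 200 samples as the percentiles printed at the end. A minimal sketch of one such measurement; the namespace, service name, and pod selector are assumptions, and the context-free Create/Watch signatures match the v1.15-era client-go. Watching before creating the Service avoids missing the first endpoints event.

package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	ns, name := "default", "latency-svc-demo" // hypothetical namespace and name
	// Start the watch first so the first ready-address event is not missed.
	w, err := client.CoreV1().Endpoints(ns).Watch(metav1.ListOptions{
		FieldSelector: "metadata.name=" + name,
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	start := time.Now()
	_, err = client.CoreV1().Services(ns).Create(&corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.ServiceSpec{
			// Assumed to match the labels on the RC's pods.
			Selector: map[string]string{"name": "svc-latency-rc"},
			Ports:    []corev1.ServicePort{{Port: 80, TargetPort: intstr.FromInt(80)}},
		},
	})
	if err != nil {
		panic(err)
	}

	for ev := range w.ResultChan() {
		ep, ok := ev.Object.(*corev1.Endpoints)
		if !ok {
			continue
		}
		for _, ss := range ep.Subsets {
			if len(ss.Addresses) > 0 {
				fmt.Printf("Got endpoints: %s [%v]\n", name, time.Since(start))
				return
			}
		}
	}
}
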
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 13:36:37.996: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-01728daa-13b5-456f-b48e-164dbf333503
STEP: Creating a pod to test consume secrets
Feb  6 13:36:38.150: INFO: Waiting up to 5m0s for pod "pod-secrets-50649db1-a316-4098-8d76-3507d02bb755" in namespace "secrets-4461" to be "success or failure"
Feb  6 13:36:38.169: INFO: Pod "pod-secrets-50649db1-a316-4098-8d76-3507d02bb755": Phase="Pending", Reason="", readiness=false. Elapsed: 18.908074ms
Feb  6 13:36:40.175: INFO: Pod "pod-secrets-50649db1-a316-4098-8d76-3507d02bb755": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025813333s
Feb  6 13:36:42.191: INFO: Pod "pod-secrets-50649db1-a316-4098-8d76-3507d02bb755": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041508s
Feb  6 13:36:44.209: INFO: Pod "pod-secrets-50649db1-a316-4098-8d76-3507d02bb755": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059665916s
Feb  6 13:36:46.217: INFO: Pod "pod-secrets-50649db1-a316-4098-8d76-3507d02bb755": Phase="Pending", Reason="", readiness=false. Elapsed: 8.067663741s
Feb  6 13:36:48.225: INFO: Pod "pod-secrets-50649db1-a316-4098-8d76-3507d02bb755": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.075274999s
STEP: Saw pod success
Feb  6 13:36:48.225: INFO: Pod "pod-secrets-50649db1-a316-4098-8d76-3507d02bb755" satisfied condition "success or failure"
Feb  6 13:36:48.229: INFO: Trying to get logs from node iruya-node pod pod-secrets-50649db1-a316-4098-8d76-3507d02bb755 container secret-env-test: 
STEP: delete the pod
Feb  6 13:36:48.405: INFO: Waiting for pod pod-secrets-50649db1-a316-4098-8d76-3507d02bb755 to disappear
Feb  6 13:36:48.433: INFO: Pod pod-secrets-50649db1-a316-4098-8d76-3507d02bb755 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 13:36:48.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4461" for this suite.
Feb  6 13:36:54.471: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:36:54.603: INFO: namespace secrets-4461 deletion completed in 6.160576265s

• [SLOW TEST:16.607 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
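The pod used here injects a Secret key as an environment variable. A minimal client-go sketch of that spec, assuming the v1.15-era k8s.io/api types (the Secret name, key, and image are illustrative, not the test's generated names):

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-example"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "secret-env-test",
                Image:   "docker.io/library/busybox:1.29",
                Command: []string{"sh", "-c", "env"},
                Env: []corev1.EnvVar{{
                    Name: "SECRET_DATA",
                    ValueFrom: &corev1.EnvVarSource{
                        // Resolved by the kubelet from the named Secret
                        // before the container starts.
                        SecretKeyRef: &corev1.SecretKeySelector{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "secret-test-example"},
                            Key:                  "data-1",
                        },
                    },
                }},
            }},
        },
    }
    b, _ := json.MarshalIndent(pod.Spec, "", "  ")
    fmt.Println(string(b))
}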
------------------------------
SSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 13:36:54.603: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override arguments
Feb  6 13:36:54.775: INFO: Waiting up to 5m0s for pod "client-containers-7526caeb-ca93-4b0e-ab5c-c04c165e067f" in namespace "containers-1899" to be "success or failure"
Feb  6 13:36:54.786: INFO: Pod "client-containers-7526caeb-ca93-4b0e-ab5c-c04c165e067f": Phase="Pending", Reason="", readiness=false. Elapsed: 11.023658ms
Feb  6 13:36:56.795: INFO: Pod "client-containers-7526caeb-ca93-4b0e-ab5c-c04c165e067f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020682897s
Feb  6 13:36:58.803: INFO: Pod "client-containers-7526caeb-ca93-4b0e-ab5c-c04c165e067f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028184981s
Feb  6 13:37:00.828: INFO: Pod "client-containers-7526caeb-ca93-4b0e-ab5c-c04c165e067f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05311364s
Feb  6 13:37:02.835: INFO: Pod "client-containers-7526caeb-ca93-4b0e-ab5c-c04c165e067f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.06030984s
STEP: Saw pod success
Feb  6 13:37:02.835: INFO: Pod "client-containers-7526caeb-ca93-4b0e-ab5c-c04c165e067f" satisfied condition "success or failure"
Feb  6 13:37:02.841: INFO: Trying to get logs from node iruya-node pod client-containers-7526caeb-ca93-4b0e-ab5c-c04c165e067f container test-container: 
STEP: delete the pod
Feb  6 13:37:02.963: INFO: Waiting for pod client-containers-7526caeb-ca93-4b0e-ab5c-c04c165e067f to disappear
Feb  6 13:37:02.993: INFO: Pod client-containers-7526caeb-ca93-4b0e-ab5c-c04c165e067f no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 13:37:02.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-1899" for this suite.
Feb  6 13:37:09.061: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:37:09.227: INFO: namespace containers-1899 deletion completed in 6.191646587s

• [SLOW TEST:14.624 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
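Overriding an image's default docker CMD is done by setting the container's Args while leaving Command (the ENTRYPOINT override) unset. A minimal sketch, assuming the v1.15-era k8s.io/api types (image and argument values are illustrative):

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "client-containers-example"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:  "test-container",
                Image: "docker.io/library/busybox:1.29",
                // Args replaces the image's default CMD; the image's
                // ENTRYPOINT still runs and receives these arguments.
                Args: []string{"echo", "override", "arguments"},
            }},
        },
    }
    b, _ := json.MarshalIndent(pod.Spec, "", "  ")
    fmt.Println(string(b))
}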
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 13:37:09.228: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 13:37:20.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-5517" for this suite.
Feb  6 13:37:42.485: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:37:42.635: INFO: namespace replication-controller-5517 deletion completed in 22.196271721s

• [SLOW TEST:33.407 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
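Adoption works because the controller's selector matches the pre-existing pod's name=pod-adoption label, so the controller-manager takes ownership of the orphan instead of creating a replacement. A minimal sketch of such a ReplicationController, assuming the v1.15-era k8s.io/api types:

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    replicas := int32(1)
    rc := &corev1.ReplicationController{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption"},
        Spec: corev1.ReplicationControllerSpec{
            Replicas: &replicas,
            // Matches the label carried by the already-running pod,
            // which is what triggers adoption.
            Selector: map[string]string{"name": "pod-adoption"},
            Template: &corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{
                    Labels: map[string]string{"name": "pod-adoption"},
                },
                Spec: corev1.PodSpec{
                    Containers: []corev1.Container{{
                        Name:  "pod-adoption",
                        Image: "docker.io/library/nginx:1.14-alpine",
                    }},
                },
            },
        },
    }
    b, _ := json.MarshalIndent(rc.Spec, "", "  ")
    fmt.Println(string(b))
}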
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 13:37:42.635: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 13:37:43.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2526" for this suite.
Feb  6 13:37:49.236: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:37:49.472: INFO: namespace services-2526 deletion completed in 6.316333136s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:6.836 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 13:37:49.472: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  6 13:37:49.591: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-1446'
Feb  6 13:37:49.749: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb  6 13:37:49.749: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562
Feb  6 13:37:51.799: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-1446'
Feb  6 13:37:51.970: INFO: stderr: ""
Feb  6 13:37:51.970: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 13:37:51.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1446" for this suite.
Feb  6 13:37:58.051: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:37:58.183: INFO: namespace kubectl-1446 deletion completed in 6.205947177s

• [SLOW TEST:8.711 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 13:37:58.184: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0206 13:38:40.380360       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  6 13:38:40.380: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 13:38:40.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6612" for this suite.
Feb  6 13:38:52.705: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:38:56.106: INFO: namespace gc-6612 deletion completed in 15.716160054s

• [SLOW TEST:57.923 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
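The delete behind the "delete the rc" step carries an Orphan propagation policy, which is why the garbage collector must leave the pods running for the 30-second watch above. A sketch of the options object (the surrounding Delete call signature varies across client-go versions, so only the options are shown):

package main

import (
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    // Orphan propagation: the owning ReplicationController is deleted,
    // its pods' ownerReferences are stripped, and the pods keep running.
    // These options would be passed to the ReplicationControllers(ns).Delete call.
    policy := metav1.DeletePropagationOrphan
    opts := &metav1.DeleteOptions{PropagationPolicy: &policy}
    fmt.Printf("%+v\n", *opts)
}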
------------------------------
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 13:38:56.106: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Feb  6 13:38:57.816: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7786'
Feb  6 13:38:58.504: INFO: stderr: ""
Feb  6 13:38:58.504: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  6 13:38:58.504: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7786'
Feb  6 13:38:59.653: INFO: stderr: ""
Feb  6 13:38:59.653: INFO: stdout: "update-demo-nautilus-n2jxh "
STEP: Replicas for name=update-demo: expected=2 actual=1
Feb  6 13:39:04.653: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7786'
Feb  6 13:39:04.791: INFO: stderr: ""
Feb  6 13:39:04.791: INFO: stdout: "update-demo-nautilus-297pp update-demo-nautilus-n2jxh "
Feb  6 13:39:04.792: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-297pp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7786'
Feb  6 13:39:04.880: INFO: stderr: ""
Feb  6 13:39:04.880: INFO: stdout: ""
Feb  6 13:39:04.880: INFO: update-demo-nautilus-297pp is created but not running
Feb  6 13:39:09.881: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7786'
Feb  6 13:39:10.066: INFO: stderr: ""
Feb  6 13:39:10.066: INFO: stdout: "update-demo-nautilus-297pp update-demo-nautilus-n2jxh "
Feb  6 13:39:10.066: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-297pp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7786'
Feb  6 13:39:10.246: INFO: stderr: ""
Feb  6 13:39:10.246: INFO: stdout: ""
Feb  6 13:39:10.246: INFO: update-demo-nautilus-297pp is created but not running
Feb  6 13:39:15.246: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7786'
Feb  6 13:39:15.419: INFO: stderr: ""
Feb  6 13:39:15.419: INFO: stdout: "update-demo-nautilus-297pp update-demo-nautilus-n2jxh "
Feb  6 13:39:15.419: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-297pp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7786'
Feb  6 13:39:15.504: INFO: stderr: ""
Feb  6 13:39:15.504: INFO: stdout: "true"
Feb  6 13:39:15.504: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-297pp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7786'
Feb  6 13:39:15.610: INFO: stderr: ""
Feb  6 13:39:15.610: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  6 13:39:15.610: INFO: validating pod update-demo-nautilus-297pp
Feb  6 13:39:15.620: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  6 13:39:15.620: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb  6 13:39:15.620: INFO: update-demo-nautilus-297pp is verified up and running
Feb  6 13:39:15.620: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-n2jxh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7786'
Feb  6 13:39:15.727: INFO: stderr: ""
Feb  6 13:39:15.727: INFO: stdout: "true"
Feb  6 13:39:15.727: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-n2jxh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7786'
Feb  6 13:39:15.837: INFO: stderr: ""
Feb  6 13:39:15.837: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  6 13:39:15.837: INFO: validating pod update-demo-nautilus-n2jxh
Feb  6 13:39:15.867: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  6 13:39:15.867: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb  6 13:39:15.867: INFO: update-demo-nautilus-n2jxh is verified up and running
STEP: using delete to clean up resources
Feb  6 13:39:15.867: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7786'
Feb  6 13:39:15.984: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  6 13:39:15.985: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Feb  6 13:39:15.985: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-7786'
Feb  6 13:39:16.121: INFO: stderr: "No resources found.\n"
Feb  6 13:39:16.121: INFO: stdout: ""
Feb  6 13:39:16.121: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-7786 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb  6 13:39:16.253: INFO: stderr: ""
Feb  6 13:39:16.253: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 13:39:16.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7786" for this suite.
Feb  6 13:39:38.275: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:39:38.433: INFO: namespace kubectl-7786 deletion completed in 22.175668732s

• [SLOW TEST:42.326 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 13:39:38.433: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-ad0e7017-7f0a-4e9b-9238-f31afd6a5f8d in namespace container-probe-2076
Feb  6 13:39:46.603: INFO: Started pod busybox-ad0e7017-7f0a-4e9b-9238-f31afd6a5f8d in namespace container-probe-2076
STEP: checking the pod's current state and verifying that restartCount is present
Feb  6 13:39:46.612: INFO: Initial restart count of pod busybox-ad0e7017-7f0a-4e9b-9238-f31afd6a5f8d is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 13:43:46.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2076" for this suite.
Feb  6 13:43:52.968: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:43:53.137: INFO: namespace container-probe-2076 deletion completed in 6.194993988s

• [SLOW TEST:254.704 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
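"cat /tmp/health" here is an exec liveness probe against a file the container writes once at startup, so the probe keeps succeeding and the restart count stays at 0 for the whole observation window. A minimal sketch of such a container, assuming the v1.15-era k8s.io/api types (timings and image are illustrative; Probe embeds Handler in this API generation, renamed ProbeHandler in later releases):

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    container := corev1.Container{
        Name:  "busybox",
        Image: "docker.io/library/busybox:1.29",
        // The file exists for the container's whole lifetime, so the
        // probe below never fails.
        Command: []string{"/bin/sh", "-c", "echo ok > /tmp/health; sleep 600"},
        LivenessProbe: &corev1.Probe{
            Handler: corev1.Handler{
                Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
            },
            InitialDelaySeconds: 15,
            PeriodSeconds:       5,
        },
    }
    b, _ := json.MarshalIndent(container, "", "  ")
    fmt.Println(string(b))
}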
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 13:43:53.138: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with configMap that has name projected-configmap-test-upd-60034cba-735f-48ca-acca-0cf56710f3ca
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-60034cba-735f-48ca-acca-0cf56710f3ca
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 13:44:03.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3801" for this suite.
Feb  6 13:44:25.489: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:44:25.603: INFO: namespace projected-3801 deletion completed in 22.138278228s

• [SLOW TEST:32.465 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
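The volume under test is a projected volume with a ConfigMap source; after the ConfigMap is updated, the kubelet re-syncs the mounted files, which is the change the pod watches for. A minimal sketch, assuming the v1.15-era k8s.io/api types (the ConfigMap name is illustrative, not the test's generated one):

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    vol := corev1.Volume{
        Name: "projected-configmap-volume",
        VolumeSource: corev1.VolumeSource{
            Projected: &corev1.ProjectedVolumeSource{
                Sources: []corev1.VolumeProjection{{
                    ConfigMap: &corev1.ConfigMapProjection{
                        LocalObjectReference: corev1.LocalObjectReference{
                            Name: "projected-configmap-test-upd-example",
                        },
                    },
                }},
            },
        },
    }
    b, _ := json.MarshalIndent(vol, "", "  ")
    fmt.Println(string(b))
}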
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 13:44:25.603: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-2ebaa038-3cc4-41d4-9a6b-26da29e27aab
STEP: Creating configMap with name cm-test-opt-upd-af29d66c-a409-4ee2-840a-6e19bbc90eb5
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-2ebaa038-3cc4-41d4-9a6b-26da29e27aab
STEP: Updating configmap cm-test-opt-upd-af29d66c-a409-4ee2-840a-6e19bbc90eb5
STEP: Creating configMap with name cm-test-opt-create-fbcca4ef-a537-45e7-8132-923b0cc108da
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 13:45:59.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-13" for this suite.
Feb  6 13:46:21.832: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:46:21.961: INFO: namespace projected-13 deletion completed in 22.196457703s

• [SLOW TEST:116.358 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
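Relative to the previous test, the spec-level difference is the Optional flag on each projected source: the pod keeps running when a referenced ConfigMap is deleted (the "opt-del" case) and picks up data once one is created (the "opt-create" case). A minimal sketch of a single source (the name is illustrative):

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    optional := true
    src := corev1.VolumeProjection{
        ConfigMap: &corev1.ConfigMapProjection{
            LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-del-example"},
            // Optional sources do not block pod startup or running
            // when the ConfigMap is missing.
            Optional: &optional,
        },
    }
    fmt.Printf("%+v\n", src)
}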
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 13:46:21.961: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Feb  6 13:46:34.676: INFO: Successfully updated pod "labelsupdatec753bdd2-f3ee-462c-a79a-bd8eaec16f29"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 13:46:36.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1572" for this suite.
Feb  6 13:47:00.834: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:47:00.953: INFO: namespace downward-api-1572 deletion completed in 24.143737026s

• [SLOW TEST:38.992 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
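The pod mounts a downward API volume exposing metadata.labels as a file, and the kubelet rewrites that file when the labels change, which is what the "Successfully updated pod" line above exercises. A minimal sketch, assuming the v1.15-era k8s.io/api types (volume name and path are illustrative):

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    vol := corev1.Volume{
        Name: "podinfo",
        VolumeSource: corev1.VolumeSource{
            DownwardAPI: &corev1.DownwardAPIVolumeSource{
                Items: []corev1.DownwardAPIVolumeFile{{
                    // The kubelet keeps this file in sync with the
                    // pod's current labels.
                    Path:     "labels",
                    FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"},
                }},
            },
        },
    }
    b, _ := json.MarshalIndent(vol, "", "  ")
    fmt.Println(string(b))
}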
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 13:47:00.953: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Feb  6 13:47:01.071: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb  6 13:47:01.084: INFO: Waiting for terminating namespaces to be deleted...
Feb  6 13:47:01.086: INFO: Logging pods the kubelet thinks are on node iruya-node before test
Feb  6 13:47:01.101: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded)
Feb  6 13:47:01.101: INFO: 	Container kube-proxy ready: true, restart count 0
Feb  6 13:47:01.101: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Feb  6 13:47:01.101: INFO: 	Container weave ready: true, restart count 0
Feb  6 13:47:01.101: INFO: 	Container weave-npc ready: true, restart count 0
Feb  6 13:47:01.101: INFO: 
Logging pods the kubelet thinks is on node iruya-server-sfge57q7djm7 before test
Feb  6 13:47:01.116: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Feb  6 13:47:01.116: INFO: 	Container weave ready: true, restart count 0
Feb  6 13:47:01.116: INFO: 	Container weave-npc ready: true, restart count 0
Feb  6 13:47:01.116: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Feb  6 13:47:01.116: INFO: 	Container coredns ready: true, restart count 0
Feb  6 13:47:01.116: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded)
Feb  6 13:47:01.116: INFO: 	Container etcd ready: true, restart count 0
Feb  6 13:47:01.116: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded)
Feb  6 13:47:01.116: INFO: 	Container kube-proxy ready: true, restart count 0
Feb  6 13:47:01.116: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded)
Feb  6 13:47:01.116: INFO: 	Container kube-controller-manager ready: true, restart count 20
Feb  6 13:47:01.116: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded)
Feb  6 13:47:01.116: INFO: 	Container kube-apiserver ready: true, restart count 0
Feb  6 13:47:01.116: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Feb  6 13:47:01.116: INFO: 	Container coredns ready: true, restart count 0
Feb  6 13:47:01.116: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded)
Feb  6 13:47:01.116: INFO: 	Container kube-scheduler ready: true, restart count 13
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-f14e4fc7-f6db-4389-a0a1-0d53168b68c1 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-f14e4fc7-f6db-4389-a0a1-0d53168b68c1 off the node iruya-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-f14e4fc7-f6db-4389-a0a1-0d53168b68c1
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 13:47:23.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-9141" for this suite.
Feb  6 13:47:43.419: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:47:43.550: INFO: namespace sched-pred-9141 deletion completed in 20.164449804s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:42.597 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
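The relaunched pod targets the labelled node through spec.nodeSelector, reusing the exact label the test applied above. A minimal sketch, assuming the v1.15-era k8s.io/api types (pod name and image are illustrative):

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "with-labels"},
        Spec: corev1.PodSpec{
            // The scheduler only considers nodes carrying this exact
            // label key/value pair.
            NodeSelector: map[string]string{
                "kubernetes.io/e2e-f14e4fc7-f6db-4389-a0a1-0d53168b68c1": "42",
            },
            Containers: []corev1.Container{{
                Name:  "with-labels",
                Image: "docker.io/library/nginx:1.14-alpine",
            }},
        },
    }
    b, _ := json.MarshalIndent(pod.Spec, "", "  ")
    fmt.Println(string(b))
}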
------------------------------
SSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 13:47:43.551: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's command
Feb  6 13:47:43.705: INFO: Waiting up to 5m0s for pod "var-expansion-1430e233-222c-48de-9a72-90be0e882501" in namespace "var-expansion-2930" to be "success or failure"
Feb  6 13:47:43.735: INFO: Pod "var-expansion-1430e233-222c-48de-9a72-90be0e882501": Phase="Pending", Reason="", readiness=false. Elapsed: 29.144948ms
Feb  6 13:47:45.747: INFO: Pod "var-expansion-1430e233-222c-48de-9a72-90be0e882501": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040977305s
Feb  6 13:47:47.753: INFO: Pod "var-expansion-1430e233-222c-48de-9a72-90be0e882501": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047706377s
Feb  6 13:47:49.765: INFO: Pod "var-expansion-1430e233-222c-48de-9a72-90be0e882501": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059709065s
Feb  6 13:47:51.776: INFO: Pod "var-expansion-1430e233-222c-48de-9a72-90be0e882501": Phase="Pending", Reason="", readiness=false. Elapsed: 8.070313092s
Feb  6 13:47:53.798: INFO: Pod "var-expansion-1430e233-222c-48de-9a72-90be0e882501": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.092381041s
STEP: Saw pod success
Feb  6 13:47:53.798: INFO: Pod "var-expansion-1430e233-222c-48de-9a72-90be0e882501" satisfied condition "success or failure"
Feb  6 13:47:53.824: INFO: Trying to get logs from node iruya-node pod var-expansion-1430e233-222c-48de-9a72-90be0e882501 container dapi-container: 
STEP: delete the pod
Feb  6 13:47:53.928: INFO: Waiting for pod var-expansion-1430e233-222c-48de-9a72-90be0e882501 to disappear
Feb  6 13:47:53.941: INFO: Pod var-expansion-1430e233-222c-48de-9a72-90be0e882501 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 13:47:53.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2930" for this suite.
Feb  6 13:48:00.056: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:48:00.201: INFO: namespace var-expansion-2930 deletion completed in 6.174398565s

• [SLOW TEST:16.650 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
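Substitution here is the kubelet's $(VAR) expansion: references to the container's own environment in command and args are resolved before exec, with no shell involved. A minimal sketch, assuming the v1.15-era k8s.io/api types (names and values are illustrative):

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    container := corev1.Container{
        Name:  "dapi-container",
        Image: "docker.io/library/busybox:1.29",
        Env:   []corev1.EnvVar{{Name: "MESSAGE", Value: "test substitution"}},
        // The kubelet replaces $(MESSAGE) with the env value before the
        // process starts, so /bin/echo receives the literal string.
        Command: []string{"/bin/echo"},
        Args:    []string{"$(MESSAGE)"},
    }
    b, _ := json.MarshalIndent(container, "", "  ")
    fmt.Println(string(b))
}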
------------------------------
SSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 13:48:00.201: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  6 13:48:00.338: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"b0bc70f9-18e7-46d5-b83f-6fae9a4dc615", Controller:(*bool)(0xc001af37fa), BlockOwnerDeletion:(*bool)(0xc001af37fb)}}
Feb  6 13:48:00.430: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"dc53002a-f557-4e77-937a-7ac34beaa31c", Controller:(*bool)(0xc002d5d04a), BlockOwnerDeletion:(*bool)(0xc002d5d04b)}}
Feb  6 13:48:00.473: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"840d01f6-38f8-431a-a304-6a78c5f4b83d", Controller:(*bool)(0xc00311979a), BlockOwnerDeletion:(*bool)(0xc00311979b)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 13:48:05.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1025" for this suite.
Feb  6 13:48:11.521: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:48:11.683: INFO: namespace gc-1025 deletion completed in 6.188029376s

• [SLOW TEST:11.482 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
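The three OwnerReferences dumps above form a pod1 -> pod3 -> pod2 -> pod1 cycle, which the garbage collector must still resolve once the pods are deleted. A minimal sketch of how such references are constructed (the UIDs are placeholders for the server-assigned values shown in the log):

package main

import (
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/types"
)

// ownerRef builds a pod-to-pod owner reference of the shape dumped above.
func ownerRef(name string, uid types.UID) metav1.OwnerReference {
    controller, block := true, true
    return metav1.OwnerReference{
        APIVersion:         "v1",
        Kind:               "Pod",
        Name:               name,
        UID:                uid,
        Controller:         &controller,
        BlockOwnerDeletion: &block,
    }
}

func main() {
    // The dependency circle: each pod claims the next as its owner.
    refs := map[string]metav1.OwnerReference{
        "pod1": ownerRef("pod3", "uid-of-pod3"),
        "pod2": ownerRef("pod1", "uid-of-pod1"),
        "pod3": ownerRef("pod2", "uid-of-pod2"),
    }
    for pod, ref := range refs {
        fmt.Printf("%s.OwnerReferences=[%+v]\n", pod, ref)
    }
}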
------------------------------
S
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period 
  should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 13:48:11.683: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47
[It] should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Feb  6 13:48:19.841: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Feb  6 13:48:40.025: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 13:48:40.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9484" for this suite.
Feb  6 13:48:46.119: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:48:46.251: INFO: namespace pods-9484 deletion completed in 6.214547514s

• [SLOW TEST:34.568 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
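The graceful delete gives the kubelet a termination window before the pod object disappears, which is the window the proxy watch above observes. A sketch of the options object (the grace value is illustrative; the surrounding Pods(ns).Delete call signature varies across client-go versions, so only the options are shown):

package main

import (
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    // The API server sets deletionTimestamp immediately; the kubelet then
    // has up to this many seconds to stop the container before the pod
    // record is removed.
    grace := int64(30)
    opts := &metav1.DeleteOptions{GracePeriodSeconds: &grace}
    fmt.Printf("%+v\n", *opts)
}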
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 13:48:46.252: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  6 13:48:46.342: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4835c354-397b-46ba-acf5-42923cc32fa3" in namespace "projected-1327" to be "success or failure"
Feb  6 13:48:46.352: INFO: Pod "downwardapi-volume-4835c354-397b-46ba-acf5-42923cc32fa3": Phase="Pending", Reason="", readiness=false. Elapsed: 9.502102ms
Feb  6 13:48:48.361: INFO: Pod "downwardapi-volume-4835c354-397b-46ba-acf5-42923cc32fa3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018372562s
Feb  6 13:48:50.367: INFO: Pod "downwardapi-volume-4835c354-397b-46ba-acf5-42923cc32fa3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024622881s
Feb  6 13:48:52.375: INFO: Pod "downwardapi-volume-4835c354-397b-46ba-acf5-42923cc32fa3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.032735773s
Feb  6 13:48:54.381: INFO: Pod "downwardapi-volume-4835c354-397b-46ba-acf5-42923cc32fa3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.038371729s
Feb  6 13:48:56.389: INFO: Pod "downwardapi-volume-4835c354-397b-46ba-acf5-42923cc32fa3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.047010177s
STEP: Saw pod success
Feb  6 13:48:56.389: INFO: Pod "downwardapi-volume-4835c354-397b-46ba-acf5-42923cc32fa3" satisfied condition "success or failure"
Feb  6 13:48:56.393: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-4835c354-397b-46ba-acf5-42923cc32fa3 container client-container: 
STEP: delete the pod
Feb  6 13:48:56.554: INFO: Waiting for pod downwardapi-volume-4835c354-397b-46ba-acf5-42923cc32fa3 to disappear
Feb  6 13:48:56.570: INFO: Pod downwardapi-volume-4835c354-397b-46ba-acf5-42923cc32fa3 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 13:48:56.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1327" for this suite.
Feb  6 13:49:02.621: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:49:02.776: INFO: namespace projected-1327 deletion completed in 6.197279677s

• [SLOW TEST:16.524 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
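limits.cpu reaches the container as a file through a downward API resourceFieldRef, which requires the referenced container to actually carry a CPU limit. A minimal sketch, assuming the v1.15-era k8s.io/api types (the limit value, names, and path are illustrative):

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
)

func main() {
    container := corev1.Container{
        Name:  "client-container",
        Image: "docker.io/library/busybox:1.29",
        Resources: corev1.ResourceRequirements{
            // The limit the volume file will report.
            Limits: corev1.ResourceList{
                corev1.ResourceCPU: resource.MustParse("500m"),
            },
        },
    }
    vol := corev1.Volume{
        Name: "podinfo",
        VolumeSource: corev1.VolumeSource{
            DownwardAPI: &corev1.DownwardAPIVolumeSource{
                Items: []corev1.DownwardAPIVolumeFile{{
                    Path: "cpu_limit",
                    ResourceFieldRef: &corev1.ResourceFieldSelector{
                        ContainerName: "client-container",
                        Resource:      "limits.cpu",
                    },
                }},
            },
        },
    }
    for _, obj := range []interface{}{container, vol} {
        b, _ := json.MarshalIndent(obj, "", "  ")
        fmt.Println(string(b))
    }
}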
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 13:49:02.777: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  6 13:49:02.854: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-6617'
Feb  6 13:49:04.578: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb  6 13:49:04.578: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426
Feb  6 13:49:06.599: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-6617'
Feb  6 13:49:06.939: INFO: stderr: ""
Feb  6 13:49:06.939: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 13:49:06.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6617" for this suite.
Feb  6 13:49:13.004: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:49:13.153: INFO: namespace kubectl-6617 deletion completed in 6.199851702s

• [SLOW TEST:10.376 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 13:49:13.154: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-22cc7dd1-6b68-45ac-97de-239aaffe97ef
STEP: Creating a pod to test consume configMaps
Feb  6 13:49:13.257: INFO: Waiting up to 5m0s for pod "pod-configmaps-d952f607-cf1f-41aa-8172-29ca4f6556b2" in namespace "configmap-5668" to be "success or failure"
Feb  6 13:49:13.288: INFO: Pod "pod-configmaps-d952f607-cf1f-41aa-8172-29ca4f6556b2": Phase="Pending", Reason="", readiness=false. Elapsed: 30.558944ms
Feb  6 13:49:15.299: INFO: Pod "pod-configmaps-d952f607-cf1f-41aa-8172-29ca4f6556b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041899507s
Feb  6 13:49:17.310: INFO: Pod "pod-configmaps-d952f607-cf1f-41aa-8172-29ca4f6556b2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053378938s
Feb  6 13:49:19.326: INFO: Pod "pod-configmaps-d952f607-cf1f-41aa-8172-29ca4f6556b2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.068688229s
Feb  6 13:49:21.336: INFO: Pod "pod-configmaps-d952f607-cf1f-41aa-8172-29ca4f6556b2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.079389639s
Feb  6 13:49:23.346: INFO: Pod "pod-configmaps-d952f607-cf1f-41aa-8172-29ca4f6556b2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.089347939s
STEP: Saw pod success
Feb  6 13:49:23.346: INFO: Pod "pod-configmaps-d952f607-cf1f-41aa-8172-29ca4f6556b2" satisfied condition "success or failure"
Feb  6 13:49:23.353: INFO: Trying to get logs from node iruya-node pod pod-configmaps-d952f607-cf1f-41aa-8172-29ca4f6556b2 container configmap-volume-test: 
STEP: delete the pod
Feb  6 13:49:23.693: INFO: Waiting for pod pod-configmaps-d952f607-cf1f-41aa-8172-29ca4f6556b2 to disappear
Feb  6 13:49:23.706: INFO: Pod pod-configmaps-d952f607-cf1f-41aa-8172-29ca4f6556b2 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 13:49:23.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5668" for this suite.
Feb  6 13:49:29.868: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:49:30.047: INFO: namespace configmap-5668 deletion completed in 6.333454597s

• [SLOW TEST:16.894 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
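"With mappings" means the ConfigMap volume lists explicit key-to-path items instead of projecting every key under its own name. A minimal sketch, assuming the v1.15-era k8s.io/api types (the ConfigMap name, key, and path are illustrative):

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    vol := corev1.Volume{
        Name: "configmap-volume",
        VolumeSource: corev1.VolumeSource{
            ConfigMap: &corev1.ConfigMapVolumeSource{
                LocalObjectReference: corev1.LocalObjectReference{
                    Name: "configmap-test-volume-map-example",
                },
                // Only the listed key is mounted, at the chosen relative
                // path inside the volume.
                Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
            },
        },
    }
    b, _ := json.MarshalIndent(vol, "", "  ")
    fmt.Println(string(b))
}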
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 13:49:30.048: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the initial replication controller
Feb  6 13:49:30.158: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3187'
Feb  6 13:49:30.635: INFO: stderr: ""
Feb  6 13:49:30.635: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  6 13:49:30.636: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3187'
Feb  6 13:49:30.785: INFO: stderr: ""
Feb  6 13:49:30.785: INFO: stdout: "update-demo-nautilus-bv4r7 update-demo-nautilus-bx8r6 "
Feb  6 13:49:30.786: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bv4r7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3187'
Feb  6 13:49:30.948: INFO: stderr: ""
Feb  6 13:49:30.948: INFO: stdout: ""
Feb  6 13:49:30.948: INFO: update-demo-nautilus-bv4r7 is created but not running
Feb  6 13:49:35.949: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3187'
Feb  6 13:49:36.137: INFO: stderr: ""
Feb  6 13:49:36.137: INFO: stdout: "update-demo-nautilus-bv4r7 update-demo-nautilus-bx8r6 "
Feb  6 13:49:36.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bv4r7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3187'
Feb  6 13:49:38.199: INFO: stderr: ""
Feb  6 13:49:38.199: INFO: stdout: ""
Feb  6 13:49:38.199: INFO: update-demo-nautilus-bv4r7 is created but not running
Feb  6 13:49:43.200: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3187'
Feb  6 13:49:43.454: INFO: stderr: ""
Feb  6 13:49:43.454: INFO: stdout: "update-demo-nautilus-bv4r7 update-demo-nautilus-bx8r6 "
Feb  6 13:49:43.455: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bv4r7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3187'
Feb  6 13:49:43.541: INFO: stderr: ""
Feb  6 13:49:43.541: INFO: stdout: "true"
Feb  6 13:49:43.542: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bv4r7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3187'
Feb  6 13:49:43.634: INFO: stderr: ""
Feb  6 13:49:43.634: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  6 13:49:43.634: INFO: validating pod update-demo-nautilus-bv4r7
Feb  6 13:49:43.643: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  6 13:49:43.643: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  6 13:49:43.643: INFO: update-demo-nautilus-bv4r7 is verified up and running
Feb  6 13:49:43.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bx8r6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3187'
Feb  6 13:49:43.761: INFO: stderr: ""
Feb  6 13:49:43.761: INFO: stdout: "true"
Feb  6 13:49:43.761: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bx8r6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3187'
Feb  6 13:49:43.890: INFO: stderr: ""
Feb  6 13:49:43.890: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  6 13:49:43.890: INFO: validating pod update-demo-nautilus-bx8r6
Feb  6 13:49:43.921: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  6 13:49:43.921: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  6 13:49:43.921: INFO: update-demo-nautilus-bx8r6 is verified up and running
STEP: rolling-update to new replication controller
Feb  6 13:49:43.924: INFO: scanned /root for discovery docs: 
Feb  6 13:49:43.924: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-3187'
Feb  6 13:50:15.430: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Feb  6 13:50:15.430: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  6 13:50:15.430: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3187'
Feb  6 13:50:15.601: INFO: stderr: ""
Feb  6 13:50:15.601: INFO: stdout: "update-demo-kitten-cc7dl update-demo-kitten-lbmdf update-demo-nautilus-bx8r6 "
STEP: Replicas for name=update-demo: expected=2 actual=3
Feb  6 13:50:20.602: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3187'
Feb  6 13:50:20.746: INFO: stderr: ""
Feb  6 13:50:20.746: INFO: stdout: "update-demo-kitten-cc7dl update-demo-kitten-lbmdf "
Feb  6 13:50:20.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-cc7dl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3187'
Feb  6 13:50:20.881: INFO: stderr: ""
Feb  6 13:50:20.881: INFO: stdout: "true"
Feb  6 13:50:20.881: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-cc7dl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3187'
Feb  6 13:50:21.010: INFO: stderr: ""
Feb  6 13:50:21.010: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Feb  6 13:50:21.010: INFO: validating pod update-demo-kitten-cc7dl
Feb  6 13:50:21.028: INFO: got data: {
  "image": "kitten.jpg"
}

Feb  6 13:50:21.028: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Feb  6 13:50:21.028: INFO: update-demo-kitten-cc7dl is verified up and running
Feb  6 13:50:21.028: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-lbmdf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3187'
Feb  6 13:50:21.165: INFO: stderr: ""
Feb  6 13:50:21.165: INFO: stdout: "true"
Feb  6 13:50:21.165: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-lbmdf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3187'
Feb  6 13:50:21.233: INFO: stderr: ""
Feb  6 13:50:21.233: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Feb  6 13:50:21.233: INFO: validating pod update-demo-kitten-lbmdf
Feb  6 13:50:21.253: INFO: got data: {
  "image": "kitten.jpg"
}

Feb  6 13:50:21.253: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Feb  6 13:50:21.253: INFO: update-demo-kitten-lbmdf is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 13:50:21.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3187" for this suite.
Feb  6 13:50:45.283: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:50:45.435: INFO: namespace kubectl-3187 deletion completed in 24.177930308s

• [SLOW TEST:75.387 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
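
The manifests piped to 'kubectl create -f -' and 'kubectl rolling-update -f -' are not echoed in the log. A plausible reconstruction of the initial controller, based on the names, labels and image visible above (the replica count matches the two pods seen; the container port is an assumption):

apiVersion: v1
kind: ReplicationController
metadata:
  name: update-demo-nautilus
spec:
  replicas: 2
  selector:
    name: update-demo        # the label the test polls with -l name=update-demo
  template:
    metadata:
      labels:
        name: update-demo
    spec:
      containers:
      - name: update-demo    # the container name the status templates match on
        image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0
        ports:
        - containerPort: 80

The manifest fed to rolling-update would be analogous, named update-demo-kitten and running gcr.io/kubernetes-e2e-test-images/kitten:1.0; the deprecated rolling-update command then scales the two controllers in opposite directions one pod at a time, as the stdout above shows.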
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 13:50:45.436: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  6 13:50:45.542: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8420'
Feb  6 13:50:45.941: INFO: stderr: ""
Feb  6 13:50:45.941: INFO: stdout: "replicationcontroller/redis-master created\n"
Feb  6 13:50:45.942: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8420'
Feb  6 13:50:46.446: INFO: stderr: ""
Feb  6 13:50:46.446: INFO: stdout: "service/redis-master created\n"
STEP: Waiting for Redis master to start.
Feb  6 13:50:47.462: INFO: Selector matched 1 pods for map[app:redis]
Feb  6 13:50:47.462: INFO: Found 0 / 1
Feb  6 13:50:48.476: INFO: Selector matched 1 pods for map[app:redis]
Feb  6 13:50:48.476: INFO: Found 0 / 1
Feb  6 13:50:49.461: INFO: Selector matched 1 pods for map[app:redis]
Feb  6 13:50:49.461: INFO: Found 0 / 1
Feb  6 13:50:50.456: INFO: Selector matched 1 pods for map[app:redis]
Feb  6 13:50:50.456: INFO: Found 0 / 1
Feb  6 13:50:51.455: INFO: Selector matched 1 pods for map[app:redis]
Feb  6 13:50:51.455: INFO: Found 0 / 1
Feb  6 13:50:52.455: INFO: Selector matched 1 pods for map[app:redis]
Feb  6 13:50:52.455: INFO: Found 0 / 1
Feb  6 13:50:53.456: INFO: Selector matched 1 pods for map[app:redis]
Feb  6 13:50:53.456: INFO: Found 0 / 1
Feb  6 13:50:54.458: INFO: Selector matched 1 pods for map[app:redis]
Feb  6 13:50:54.458: INFO: Found 1 / 1
Feb  6 13:50:54.458: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Feb  6 13:50:54.466: INFO: Selector matched 1 pods for map[app:redis]
Feb  6 13:50:54.466: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Feb  6 13:50:54.466: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-zqblk --namespace=kubectl-8420'
Feb  6 13:50:54.603: INFO: stderr: ""
Feb  6 13:50:54.603: INFO: stdout: "Name:           redis-master-zqblk\nNamespace:      kubectl-8420\nPriority:       0\nNode:           iruya-node/10.96.3.65\nStart Time:     Thu, 06 Feb 2020 13:50:46 +0000\nLabels:         app=redis\n                role=master\nAnnotations:    \nStatus:         Running\nIP:             10.44.0.1\nControlled By:  ReplicationController/redis-master\nContainers:\n  redis-master:\n    Container ID:   docker://6352269dda80e2facbff78ffa692a7d0e1f77e695049e2df272a6910eb151bcd\n    Image:          gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Image ID:       docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Thu, 06 Feb 2020 13:50:52 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-5k9t6 (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-5k9t6:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-5k9t6\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From                 Message\n  ----    ------     ----  ----                 -------\n  Normal  Scheduled  9s    default-scheduler    Successfully assigned kubectl-8420/redis-master-zqblk to iruya-node\n  Normal  Pulled     4s    kubelet, iruya-node  Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n  Normal  Created    2s    kubelet, iruya-node  Created container redis-master\n  Normal  Started    2s    kubelet, iruya-node  Started container redis-master\n"
Feb  6 13:50:54.603: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-8420'
Feb  6 13:50:54.759: INFO: stderr: ""
Feb  6 13:50:54.759: INFO: stdout: "Name:         redis-master\nNamespace:    kubectl-8420\nSelector:     app=redis,role=master\nLabels:       app=redis\n              role=master\nAnnotations:  \nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=redis\n           role=master\n  Containers:\n   redis-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  \n    Mounts:       \n  Volumes:        \nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  9s    replication-controller  Created pod: redis-master-zqblk\n"
Feb  6 13:50:54.759: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-8420'
Feb  6 13:50:54.862: INFO: stderr: ""
Feb  6 13:50:54.862: INFO: stdout: "Name:              redis-master\nNamespace:         kubectl-8420\nLabels:            app=redis\n                   role=master\nAnnotations:       \nSelector:          app=redis,role=master\nType:              ClusterIP\nIP:                10.111.110.33\nPort:                6379/TCP\nTargetPort:        redis-server/TCP\nEndpoints:         10.44.0.1:6379\nSession Affinity:  None\nEvents:            \n"
Feb  6 13:50:54.868: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-node'
Feb  6 13:50:54.986: INFO: stderr: ""
Feb  6 13:50:54.986: INFO: stdout: "Name:               iruya-node\nRoles:              \nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=iruya-node\n                    kubernetes.io/os=linux\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sun, 04 Aug 2019 09:01:39 +0000\nTaints:             \nUnschedulable:      false\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Sat, 12 Oct 2019 11:56:49 +0000   Sat, 12 Oct 2019 11:56:49 +0000   WeaveIsUp                    Weave pod has set this\n  MemoryPressure       False   Thu, 06 Feb 2020 13:50:39 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Thu, 06 Feb 2020 13:50:39 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Thu, 06 Feb 2020 13:50:39 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Thu, 06 Feb 2020 13:50:39 +0000   Sun, 04 Aug 2019 09:02:19 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled\nAddresses:\n  InternalIP:  10.96.3.65\n  Hostname:    iruya-node\nCapacity:\n cpu:                4\n ephemeral-storage:  20145724Ki\n hugepages-2Mi:      0\n memory:             4039076Ki\n pods:               110\nAllocatable:\n cpu:                4\n ephemeral-storage:  18566299208\n hugepages-2Mi:      0\n memory:             3936676Ki\n pods:               110\nSystem Info:\n Machine ID:                 f573dcf04d6f4a87856a35d266a2fa7a\n System UUID:                F573DCF0-4D6F-4A87-856A-35D266A2FA7A\n Boot ID:                    8baf4beb-8391-43e6-b17b-b1e184b5370a\n Kernel Version:             4.15.0-52-generic\n OS Image:                   Ubuntu 18.04.2 LTS\n Operating System:           linux\n Architecture:               amd64\n Container Runtime Version:  docker://18.9.7\n Kubelet Version:            v1.15.1\n Kube-Proxy Version:         v1.15.1\nPodCIDR:                     10.96.1.0/24\nNon-terminated Pods:         (3 in total)\n  Namespace                  Name                  CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                  ----                  ------------  ----------  ---------------  -------------  ---\n  kube-system                kube-proxy-976zl      0 (0%)        0 (0%)      0 (0%)           0 (0%)         186d\n  kube-system                weave-net-rlp57       20m (0%)      0 (0%)      0 (0%)           0 (0%)         117d\n  kubectl-8420               redis-master-zqblk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests  Limits\n  --------           --------  ------\n  cpu                20m (0%)  0 (0%)\n  memory             0 (0%)    0 (0%)\n  ephemeral-storage  0 (0%)    0 (0%)\nEvents:              \n"
Feb  6 13:50:54.986: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-8420'
Feb  6 13:50:55.104: INFO: stderr: ""
Feb  6 13:50:55.104: INFO: stdout: "Name:         kubectl-8420\nLabels:       e2e-framework=kubectl\n              e2e-run=8ec36a74-021e-4f2a-a3db-95518977ef3a\nAnnotations:  \nStatus:       Active\n\nNo resource quota.\n\nNo resource limits.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 13:50:55.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8420" for this suite.
Feb  6 13:51:17.158: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:51:17.296: INFO: namespace kubectl-8420 deletion completed in 22.185859302s

• [SLOW TEST:31.860 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
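
The redis-master manifests fed via stdin are likewise not shown; reconstructed from the describe output above (labels, image, ports), they would look roughly like this:

apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    app: redis
    role: master
  template:
    metadata:
      labels:
        app: redis
        role: master
    spec:
      containers:
      - name: redis-master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        ports:
        - name: redis-server   # matches TargetPort: redis-server/TCP above
          containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
spec:
  selector:
    app: redis
    role: master
  ports:
  - port: 6379
    targetPort: redis-server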
------------------------------
SSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 13:51:17.296: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-7da94f7a-286d-47b3-b3cf-02705ce64800
STEP: Creating a pod to test consume secrets
Feb  6 13:51:17.412: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5729d2c3-d9e4-4485-ad8f-2b3a7c6fbf70" in namespace "projected-5922" to be "success or failure"
Feb  6 13:51:17.431: INFO: Pod "pod-projected-secrets-5729d2c3-d9e4-4485-ad8f-2b3a7c6fbf70": Phase="Pending", Reason="", readiness=false. Elapsed: 19.444771ms
Feb  6 13:51:19.439: INFO: Pod "pod-projected-secrets-5729d2c3-d9e4-4485-ad8f-2b3a7c6fbf70": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027209284s
Feb  6 13:51:21.449: INFO: Pod "pod-projected-secrets-5729d2c3-d9e4-4485-ad8f-2b3a7c6fbf70": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036981236s
Feb  6 13:51:23.460: INFO: Pod "pod-projected-secrets-5729d2c3-d9e4-4485-ad8f-2b3a7c6fbf70": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048385191s
Feb  6 13:51:25.475: INFO: Pod "pod-projected-secrets-5729d2c3-d9e4-4485-ad8f-2b3a7c6fbf70": Phase="Pending", Reason="", readiness=false. Elapsed: 8.063079854s
Feb  6 13:51:27.482: INFO: Pod "pod-projected-secrets-5729d2c3-d9e4-4485-ad8f-2b3a7c6fbf70": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.070203202s
STEP: Saw pod success
Feb  6 13:51:27.482: INFO: Pod "pod-projected-secrets-5729d2c3-d9e4-4485-ad8f-2b3a7c6fbf70" satisfied condition "success or failure"
Feb  6 13:51:27.488: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-5729d2c3-d9e4-4485-ad8f-2b3a7c6fbf70 container projected-secret-volume-test: 
STEP: delete the pod
Feb  6 13:51:28.234: INFO: Waiting for pod pod-projected-secrets-5729d2c3-d9e4-4485-ad8f-2b3a7c6fbf70 to disappear
Feb  6 13:51:28.245: INFO: Pod pod-projected-secrets-5729d2c3-d9e4-4485-ad8f-2b3a7c6fbf70 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 13:51:28.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5922" for this suite.
Feb  6 13:51:36.288: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:51:36.428: INFO: namespace projected-5922 deletion completed in 8.175378449s

• [SLOW TEST:19.132 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
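
A minimal sketch of the projected-volume shape this spec covers (all names here are hypothetical): a secret key is remapped to a new path and given a per-item file mode, which is why the spec is tagged [LinuxOnly].

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["cat", "/etc/projected-secret-volume/new-path-data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test-map
          items:
          - key: data-1
            path: new-path-data-1
            mode: 0400         # per-item POSIX mode; Linux-only behaviour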
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 13:51:36.429: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  6 13:51:36.507: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c0a98399-9377-479f-bcbe-449a58b93101" in namespace "downward-api-8525" to be "success or failure"
Feb  6 13:51:36.510: INFO: Pod "downwardapi-volume-c0a98399-9377-479f-bcbe-449a58b93101": Phase="Pending", Reason="", readiness=false. Elapsed: 2.780743ms
Feb  6 13:51:38.683: INFO: Pod "downwardapi-volume-c0a98399-9377-479f-bcbe-449a58b93101": Phase="Pending", Reason="", readiness=false. Elapsed: 2.17543304s
Feb  6 13:51:40.704: INFO: Pod "downwardapi-volume-c0a98399-9377-479f-bcbe-449a58b93101": Phase="Pending", Reason="", readiness=false. Elapsed: 4.196458023s
Feb  6 13:51:42.717: INFO: Pod "downwardapi-volume-c0a98399-9377-479f-bcbe-449a58b93101": Phase="Pending", Reason="", readiness=false. Elapsed: 6.209776938s
Feb  6 13:51:44.734: INFO: Pod "downwardapi-volume-c0a98399-9377-479f-bcbe-449a58b93101": Phase="Pending", Reason="", readiness=false. Elapsed: 8.226458808s
Feb  6 13:51:46.743: INFO: Pod "downwardapi-volume-c0a98399-9377-479f-bcbe-449a58b93101": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.235266574s
STEP: Saw pod success
Feb  6 13:51:46.743: INFO: Pod "downwardapi-volume-c0a98399-9377-479f-bcbe-449a58b93101" satisfied condition "success or failure"
Feb  6 13:51:46.754: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-c0a98399-9377-479f-bcbe-449a58b93101 container client-container: 
STEP: delete the pod
Feb  6 13:51:46.913: INFO: Waiting for pod downwardapi-volume-c0a98399-9377-479f-bcbe-449a58b93101 to disappear
Feb  6 13:51:46.923: INFO: Pod downwardapi-volume-c0a98399-9377-479f-bcbe-449a58b93101 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 13:51:46.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8525" for this suite.
Feb  6 13:51:52.992: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:51:53.106: INFO: namespace downward-api-8525 deletion completed in 6.178079072s

• [SLOW TEST:16.677 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
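
The downward API volume exposes the container's own resource fields as files. A minimal sketch (the names and the 250m request are assumptions); with divisor 1m, a 250m CPU request reads back as the string "250":

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
          divisor: 1m          # units the value is reported in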
------------------------------
SSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 13:51:53.106: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6995.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-6995.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6995.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-6995.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6995.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-6995.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6995.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-6995.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6995.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-6995.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6995.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-6995.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6995.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 188.173.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.173.188_udp@PTR;check="$$(dig +tcp +noall +answer +search 188.173.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.173.188_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6995.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-6995.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6995.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-6995.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6995.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-6995.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6995.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-6995.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6995.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-6995.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6995.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-6995.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6995.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 188.173.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.173.188_udp@PTR;check="$$(dig +tcp +noall +answer +search 188.173.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.173.188_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb  6 13:52:07.684: INFO: Unable to read wheezy_udp@dns-test-service.dns-6995.svc.cluster.local from pod dns-6995/dns-test-180df188-cffe-49d8-bc54-f01c2ba34f1f: the server could not find the requested resource (get pods dns-test-180df188-cffe-49d8-bc54-f01c2ba34f1f)
Feb  6 13:52:07.692: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6995.svc.cluster.local from pod dns-6995/dns-test-180df188-cffe-49d8-bc54-f01c2ba34f1f: the server could not find the requested resource (get pods dns-test-180df188-cffe-49d8-bc54-f01c2ba34f1f)
Feb  6 13:52:07.697: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6995.svc.cluster.local from pod dns-6995/dns-test-180df188-cffe-49d8-bc54-f01c2ba34f1f: the server could not find the requested resource (get pods dns-test-180df188-cffe-49d8-bc54-f01c2ba34f1f)
Feb  6 13:52:07.706: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6995.svc.cluster.local from pod dns-6995/dns-test-180df188-cffe-49d8-bc54-f01c2ba34f1f: the server could not find the requested resource (get pods dns-test-180df188-cffe-49d8-bc54-f01c2ba34f1f)
Feb  6 13:52:07.714: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.dns-6995.svc.cluster.local from pod dns-6995/dns-test-180df188-cffe-49d8-bc54-f01c2ba34f1f: the server could not find the requested resource (get pods dns-test-180df188-cffe-49d8-bc54-f01c2ba34f1f)
Feb  6 13:52:07.718: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.dns-6995.svc.cluster.local from pod dns-6995/dns-test-180df188-cffe-49d8-bc54-f01c2ba34f1f: the server could not find the requested resource (get pods dns-test-180df188-cffe-49d8-bc54-f01c2ba34f1f)
Feb  6 13:52:07.725: INFO: Unable to read wheezy_udp@PodARecord from pod dns-6995/dns-test-180df188-cffe-49d8-bc54-f01c2ba34f1f: the server could not find the requested resource (get pods dns-test-180df188-cffe-49d8-bc54-f01c2ba34f1f)
Feb  6 13:52:07.729: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-6995/dns-test-180df188-cffe-49d8-bc54-f01c2ba34f1f: the server could not find the requested resource (get pods dns-test-180df188-cffe-49d8-bc54-f01c2ba34f1f)
Feb  6 13:52:07.735: INFO: Unable to read 10.111.173.188_udp@PTR from pod dns-6995/dns-test-180df188-cffe-49d8-bc54-f01c2ba34f1f: the server could not find the requested resource (get pods dns-test-180df188-cffe-49d8-bc54-f01c2ba34f1f)
Feb  6 13:52:07.740: INFO: Unable to read 10.111.173.188_tcp@PTR from pod dns-6995/dns-test-180df188-cffe-49d8-bc54-f01c2ba34f1f: the server could not find the requested resource (get pods dns-test-180df188-cffe-49d8-bc54-f01c2ba34f1f)
Feb  6 13:52:07.745: INFO: Unable to read jessie_udp@dns-test-service.dns-6995.svc.cluster.local from pod dns-6995/dns-test-180df188-cffe-49d8-bc54-f01c2ba34f1f: the server could not find the requested resource (get pods dns-test-180df188-cffe-49d8-bc54-f01c2ba34f1f)
Feb  6 13:52:07.750: INFO: Unable to read jessie_tcp@dns-test-service.dns-6995.svc.cluster.local from pod dns-6995/dns-test-180df188-cffe-49d8-bc54-f01c2ba34f1f: the server could not find the requested resource (get pods dns-test-180df188-cffe-49d8-bc54-f01c2ba34f1f)
Feb  6 13:52:07.755: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6995.svc.cluster.local from pod dns-6995/dns-test-180df188-cffe-49d8-bc54-f01c2ba34f1f: the server could not find the requested resource (get pods dns-test-180df188-cffe-49d8-bc54-f01c2ba34f1f)
Feb  6 13:52:07.761: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6995.svc.cluster.local from pod dns-6995/dns-test-180df188-cffe-49d8-bc54-f01c2ba34f1f: the server could not find the requested resource (get pods dns-test-180df188-cffe-49d8-bc54-f01c2ba34f1f)
Feb  6 13:52:07.766: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.dns-6995.svc.cluster.local from pod dns-6995/dns-test-180df188-cffe-49d8-bc54-f01c2ba34f1f: the server could not find the requested resource (get pods dns-test-180df188-cffe-49d8-bc54-f01c2ba34f1f)
Feb  6 13:52:07.770: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.dns-6995.svc.cluster.local from pod dns-6995/dns-test-180df188-cffe-49d8-bc54-f01c2ba34f1f: the server could not find the requested resource (get pods dns-test-180df188-cffe-49d8-bc54-f01c2ba34f1f)
Feb  6 13:52:07.779: INFO: Unable to read jessie_udp@PodARecord from pod dns-6995/dns-test-180df188-cffe-49d8-bc54-f01c2ba34f1f: the server could not find the requested resource (get pods dns-test-180df188-cffe-49d8-bc54-f01c2ba34f1f)
Feb  6 13:52:07.790: INFO: Unable to read jessie_tcp@PodARecord from pod dns-6995/dns-test-180df188-cffe-49d8-bc54-f01c2ba34f1f: the server could not find the requested resource (get pods dns-test-180df188-cffe-49d8-bc54-f01c2ba34f1f)
Feb  6 13:52:07.797: INFO: Unable to read 10.111.173.188_udp@PTR from pod dns-6995/dns-test-180df188-cffe-49d8-bc54-f01c2ba34f1f: the server could not find the requested resource (get pods dns-test-180df188-cffe-49d8-bc54-f01c2ba34f1f)
Feb  6 13:52:07.801: INFO: Unable to read 10.111.173.188_tcp@PTR from pod dns-6995/dns-test-180df188-cffe-49d8-bc54-f01c2ba34f1f: the server could not find the requested resource (get pods dns-test-180df188-cffe-49d8-bc54-f01c2ba34f1f)
Feb  6 13:52:07.801: INFO: Lookups using dns-6995/dns-test-180df188-cffe-49d8-bc54-f01c2ba34f1f failed for: [wheezy_udp@dns-test-service.dns-6995.svc.cluster.local wheezy_tcp@dns-test-service.dns-6995.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6995.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6995.svc.cluster.local wheezy_udp@_http._tcp.test-service-2.dns-6995.svc.cluster.local wheezy_tcp@_http._tcp.test-service-2.dns-6995.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.111.173.188_udp@PTR 10.111.173.188_tcp@PTR jessie_udp@dns-test-service.dns-6995.svc.cluster.local jessie_tcp@dns-test-service.dns-6995.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6995.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6995.svc.cluster.local jessie_udp@_http._tcp.test-service-2.dns-6995.svc.cluster.local jessie_tcp@_http._tcp.test-service-2.dns-6995.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord 10.111.173.188_udp@PTR 10.111.173.188_tcp@PTR]

Feb  6 13:52:12.986: INFO: DNS probes using dns-6995/dns-test-180df188-cffe-49d8-bc54-f01c2ba34f1f succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 13:52:13.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6995" for this suite.
Feb  6 13:52:19.372: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:52:19.491: INFO: namespace dns-6995 deletion completed in 6.182113523s

• [SLOW TEST:26.385 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
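
The probe loops above check A, SRV, pod A-record and PTR resolution over both UDP and TCP. The headless service they target would look roughly like this (the selector is an assumption; the named port is what produces the _http._tcp SRV records):

apiVersion: v1
kind: Service
metadata:
  name: dns-test-service
spec:
  clusterIP: None              # headless: A queries return the backing pod IPs
  selector:
    dns-test: "true"
  ports:
  - name: http
    port: 80
    protocol: TCP

From any pod in the namespace, dig +search dns-test-service.dns-6995.svc.cluster.local A and dig +search _http._tcp.dns-test-service.dns-6995.svc.cluster.local SRV should then return answers once endpoints exist, which is what flips the probe result files to OK.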
------------------------------
SSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 13:52:19.491: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-abcaee2f-b611-4d73-92d2-deb10b56b10f
STEP: Creating a pod to test consume secrets
Feb  6 13:52:19.676: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7c838e64-8abe-4eaa-829d-6792153c5dd6" in namespace "projected-8891" to be "success or failure"
Feb  6 13:52:19.694: INFO: Pod "pod-projected-secrets-7c838e64-8abe-4eaa-829d-6792153c5dd6": Phase="Pending", Reason="", readiness=false. Elapsed: 17.697957ms
Feb  6 13:52:21.709: INFO: Pod "pod-projected-secrets-7c838e64-8abe-4eaa-829d-6792153c5dd6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032938534s
Feb  6 13:52:23.718: INFO: Pod "pod-projected-secrets-7c838e64-8abe-4eaa-829d-6792153c5dd6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041864212s
Feb  6 13:52:25.732: INFO: Pod "pod-projected-secrets-7c838e64-8abe-4eaa-829d-6792153c5dd6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055378273s
Feb  6 13:52:27.746: INFO: Pod "pod-projected-secrets-7c838e64-8abe-4eaa-829d-6792153c5dd6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.069575045s
Feb  6 13:52:29.762: INFO: Pod "pod-projected-secrets-7c838e64-8abe-4eaa-829d-6792153c5dd6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.085282239s
STEP: Saw pod success
Feb  6 13:52:29.762: INFO: Pod "pod-projected-secrets-7c838e64-8abe-4eaa-829d-6792153c5dd6" satisfied condition "success or failure"
Feb  6 13:52:29.768: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-7c838e64-8abe-4eaa-829d-6792153c5dd6 container projected-secret-volume-test: 
STEP: delete the pod
Feb  6 13:52:29.890: INFO: Waiting for pod pod-projected-secrets-7c838e64-8abe-4eaa-829d-6792153c5dd6 to disappear
Feb  6 13:52:29.896: INFO: Pod pod-projected-secrets-7c838e64-8abe-4eaa-829d-6792153c5dd6 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 13:52:29.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8891" for this suite.
Feb  6 13:52:35.921: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:52:36.046: INFO: namespace projected-8891 deletion completed in 6.144624934s

• [SLOW TEST:16.555 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
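
This variant mounts the secret without an items list. Sketched with plain kubectl (names hypothetical): every key in the secret then appears as a file named after the key, and the test reads the container's output via its logs, as above.

kubectl create secret generic projected-secret-test --from-literal=data-1=value-1
# pod spec as in the earlier projected-secret sketch, but with no items list:
#   sources:
#   - secret:
#       name: projected-secret-test   # -> /etc/projected-secret-volume/data-1
kubectl logs pod-projected-secrets-example -c projected-secret-volume-test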
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 13:52:36.046: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  6 13:52:36.112: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9a5d3d10-e3f0-45d2-8e65-4ecccf8687cd" in namespace "projected-5377" to be "success or failure"
Feb  6 13:52:36.196: INFO: Pod "downwardapi-volume-9a5d3d10-e3f0-45d2-8e65-4ecccf8687cd": Phase="Pending", Reason="", readiness=false. Elapsed: 84.1192ms
Feb  6 13:52:38.205: INFO: Pod "downwardapi-volume-9a5d3d10-e3f0-45d2-8e65-4ecccf8687cd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.092833854s
Feb  6 13:52:40.244: INFO: Pod "downwardapi-volume-9a5d3d10-e3f0-45d2-8e65-4ecccf8687cd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.131655499s
Feb  6 13:52:42.249: INFO: Pod "downwardapi-volume-9a5d3d10-e3f0-45d2-8e65-4ecccf8687cd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.136792651s
Feb  6 13:52:44.257: INFO: Pod "downwardapi-volume-9a5d3d10-e3f0-45d2-8e65-4ecccf8687cd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.145067464s
Feb  6 13:52:46.264: INFO: Pod "downwardapi-volume-9a5d3d10-e3f0-45d2-8e65-4ecccf8687cd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.151462095s
STEP: Saw pod success
Feb  6 13:52:46.264: INFO: Pod "downwardapi-volume-9a5d3d10-e3f0-45d2-8e65-4ecccf8687cd" satisfied condition "success or failure"
Feb  6 13:52:46.268: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-9a5d3d10-e3f0-45d2-8e65-4ecccf8687cd container client-container: 
STEP: delete the pod
Feb  6 13:52:46.322: INFO: Waiting for pod downwardapi-volume-9a5d3d10-e3f0-45d2-8e65-4ecccf8687cd to disappear
Feb  6 13:52:46.405: INFO: Pod downwardapi-volume-9a5d3d10-e3f0-45d2-8e65-4ecccf8687cd no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 13:52:46.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5377" for this suite.
Feb  6 13:52:52.431: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:52:52.547: INFO: namespace projected-5377 deletion completed in 6.135035167s

• [SLOW TEST:16.501 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
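
Same downward API mechanism as before, but delivered through a projected volume and reading a limit instead of a request. A minimal sketch (the 64Mi limit is an assumption); with divisor 1Mi the file contains "64":

apiVersion: v1
kind: Pod
metadata:
  name: projected-downwardapi-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/memory_limit"]
    resources:
      limits:
        memory: 64Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
              divisor: 1Mi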
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 13:52:52.548: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating all guestbook components
Feb  6 13:52:52.665: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Feb  6 13:52:52.665: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9333'
Feb  6 13:52:53.046: INFO: stderr: ""
Feb  6 13:52:53.046: INFO: stdout: "service/redis-slave created\n"
Feb  6 13:52:53.046: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Feb  6 13:52:53.046: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9333'
Feb  6 13:52:53.521: INFO: stderr: ""
Feb  6 13:52:53.521: INFO: stdout: "service/redis-master created\n"
Feb  6 13:52:53.522: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Feb  6 13:52:53.522: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9333'
Feb  6 13:52:54.027: INFO: stderr: ""
Feb  6 13:52:54.027: INFO: stdout: "service/frontend created\n"
Feb  6 13:52:54.028: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Feb  6 13:52:54.028: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9333'
Feb  6 13:52:54.302: INFO: stderr: ""
Feb  6 13:52:54.302: INFO: stdout: "deployment.apps/frontend created\n"
Feb  6 13:52:54.302: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Feb  6 13:52:54.302: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9333'
Feb  6 13:52:54.736: INFO: stderr: ""
Feb  6 13:52:54.736: INFO: stdout: "deployment.apps/redis-master created\n"
Feb  6 13:52:54.737: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Feb  6 13:52:54.737: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9333'
Feb  6 13:52:56.216: INFO: stderr: ""
Feb  6 13:52:56.216: INFO: stdout: "deployment.apps/redis-slave created\n"
STEP: validating guestbook app
Feb  6 13:52:56.216: INFO: Waiting for all frontend pods to be Running.
Feb  6 13:53:21.268: INFO: Waiting for frontend to serve content.
Feb  6 13:53:21.434: INFO: Trying to add a new entry to the guestbook.
Feb  6 13:53:21.473: INFO: Verifying that added entry can be retrieved.
Feb  6 13:53:21.514: INFO: Failed to get response from guestbook. err: , response: {"data": ""}
STEP: using delete to clean up resources
Feb  6 13:53:26.555: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9333'
Feb  6 13:53:26.756: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  6 13:53:26.756: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Feb  6 13:53:26.757: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9333'
Feb  6 13:53:27.059: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  6 13:53:27.059: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Feb  6 13:53:27.059: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9333'
Feb  6 13:53:27.288: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  6 13:53:27.288: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb  6 13:53:27.289: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9333'
Feb  6 13:53:27.375: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  6 13:53:27.375: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb  6 13:53:27.376: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9333'
Feb  6 13:53:27.473: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  6 13:53:27.473: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Feb  6 13:53:27.473: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9333'
Feb  6 13:53:27.578: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  6 13:53:27.578: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 13:53:27.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9333" for this suite.
Feb  6 13:54:19.658: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:54:19.818: INFO: namespace kubectl-9333 deletion completed in 52.23385299s

• [SLOW TEST:87.269 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
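The six force-deletions above all follow the same pattern. A minimal sketch of one of them, assuming a manifest is piped on stdin exactly as the test does (the Service below is reconstructed from its kind and name only; everything else about it is an assumption):

# --grace-period=0 --force deletes the object immediately; as the warning in
# the log notes, kubectl does not wait for the workload to actually stop.
kubectl --kubeconfig=/root/.kube/config delete \
  --grace-period=0 --force -f - --namespace=kubectl-9333 <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: redis-slave
EOF
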
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 13:54:19.818: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating Pod
STEP: Waiting for the pod to be running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Feb  6 13:54:32.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-979b7a4a-c852-4958-8311-31e290042098 -c busybox-main-container --namespace=emptydir-7188 -- cat /usr/share/volumeshare/shareddata.txt'
Feb  6 13:54:32.642: INFO: stderr: "I0206 13:54:32.265355    1950 log.go:172] (0xc00090a2c0) (0xc0007d6820) Create stream\nI0206 13:54:32.265523    1950 log.go:172] (0xc00090a2c0) (0xc0007d6820) Stream added, broadcasting: 1\nI0206 13:54:32.270487    1950 log.go:172] (0xc00090a2c0) Reply frame received for 1\nI0206 13:54:32.270528    1950 log.go:172] (0xc00090a2c0) (0xc0007b2000) Create stream\nI0206 13:54:32.270539    1950 log.go:172] (0xc00090a2c0) (0xc0007b2000) Stream added, broadcasting: 3\nI0206 13:54:32.273103    1950 log.go:172] (0xc00090a2c0) Reply frame received for 3\nI0206 13:54:32.273124    1950 log.go:172] (0xc00090a2c0) (0xc0007d68c0) Create stream\nI0206 13:54:32.273134    1950 log.go:172] (0xc00090a2c0) (0xc0007d68c0) Stream added, broadcasting: 5\nI0206 13:54:32.275474    1950 log.go:172] (0xc00090a2c0) Reply frame received for 5\nI0206 13:54:32.380324    1950 log.go:172] (0xc00090a2c0) Data frame received for 3\nI0206 13:54:32.380448    1950 log.go:172] (0xc0007b2000) (3) Data frame handling\nI0206 13:54:32.380465    1950 log.go:172] (0xc0007b2000) (3) Data frame sent\nI0206 13:54:32.627837    1950 log.go:172] (0xc00090a2c0) (0xc0007b2000) Stream removed, broadcasting: 3\nI0206 13:54:32.628565    1950 log.go:172] (0xc00090a2c0) (0xc0007d68c0) Stream removed, broadcasting: 5\nI0206 13:54:32.628704    1950 log.go:172] (0xc00090a2c0) Data frame received for 1\nI0206 13:54:32.628767    1950 log.go:172] (0xc0007d6820) (1) Data frame handling\nI0206 13:54:32.628859    1950 log.go:172] (0xc0007d6820) (1) Data frame sent\nI0206 13:54:32.628880    1950 log.go:172] (0xc00090a2c0) (0xc0007d6820) Stream removed, broadcasting: 1\nI0206 13:54:32.629152    1950 log.go:172] (0xc00090a2c0) Go away received\nI0206 13:54:32.629859    1950 log.go:172] (0xc00090a2c0) (0xc0007d6820) Stream removed, broadcasting: 1\nI0206 13:54:32.629885    1950 log.go:172] (0xc00090a2c0) (0xc0007b2000) Stream removed, broadcasting: 3\nI0206 13:54:32.629898    1950 log.go:172] (0xc00090a2c0) (0xc0007d68c0) Stream removed, broadcasting: 5\n"
Feb  6 13:54:32.642: INFO: stdout: "Hello from the busy-box sub-container\n"
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 13:54:32.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7188" for this suite.
Feb  6 13:54:38.705: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:54:38.856: INFO: namespace emptydir-7188 deletion completed in 6.191971931s

• [SLOW TEST:19.038 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
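For reference, a minimal sketch of the kind of pod this test builds: two containers sharing one emptyDir volume, where one container writes the file that the other reads back via the kubectl exec shown above. The container names, mount path, file name, and expected text come from the log; the images, commands, and pod name are assumptions.

kubectl -n emptydir-7188 apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-sharedvolume-example      # hypothetical name
spec:
  volumes:
  - name: shared-data
    emptyDir: {}                      # node-local scratch space shared below
  containers:
  - name: busybox-sub-container       # writer
    image: busybox
    command: ["sh", "-c", "echo 'Hello from the busy-box sub-container' > /usr/share/volumeshare/shareddata.txt && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/volumeshare
  - name: busybox-main-container      # reader, targeted by the exec above
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/volumeshare
EOF
kubectl -n emptydir-7188 exec pod-sharedvolume-example -c busybox-main-container \
  -- cat /usr/share/volumeshare/shareddata.txt   # prints the shared file
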
SSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 13:54:38.856: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-4217
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-4217
STEP: Waiting until all stateful set ss replicas are running in namespace statefulset-4217
Feb  6 13:54:39.079: INFO: Found 0 stateful pods, waiting for 1
Feb  6 13:54:49.091: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Feb  6 13:54:49.096: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4217 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  6 13:54:49.684: INFO: stderr: "I0206 13:54:49.284279    1969 log.go:172] (0xc0009e4420) (0xc000672960) Create stream\nI0206 13:54:49.284472    1969 log.go:172] (0xc0009e4420) (0xc000672960) Stream added, broadcasting: 1\nI0206 13:54:49.290099    1969 log.go:172] (0xc0009e4420) Reply frame received for 1\nI0206 13:54:49.290121    1969 log.go:172] (0xc0009e4420) (0xc000672a00) Create stream\nI0206 13:54:49.290125    1969 log.go:172] (0xc0009e4420) (0xc000672a00) Stream added, broadcasting: 3\nI0206 13:54:49.292311    1969 log.go:172] (0xc0009e4420) Reply frame received for 3\nI0206 13:54:49.292326    1969 log.go:172] (0xc0009e4420) (0xc00049dae0) Create stream\nI0206 13:54:49.292348    1969 log.go:172] (0xc0009e4420) (0xc00049dae0) Stream added, broadcasting: 5\nI0206 13:54:49.294046    1969 log.go:172] (0xc0009e4420) Reply frame received for 5\nI0206 13:54:49.441120    1969 log.go:172] (0xc0009e4420) Data frame received for 5\nI0206 13:54:49.441160    1969 log.go:172] (0xc00049dae0) (5) Data frame handling\nI0206 13:54:49.441177    1969 log.go:172] (0xc00049dae0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0206 13:54:49.482943    1969 log.go:172] (0xc0009e4420) Data frame received for 3\nI0206 13:54:49.482995    1969 log.go:172] (0xc000672a00) (3) Data frame handling\nI0206 13:54:49.483008    1969 log.go:172] (0xc000672a00) (3) Data frame sent\nI0206 13:54:49.678348    1969 log.go:172] (0xc0009e4420) (0xc000672a00) Stream removed, broadcasting: 3\nI0206 13:54:49.678452    1969 log.go:172] (0xc0009e4420) Data frame received for 1\nI0206 13:54:49.678471    1969 log.go:172] (0xc0009e4420) (0xc00049dae0) Stream removed, broadcasting: 5\nI0206 13:54:49.678484    1969 log.go:172] (0xc000672960) (1) Data frame handling\nI0206 13:54:49.678499    1969 log.go:172] (0xc000672960) (1) Data frame sent\nI0206 13:54:49.678510    1969 log.go:172] (0xc0009e4420) (0xc000672960) Stream removed, broadcasting: 1\nI0206 13:54:49.678625    1969 log.go:172] (0xc0009e4420) Go away received\nI0206 13:54:49.678949    1969 log.go:172] (0xc0009e4420) (0xc000672960) Stream removed, broadcasting: 1\nI0206 13:54:49.678966    1969 log.go:172] (0xc0009e4420) (0xc000672a00) Stream removed, broadcasting: 3\nI0206 13:54:49.678975    1969 log.go:172] (0xc0009e4420) (0xc00049dae0) Stream removed, broadcasting: 5\n"
Feb  6 13:54:49.684: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  6 13:54:49.684: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  6 13:54:49.690: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb  6 13:54:49.690: INFO: Waiting for statefulset status.replicas to be updated to 0
Feb  6 13:54:49.712: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999779s
Feb  6 13:54:50.721: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.988924864s
Feb  6 13:54:51.731: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.980461095s
Feb  6 13:54:52.740: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.970152293s
Feb  6 13:54:53.802: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.961275899s
Feb  6 13:54:54.807: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.899252793s
Feb  6 13:54:55.816: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.894299207s
Feb  6 13:54:56.827: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.885139125s
Feb  6 13:54:57.835: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.874352809s
Feb  6 13:54:58.843: INFO: Verifying statefulset ss doesn't scale past 1 for another 866.411947ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4217
Feb  6 13:54:59.856: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4217 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  6 13:55:00.418: INFO: stderr: "I0206 13:55:00.102517    1989 log.go:172] (0xc0007da580) (0xc0005beb40) Create stream\nI0206 13:55:00.102752    1989 log.go:172] (0xc0007da580) (0xc0005beb40) Stream added, broadcasting: 1\nI0206 13:55:00.110076    1989 log.go:172] (0xc0007da580) Reply frame received for 1\nI0206 13:55:00.110217    1989 log.go:172] (0xc0007da580) (0xc0007c2000) Create stream\nI0206 13:55:00.110279    1989 log.go:172] (0xc0007da580) (0xc0007c2000) Stream added, broadcasting: 3\nI0206 13:55:00.112838    1989 log.go:172] (0xc0007da580) Reply frame received for 3\nI0206 13:55:00.112881    1989 log.go:172] (0xc0007da580) (0xc00085c000) Create stream\nI0206 13:55:00.112904    1989 log.go:172] (0xc0007da580) (0xc00085c000) Stream added, broadcasting: 5\nI0206 13:55:00.114179    1989 log.go:172] (0xc0007da580) Reply frame received for 5\nI0206 13:55:00.270388    1989 log.go:172] (0xc0007da580) Data frame received for 3\nI0206 13:55:00.270513    1989 log.go:172] (0xc0007c2000) (3) Data frame handling\nI0206 13:55:00.270580    1989 log.go:172] (0xc0007c2000) (3) Data frame sent\nI0206 13:55:00.270626    1989 log.go:172] (0xc0007da580) Data frame received for 5\nI0206 13:55:00.270648    1989 log.go:172] (0xc00085c000) (5) Data frame handling\nI0206 13:55:00.270673    1989 log.go:172] (0xc00085c000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0206 13:55:00.406746    1989 log.go:172] (0xc0007da580) Data frame received for 1\nI0206 13:55:00.406793    1989 log.go:172] (0xc0007da580) (0xc0007c2000) Stream removed, broadcasting: 3\nI0206 13:55:00.406835    1989 log.go:172] (0xc0005beb40) (1) Data frame handling\nI0206 13:55:00.406858    1989 log.go:172] (0xc0005beb40) (1) Data frame sent\nI0206 13:55:00.407005    1989 log.go:172] (0xc0007da580) (0xc00085c000) Stream removed, broadcasting: 5\nI0206 13:55:00.407170    1989 log.go:172] (0xc0007da580) (0xc0005beb40) Stream removed, broadcasting: 1\nI0206 13:55:00.407249    1989 log.go:172] (0xc0007da580) Go away received\nI0206 13:55:00.407878    1989 log.go:172] (0xc0007da580) (0xc0005beb40) Stream removed, broadcasting: 1\nI0206 13:55:00.407896    1989 log.go:172] (0xc0007da580) (0xc0007c2000) Stream removed, broadcasting: 3\nI0206 13:55:00.407907    1989 log.go:172] (0xc0007da580) (0xc00085c000) Stream removed, broadcasting: 5\n"
Feb  6 13:55:00.418: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  6 13:55:00.418: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  6 13:55:00.425: INFO: Found 1 stateful pods, waiting for 3
Feb  6 13:55:10.441: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  6 13:55:10.441: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  6 13:55:10.441: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb  6 13:55:20.435: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  6 13:55:20.435: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  6 13:55:20.435: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Feb  6 13:55:20.444: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4217 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  6 13:55:20.935: INFO: stderr: "I0206 13:55:20.667805    2011 log.go:172] (0xc000972b00) (0xc000a4e8c0) Create stream\nI0206 13:55:20.667875    2011 log.go:172] (0xc000972b00) (0xc000a4e8c0) Stream added, broadcasting: 1\nI0206 13:55:20.678656    2011 log.go:172] (0xc000972b00) Reply frame received for 1\nI0206 13:55:20.678732    2011 log.go:172] (0xc000972b00) (0xc000a4e000) Create stream\nI0206 13:55:20.678750    2011 log.go:172] (0xc000972b00) (0xc000a4e000) Stream added, broadcasting: 3\nI0206 13:55:20.681533    2011 log.go:172] (0xc000972b00) Reply frame received for 3\nI0206 13:55:20.681568    2011 log.go:172] (0xc000972b00) (0xc000634320) Create stream\nI0206 13:55:20.681578    2011 log.go:172] (0xc000972b00) (0xc000634320) Stream added, broadcasting: 5\nI0206 13:55:20.682894    2011 log.go:172] (0xc000972b00) Reply frame received for 5\nI0206 13:55:20.784470    2011 log.go:172] (0xc000972b00) Data frame received for 5\nI0206 13:55:20.784540    2011 log.go:172] (0xc000634320) (5) Data frame handling\nI0206 13:55:20.784601    2011 log.go:172] (0xc000634320) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0206 13:55:20.788280    2011 log.go:172] (0xc000972b00) Data frame received for 3\nI0206 13:55:20.788306    2011 log.go:172] (0xc000a4e000) (3) Data frame handling\nI0206 13:55:20.788326    2011 log.go:172] (0xc000a4e000) (3) Data frame sent\nI0206 13:55:20.925453    2011 log.go:172] (0xc000972b00) (0xc000a4e000) Stream removed, broadcasting: 3\nI0206 13:55:20.925774    2011 log.go:172] (0xc000972b00) Data frame received for 1\nI0206 13:55:20.925883    2011 log.go:172] (0xc000a4e8c0) (1) Data frame handling\nI0206 13:55:20.925961    2011 log.go:172] (0xc000a4e8c0) (1) Data frame sent\nI0206 13:55:20.926031    2011 log.go:172] (0xc000972b00) (0xc000634320) Stream removed, broadcasting: 5\nI0206 13:55:20.926144    2011 log.go:172] (0xc000972b00) (0xc000a4e8c0) Stream removed, broadcasting: 1\nI0206 13:55:20.926221    2011 log.go:172] (0xc000972b00) Go away received\nI0206 13:55:20.927116    2011 log.go:172] (0xc000972b00) (0xc000a4e8c0) Stream removed, broadcasting: 1\nI0206 13:55:20.927144    2011 log.go:172] (0xc000972b00) (0xc000a4e000) Stream removed, broadcasting: 3\nI0206 13:55:20.927157    2011 log.go:172] (0xc000972b00) (0xc000634320) Stream removed, broadcasting: 5\n"
Feb  6 13:55:20.935: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  6 13:55:20.935: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  6 13:55:20.935: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4217 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  6 13:55:21.587: INFO: stderr: "I0206 13:55:21.151770    2033 log.go:172] (0xc0009ae420) (0xc00044a820) Create stream\nI0206 13:55:21.151909    2033 log.go:172] (0xc0009ae420) (0xc00044a820) Stream added, broadcasting: 1\nI0206 13:55:21.161933    2033 log.go:172] (0xc0009ae420) Reply frame received for 1\nI0206 13:55:21.161987    2033 log.go:172] (0xc0009ae420) (0xc0006cc280) Create stream\nI0206 13:55:21.162000    2033 log.go:172] (0xc0009ae420) (0xc0006cc280) Stream added, broadcasting: 3\nI0206 13:55:21.162931    2033 log.go:172] (0xc0009ae420) Reply frame received for 3\nI0206 13:55:21.162954    2033 log.go:172] (0xc0009ae420) (0xc0006cc320) Create stream\nI0206 13:55:21.162961    2033 log.go:172] (0xc0009ae420) (0xc0006cc320) Stream added, broadcasting: 5\nI0206 13:55:21.164195    2033 log.go:172] (0xc0009ae420) Reply frame received for 5\nI0206 13:55:21.334533    2033 log.go:172] (0xc0009ae420) Data frame received for 5\nI0206 13:55:21.334600    2033 log.go:172] (0xc0006cc320) (5) Data frame handling\nI0206 13:55:21.334628    2033 log.go:172] (0xc0006cc320) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0206 13:55:21.437915    2033 log.go:172] (0xc0009ae420) Data frame received for 3\nI0206 13:55:21.437993    2033 log.go:172] (0xc0006cc280) (3) Data frame handling\nI0206 13:55:21.438039    2033 log.go:172] (0xc0006cc280) (3) Data frame sent\nI0206 13:55:21.580250    2033 log.go:172] (0xc0009ae420) Data frame received for 1\nI0206 13:55:21.580389    2033 log.go:172] (0xc0009ae420) (0xc0006cc320) Stream removed, broadcasting: 5\nI0206 13:55:21.580461    2033 log.go:172] (0xc00044a820) (1) Data frame handling\nI0206 13:55:21.580514    2033 log.go:172] (0xc00044a820) (1) Data frame sent\nI0206 13:55:21.580594    2033 log.go:172] (0xc0009ae420) (0xc0006cc280) Stream removed, broadcasting: 3\nI0206 13:55:21.580793    2033 log.go:172] (0xc0009ae420) (0xc00044a820) Stream removed, broadcasting: 1\nI0206 13:55:21.580909    2033 log.go:172] (0xc0009ae420) Go away received\nI0206 13:55:21.581465    2033 log.go:172] (0xc0009ae420) (0xc00044a820) Stream removed, broadcasting: 1\nI0206 13:55:21.581577    2033 log.go:172] (0xc0009ae420) (0xc0006cc280) Stream removed, broadcasting: 3\nI0206 13:55:21.581634    2033 log.go:172] (0xc0009ae420) (0xc0006cc320) Stream removed, broadcasting: 5\n"
Feb  6 13:55:21.588: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  6 13:55:21.588: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  6 13:55:21.588: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4217 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  6 13:55:22.336: INFO: stderr: "I0206 13:55:21.809549    2054 log.go:172] (0xc000a24420) (0xc00062e960) Create stream\nI0206 13:55:21.809885    2054 log.go:172] (0xc000a24420) (0xc00062e960) Stream added, broadcasting: 1\nI0206 13:55:21.839243    2054 log.go:172] (0xc000a24420) Reply frame received for 1\nI0206 13:55:21.839386    2054 log.go:172] (0xc000a24420) (0xc00062ea00) Create stream\nI0206 13:55:21.839394    2054 log.go:172] (0xc000a24420) (0xc00062ea00) Stream added, broadcasting: 3\nI0206 13:55:21.842713    2054 log.go:172] (0xc000a24420) Reply frame received for 3\nI0206 13:55:21.842830    2054 log.go:172] (0xc000a24420) (0xc00053e000) Create stream\nI0206 13:55:21.842906    2054 log.go:172] (0xc000a24420) (0xc00053e000) Stream added, broadcasting: 5\nI0206 13:55:21.845509    2054 log.go:172] (0xc000a24420) Reply frame received for 5\nI0206 13:55:22.083150    2054 log.go:172] (0xc000a24420) Data frame received for 5\nI0206 13:55:22.083328    2054 log.go:172] (0xc00053e000) (5) Data frame handling\nI0206 13:55:22.083370    2054 log.go:172] (0xc00053e000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0206 13:55:22.124232    2054 log.go:172] (0xc000a24420) Data frame received for 3\nI0206 13:55:22.124333    2054 log.go:172] (0xc00062ea00) (3) Data frame handling\nI0206 13:55:22.124381    2054 log.go:172] (0xc00062ea00) (3) Data frame sent\nI0206 13:55:22.313872    2054 log.go:172] (0xc000a24420) Data frame received for 1\nI0206 13:55:22.314292    2054 log.go:172] (0xc000a24420) (0xc00062ea00) Stream removed, broadcasting: 3\nI0206 13:55:22.314534    2054 log.go:172] (0xc000a24420) (0xc00053e000) Stream removed, broadcasting: 5\nI0206 13:55:22.314611    2054 log.go:172] (0xc00062e960) (1) Data frame handling\nI0206 13:55:22.314643    2054 log.go:172] (0xc00062e960) (1) Data frame sent\nI0206 13:55:22.314685    2054 log.go:172] (0xc000a24420) (0xc00062e960) Stream removed, broadcasting: 1\nI0206 13:55:22.314773    2054 log.go:172] (0xc000a24420) Go away received\nI0206 13:55:22.315958    2054 log.go:172] (0xc000a24420) (0xc00062e960) Stream removed, broadcasting: 1\nI0206 13:55:22.316086    2054 log.go:172] (0xc000a24420) (0xc00062ea00) Stream removed, broadcasting: 3\nI0206 13:55:22.316195    2054 log.go:172] (0xc000a24420) (0xc00053e000) Stream removed, broadcasting: 5\n"
Feb  6 13:55:22.336: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  6 13:55:22.336: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  6 13:55:22.336: INFO: Waiting for statefulset status.replicas to be updated to 0
Feb  6 13:55:22.358: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb  6 13:55:22.358: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Feb  6 13:55:22.358: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Feb  6 13:55:22.379: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999579s
Feb  6 13:55:23.389: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.989023969s
Feb  6 13:55:24.398: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.979701202s
Feb  6 13:55:25.408: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.969853693s
Feb  6 13:55:26.416: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.960695433s
Feb  6 13:55:27.423: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.952292068s
Feb  6 13:55:28.431: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.945437073s
Feb  6 13:55:29.443: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.937427365s
Feb  6 13:55:30.452: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.925717196s
Feb  6 13:55:31.480: INFO: Verifying statefulset ss doesn't scale past 3 for another 916.082664ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-4217
Feb  6 13:55:32.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4217 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  6 13:55:33.068: INFO: stderr: "I0206 13:55:32.722993    2074 log.go:172] (0xc0008e40b0) (0xc000976640) Create stream\nI0206 13:55:32.723207    2074 log.go:172] (0xc0008e40b0) (0xc000976640) Stream added, broadcasting: 1\nI0206 13:55:32.728031    2074 log.go:172] (0xc0008e40b0) Reply frame received for 1\nI0206 13:55:32.728070    2074 log.go:172] (0xc0008e40b0) (0xc0009de000) Create stream\nI0206 13:55:32.728079    2074 log.go:172] (0xc0008e40b0) (0xc0009de000) Stream added, broadcasting: 3\nI0206 13:55:32.730120    2074 log.go:172] (0xc0008e40b0) Reply frame received for 3\nI0206 13:55:32.730161    2074 log.go:172] (0xc0008e40b0) (0xc0005c63c0) Create stream\nI0206 13:55:32.730178    2074 log.go:172] (0xc0008e40b0) (0xc0005c63c0) Stream added, broadcasting: 5\nI0206 13:55:32.732695    2074 log.go:172] (0xc0008e40b0) Reply frame received for 5\nI0206 13:55:32.924337    2074 log.go:172] (0xc0008e40b0) Data frame received for 5\nI0206 13:55:32.924433    2074 log.go:172] (0xc0005c63c0) (5) Data frame handling\nI0206 13:55:32.924448    2074 log.go:172] (0xc0005c63c0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0206 13:55:32.924458    2074 log.go:172] (0xc0008e40b0) Data frame received for 3\nI0206 13:55:32.924462    2074 log.go:172] (0xc0009de000) (3) Data frame handling\nI0206 13:55:32.924466    2074 log.go:172] (0xc0009de000) (3) Data frame sent\nI0206 13:55:33.062951    2074 log.go:172] (0xc0008e40b0) (0xc0005c63c0) Stream removed, broadcasting: 5\nI0206 13:55:33.063118    2074 log.go:172] (0xc0008e40b0) Data frame received for 1\nI0206 13:55:33.063158    2074 log.go:172] (0xc0008e40b0) (0xc0009de000) Stream removed, broadcasting: 3\nI0206 13:55:33.063194    2074 log.go:172] (0xc000976640) (1) Data frame handling\nI0206 13:55:33.063211    2074 log.go:172] (0xc000976640) (1) Data frame sent\nI0206 13:55:33.063226    2074 log.go:172] (0xc0008e40b0) (0xc000976640) Stream removed, broadcasting: 1\nI0206 13:55:33.063240    2074 log.go:172] (0xc0008e40b0) Go away received\nI0206 13:55:33.063519    2074 log.go:172] (0xc0008e40b0) (0xc000976640) Stream removed, broadcasting: 1\nI0206 13:55:33.063532    2074 log.go:172] (0xc0008e40b0) (0xc0009de000) Stream removed, broadcasting: 3\nI0206 13:55:33.063539    2074 log.go:172] (0xc0008e40b0) (0xc0005c63c0) Stream removed, broadcasting: 5\n"
Feb  6 13:55:33.068: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  6 13:55:33.068: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  6 13:55:33.068: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4217 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  6 13:55:33.378: INFO: stderr: "I0206 13:55:33.226541    2094 log.go:172] (0xc00094c160) (0xc00092c6e0) Create stream\nI0206 13:55:33.226693    2094 log.go:172] (0xc00094c160) (0xc00092c6e0) Stream added, broadcasting: 1\nI0206 13:55:33.229506    2094 log.go:172] (0xc00094c160) Reply frame received for 1\nI0206 13:55:33.229534    2094 log.go:172] (0xc00094c160) (0xc00059c1e0) Create stream\nI0206 13:55:33.229543    2094 log.go:172] (0xc00094c160) (0xc00059c1e0) Stream added, broadcasting: 3\nI0206 13:55:33.230596    2094 log.go:172] (0xc00094c160) Reply frame received for 3\nI0206 13:55:33.230620    2094 log.go:172] (0xc00094c160) (0xc00059c280) Create stream\nI0206 13:55:33.230628    2094 log.go:172] (0xc00094c160) (0xc00059c280) Stream added, broadcasting: 5\nI0206 13:55:33.232487    2094 log.go:172] (0xc00094c160) Reply frame received for 5\nI0206 13:55:33.304793    2094 log.go:172] (0xc00094c160) Data frame received for 3\nI0206 13:55:33.304858    2094 log.go:172] (0xc00094c160) Data frame received for 5\nI0206 13:55:33.304871    2094 log.go:172] (0xc00059c280) (5) Data frame handling\nI0206 13:55:33.304879    2094 log.go:172] (0xc00059c280) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0206 13:55:33.304891    2094 log.go:172] (0xc00059c1e0) (3) Data frame handling\nI0206 13:55:33.304897    2094 log.go:172] (0xc00059c1e0) (3) Data frame sent\nI0206 13:55:33.369576    2094 log.go:172] (0xc00094c160) (0xc00059c1e0) Stream removed, broadcasting: 3\nI0206 13:55:33.369646    2094 log.go:172] (0xc00094c160) Data frame received for 1\nI0206 13:55:33.369833    2094 log.go:172] (0xc00094c160) (0xc00059c280) Stream removed, broadcasting: 5\nI0206 13:55:33.369899    2094 log.go:172] (0xc00092c6e0) (1) Data frame handling\nI0206 13:55:33.369953    2094 log.go:172] (0xc00092c6e0) (1) Data frame sent\nI0206 13:55:33.370048    2094 log.go:172] (0xc00094c160) (0xc00092c6e0) Stream removed, broadcasting: 1\nI0206 13:55:33.370151    2094 log.go:172] (0xc00094c160) Go away received\nI0206 13:55:33.370420    2094 log.go:172] (0xc00094c160) (0xc00092c6e0) Stream removed, broadcasting: 1\nI0206 13:55:33.370439    2094 log.go:172] (0xc00094c160) (0xc00059c1e0) Stream removed, broadcasting: 3\nI0206 13:55:33.370451    2094 log.go:172] (0xc00094c160) (0xc00059c280) Stream removed, broadcasting: 5\n"
Feb  6 13:55:33.378: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  6 13:55:33.378: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  6 13:55:33.378: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4217 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  6 13:55:34.141: INFO: stderr: "I0206 13:55:33.688047    2115 log.go:172] (0xc000116dc0) (0xc0008fa780) Create stream\nI0206 13:55:33.688426    2115 log.go:172] (0xc000116dc0) (0xc0008fa780) Stream added, broadcasting: 1\nI0206 13:55:33.697079    2115 log.go:172] (0xc000116dc0) Reply frame received for 1\nI0206 13:55:33.697139    2115 log.go:172] (0xc000116dc0) (0xc000a56000) Create stream\nI0206 13:55:33.697159    2115 log.go:172] (0xc000116dc0) (0xc000a56000) Stream added, broadcasting: 3\nI0206 13:55:33.701063    2115 log.go:172] (0xc000116dc0) Reply frame received for 3\nI0206 13:55:33.701226    2115 log.go:172] (0xc000116dc0) (0xc0008fa820) Create stream\nI0206 13:55:33.701270    2115 log.go:172] (0xc000116dc0) (0xc0008fa820) Stream added, broadcasting: 5\nI0206 13:55:33.710811    2115 log.go:172] (0xc000116dc0) Reply frame received for 5\nI0206 13:55:33.901048    2115 log.go:172] (0xc000116dc0) Data frame received for 3\nI0206 13:55:33.901145    2115 log.go:172] (0xc000a56000) (3) Data frame handling\nI0206 13:55:33.901165    2115 log.go:172] (0xc000a56000) (3) Data frame sent\nI0206 13:55:33.901403    2115 log.go:172] (0xc000116dc0) Data frame received for 5\nI0206 13:55:33.901418    2115 log.go:172] (0xc0008fa820) (5) Data frame handling\nI0206 13:55:33.901446    2115 log.go:172] (0xc0008fa820) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0206 13:55:34.132417    2115 log.go:172] (0xc000116dc0) (0xc000a56000) Stream removed, broadcasting: 3\nI0206 13:55:34.132591    2115 log.go:172] (0xc000116dc0) Data frame received for 1\nI0206 13:55:34.132655    2115 log.go:172] (0xc000116dc0) (0xc0008fa820) Stream removed, broadcasting: 5\nI0206 13:55:34.132682    2115 log.go:172] (0xc0008fa780) (1) Data frame handling\nI0206 13:55:34.132714    2115 log.go:172] (0xc0008fa780) (1) Data frame sent\nI0206 13:55:34.132723    2115 log.go:172] (0xc000116dc0) (0xc0008fa780) Stream removed, broadcasting: 1\nI0206 13:55:34.132737    2115 log.go:172] (0xc000116dc0) Go away received\nI0206 13:55:34.133517    2115 log.go:172] (0xc000116dc0) (0xc0008fa780) Stream removed, broadcasting: 1\nI0206 13:55:34.133539    2115 log.go:172] (0xc000116dc0) (0xc000a56000) Stream removed, broadcasting: 3\nI0206 13:55:34.133549    2115 log.go:172] (0xc000116dc0) (0xc0008fa820) Stream removed, broadcasting: 5\n"
Feb  6 13:55:34.141: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  6 13:55:34.141: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  6 13:55:34.141: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Feb  6 13:56:15.287: INFO: Deleting all statefulset in ns statefulset-4217
Feb  6 13:56:15.294: INFO: Scaling statefulset ss to 0
Feb  6 13:56:15.312: INFO: Waiting for statefulset status.replicas to be updated to 0
Feb  6 13:56:15.316: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 13:56:15.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-4217" for this suite.
Feb  6 13:56:23.369: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:56:23.497: INFO: namespace statefulset-4217 deletion completed in 8.152980595s

• [SLOW TEST:104.641 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
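The mv dance in the exec commands above is how the test toggles pod health: the stateful pods run nginx with a readiness probe that serves /usr/share/nginx/html/index.html, so moving that file away makes the pod unready and ordered scaling halts until the file is restored. A condensed sketch of the sequence (the exec lines are taken from the log; the scale commands are assumptions, since the test drives scaling through the API):

# Make ss-0 unready: its readiness probe can no longer serve index.html.
kubectl --namespace=statefulset-4217 exec ss-0 -- \
  /bin/sh -x -c 'mv -v /usr/share/nginx/html/index.html /tmp/ || true'
kubectl --namespace=statefulset-4217 scale statefulset ss --replicas=3
# Scale-up halts: with OrderedReady pod management, ss-1 is not created
# while ss-0 is unready (the "doesn't scale past 1" checks above).
kubectl --namespace=statefulset-4217 exec ss-0 -- \
  /bin/sh -x -c 'mv -v /tmp/index.html /usr/share/nginx/html/ || true'
# ss-0 turns Ready again, then ss-1 and ss-2 are created in order.
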
SSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 13:56:23.498: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  6 13:56:23.613: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Feb  6 13:56:23.655: INFO: Number of nodes with available pods: 0
Feb  6 13:56:23.655: INFO: Node iruya-node is running more than one daemon pod
Feb  6 13:56:25.002: INFO: Number of nodes with available pods: 0
Feb  6 13:56:25.002: INFO: Node iruya-node is running more than one daemon pod
Feb  6 13:56:25.677: INFO: Number of nodes with available pods: 0
Feb  6 13:56:25.677: INFO: Node iruya-node is running more than one daemon pod
Feb  6 13:56:26.678: INFO: Number of nodes with available pods: 0
Feb  6 13:56:26.678: INFO: Node iruya-node is running more than one daemon pod
Feb  6 13:56:27.667: INFO: Number of nodes with available pods: 0
Feb  6 13:56:27.667: INFO: Node iruya-node is running more than one daemon pod
Feb  6 13:56:29.702: INFO: Number of nodes with available pods: 0
Feb  6 13:56:29.702: INFO: Node iruya-node is running more than one daemon pod
Feb  6 13:56:30.668: INFO: Number of nodes with available pods: 0
Feb  6 13:56:30.668: INFO: Node iruya-node is running more than one daemon pod
Feb  6 13:56:31.674: INFO: Number of nodes with available pods: 0
Feb  6 13:56:31.674: INFO: Node iruya-node is running more than one daemon pod
Feb  6 13:56:32.678: INFO: Number of nodes with available pods: 1
Feb  6 13:56:32.678: INFO: Node iruya-node is running more than one daemon pod
Feb  6 13:56:33.673: INFO: Number of nodes with available pods: 1
Feb  6 13:56:33.673: INFO: Node iruya-node is running more than one daemon pod
Feb  6 13:56:34.671: INFO: Number of nodes with available pods: 2
Feb  6 13:56:34.671: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Feb  6 13:56:34.705: INFO: Wrong image for pod: daemon-set-shcb4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  6 13:56:34.705: INFO: Wrong image for pod: daemon-set-zhksd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  6 13:56:35.719: INFO: Wrong image for pod: daemon-set-shcb4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  6 13:56:35.719: INFO: Wrong image for pod: daemon-set-zhksd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  6 13:56:36.723: INFO: Wrong image for pod: daemon-set-shcb4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  6 13:56:36.723: INFO: Wrong image for pod: daemon-set-zhksd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  6 13:56:37.720: INFO: Wrong image for pod: daemon-set-shcb4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  6 13:56:37.720: INFO: Wrong image for pod: daemon-set-zhksd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  6 13:56:38.739: INFO: Wrong image for pod: daemon-set-shcb4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  6 13:56:38.739: INFO: Wrong image for pod: daemon-set-zhksd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  6 13:56:39.718: INFO: Wrong image for pod: daemon-set-shcb4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  6 13:56:39.718: INFO: Wrong image for pod: daemon-set-zhksd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  6 13:56:40.721: INFO: Wrong image for pod: daemon-set-shcb4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  6 13:56:40.721: INFO: Wrong image for pod: daemon-set-zhksd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  6 13:56:41.721: INFO: Wrong image for pod: daemon-set-shcb4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  6 13:56:41.721: INFO: Wrong image for pod: daemon-set-zhksd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  6 13:56:41.721: INFO: Pod daemon-set-zhksd is not available
Feb  6 13:56:42.718: INFO: Wrong image for pod: daemon-set-shcb4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  6 13:56:42.718: INFO: Wrong image for pod: daemon-set-zhksd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  6 13:56:42.718: INFO: Pod daemon-set-zhksd is not available
Feb  6 13:56:43.721: INFO: Wrong image for pod: daemon-set-shcb4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  6 13:56:43.721: INFO: Wrong image for pod: daemon-set-zhksd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  6 13:56:43.721: INFO: Pod daemon-set-zhksd is not available
Feb  6 13:56:44.724: INFO: Wrong image for pod: daemon-set-shcb4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  6 13:56:44.724: INFO: Wrong image for pod: daemon-set-zhksd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  6 13:56:44.724: INFO: Pod daemon-set-zhksd is not available
Feb  6 13:56:45.720: INFO: Wrong image for pod: daemon-set-shcb4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  6 13:56:45.721: INFO: Wrong image for pod: daemon-set-zhksd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  6 13:56:45.721: INFO: Pod daemon-set-zhksd is not available
Feb  6 13:56:46.757: INFO: Pod daemon-set-pl7zp is not available
Feb  6 13:56:46.758: INFO: Wrong image for pod: daemon-set-shcb4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  6 13:56:47.723: INFO: Pod daemon-set-pl7zp is not available
Feb  6 13:56:47.723: INFO: Wrong image for pod: daemon-set-shcb4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  6 13:56:48.719: INFO: Pod daemon-set-pl7zp is not available
Feb  6 13:56:48.719: INFO: Wrong image for pod: daemon-set-shcb4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  6 13:56:49.731: INFO: Pod daemon-set-pl7zp is not available
Feb  6 13:56:49.731: INFO: Wrong image for pod: daemon-set-shcb4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  6 13:56:50.718: INFO: Pod daemon-set-pl7zp is not available
Feb  6 13:56:50.718: INFO: Wrong image for pod: daemon-set-shcb4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  6 13:56:51.720: INFO: Pod daemon-set-pl7zp is not available
Feb  6 13:56:51.720: INFO: Wrong image for pod: daemon-set-shcb4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  6 13:56:52.717: INFO: Pod daemon-set-pl7zp is not available
Feb  6 13:56:52.717: INFO: Wrong image for pod: daemon-set-shcb4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  6 13:56:53.722: INFO: Pod daemon-set-pl7zp is not available
Feb  6 13:56:53.722: INFO: Wrong image for pod: daemon-set-shcb4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  6 13:56:54.725: INFO: Wrong image for pod: daemon-set-shcb4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  6 13:56:55.721: INFO: Wrong image for pod: daemon-set-shcb4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  6 13:56:56.719: INFO: Wrong image for pod: daemon-set-shcb4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  6 13:56:57.724: INFO: Wrong image for pod: daemon-set-shcb4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  6 13:56:58.721: INFO: Wrong image for pod: daemon-set-shcb4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  6 13:56:58.721: INFO: Pod daemon-set-shcb4 is not available
Feb  6 13:56:59.722: INFO: Wrong image for pod: daemon-set-shcb4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  6 13:56:59.722: INFO: Pod daemon-set-shcb4 is not available
Feb  6 13:57:00.720: INFO: Wrong image for pod: daemon-set-shcb4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  6 13:57:00.720: INFO: Pod daemon-set-shcb4 is not available
Feb  6 13:57:01.719: INFO: Wrong image for pod: daemon-set-shcb4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  6 13:57:01.719: INFO: Pod daemon-set-shcb4 is not available
Feb  6 13:57:02.719: INFO: Wrong image for pod: daemon-set-shcb4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  6 13:57:02.719: INFO: Pod daemon-set-shcb4 is not available
Feb  6 13:57:03.728: INFO: Wrong image for pod: daemon-set-shcb4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  6 13:57:03.728: INFO: Pod daemon-set-shcb4 is not available
Feb  6 13:57:04.723: INFO: Wrong image for pod: daemon-set-shcb4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  6 13:57:04.723: INFO: Pod daemon-set-shcb4 is not available
Feb  6 13:57:05.724: INFO: Wrong image for pod: daemon-set-shcb4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  6 13:57:05.724: INFO: Pod daemon-set-shcb4 is not available
Feb  6 13:57:06.719: INFO: Wrong image for pod: daemon-set-shcb4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  6 13:57:06.719: INFO: Pod daemon-set-shcb4 is not available
Feb  6 13:57:07.719: INFO: Wrong image for pod: daemon-set-shcb4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  6 13:57:07.719: INFO: Pod daemon-set-shcb4 is not available
Feb  6 13:57:09.916: INFO: Pod daemon-set-zxm9v is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Feb  6 13:57:09.948: INFO: Number of nodes with available pods: 1
Feb  6 13:57:09.948: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  6 13:57:12.224: INFO: Number of nodes with available pods: 1
Feb  6 13:57:12.224: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  6 13:57:12.964: INFO: Number of nodes with available pods: 1
Feb  6 13:57:12.964: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  6 13:57:13.966: INFO: Number of nodes with available pods: 1
Feb  6 13:57:13.966: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  6 13:57:15.663: INFO: Number of nodes with available pods: 1
Feb  6 13:57:15.663: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  6 13:57:15.966: INFO: Number of nodes with available pods: 1
Feb  6 13:57:15.966: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  6 13:57:16.973: INFO: Number of nodes with available pods: 1
Feb  6 13:57:16.973: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  6 13:57:18.014: INFO: Number of nodes with available pods: 2
Feb  6 13:57:18.014: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5857, will wait for the garbage collector to delete the pods
Feb  6 13:57:18.098: INFO: Deleting DaemonSet.extensions daemon-set took: 10.758356ms
Feb  6 13:57:18.399: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.479699ms
Feb  6 13:57:27.938: INFO: Number of nodes with available pods: 0
Feb  6 13:57:27.938: INFO: Number of running nodes: 0, number of available pods: 0
Feb  6 13:57:27.944: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5857/daemonsets","resourceVersion":"23324310"},"items":null}

Feb  6 13:57:27.949: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5857/pods","resourceVersion":"23324310"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 13:57:27.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5857" for this suite.
Feb  6 13:57:34.017: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:57:34.112: INFO: namespace daemonsets-5857 deletion completed in 6.121351143s

• [SLOW TEST:70.614 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
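The image flip recorded above (docker.io/library/nginx:1.14-alpine to gcr.io/kubernetes-e2e-test-images/redis:1.0) is an ordinary RollingUpdate: changing the pod template replaces the daemon pods node by node, which is why each node's old pod goes "not available" before its replacement appears. A sketch of the equivalent kubectl flow; the container name "app" is an assumption, and the test patches the object through the API rather than via kubectl:

# Ensure the update strategy is RollingUpdate.
kubectl -n daemonsets-5857 patch daemonset daemon-set --type merge \
  -p '{"spec":{"updateStrategy":{"type":"RollingUpdate"}}}'
# Changing the template image triggers the node-by-node replacement above.
kubectl -n daemonsets-5857 set image daemonset/daemon-set \
  app=gcr.io/kubernetes-e2e-test-images/redis:1.0
kubectl -n daemonsets-5857 rollout status daemonset/daemon-set
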
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 13:57:34.112: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-a173476b-eaf8-45ba-96a2-c2c718abe4ee
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-a173476b-eaf8-45ba-96a2-c2c718abe4ee
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 13:58:51.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2812" for this suite.
Feb  6 13:59:13.853: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:59:13.967: INFO: namespace configmap-2812 deletion completed in 22.144025751s

• [SLOW TEST:99.855 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
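What the test observes between "Updating configmap" and "waiting to observe update in volume" is kubelet behavior, not a pod restart: configMap volumes are refreshed on the kubelet's periodic sync, so an updated value eventually shows up in the mounted file of a running pod, which is why the test has to wait. A sketch with hypothetical names and values:

kubectl -n configmap-2812 create configmap configmap-test-upd-example \
  --from-literal=data-1=value-1
# ...mount it as a configMap volume in a pod, then change the data in place:
kubectl -n configmap-2812 patch configmap configmap-test-upd-example \
  -p '{"data":{"data-1":"value-2"}}'
# The kubelet refreshes mounted configMap volumes on its periodic sync, so
# the pod sees value-2 in the file after a delay, with no restart involved.
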
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 13:59:13.968: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  6 13:59:14.073: INFO: Creating ReplicaSet my-hostname-basic-e8f7a52a-917c-4f78-8081-c3f094eda299
Feb  6 13:59:14.118: INFO: Pod name my-hostname-basic-e8f7a52a-917c-4f78-8081-c3f094eda299: Found 0 pods out of 1
Feb  6 13:59:19.242: INFO: Pod name my-hostname-basic-e8f7a52a-917c-4f78-8081-c3f094eda299: Found 1 pods out of 1
Feb  6 13:59:19.242: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-e8f7a52a-917c-4f78-8081-c3f094eda299" is running
Feb  6 13:59:23.254: INFO: Pod "my-hostname-basic-e8f7a52a-917c-4f78-8081-c3f094eda299-f889z" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-06 13:59:14 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-06 13:59:14 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-e8f7a52a-917c-4f78-8081-c3f094eda299]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-06 13:59:14 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-e8f7a52a-917c-4f78-8081-c3f094eda299]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-06 13:59:14 +0000 UTC Reason: Message:}])
Feb  6 13:59:23.255: INFO: Trying to dial the pod
Feb  6 13:59:28.300: INFO: Controller my-hostname-basic-e8f7a52a-917c-4f78-8081-c3f094eda299: Got expected result from replica 1 [my-hostname-basic-e8f7a52a-917c-4f78-8081-c3f094eda299-f889z]: "my-hostname-basic-e8f7a52a-917c-4f78-8081-c3f094eda299-f889z", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 13:59:28.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-9552" for this suite.
Feb  6 13:59:34.341: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:59:34.427: INFO: namespace replicaset-9552 deletion completed in 6.115514717s

• [SLOW TEST:20.459 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
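A minimal sketch of the kind of ReplicaSet the test creates: one replica of a public serve-hostname-style image, where dialing the pod is expected to return that pod's own name, which is exactly the check logged above. The image, port, and labels are assumptions; only the name pattern comes from the log.

kubectl -n replicaset-9552 apply -f - <<'EOF'
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic-example     # the test appends a generated UUID
spec:
  replicas: 1
  selector:
    matchLabels:
      name: my-hostname-basic-example
  template:
    metadata:
      labels:
        name: my-hostname-basic-example
    spec:
      containers:
      - name: my-hostname-basic-example
        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1   # assumed image
        ports:
        - containerPort: 9376         # assumed port; the server replies with the pod name
EOF
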
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 13:59:34.428: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name projected-secret-test-5a23adfa-df50-4879-81d8-0ca9a876ca52
STEP: Creating a pod to test consume secrets
Feb  6 13:59:34.544: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9f24fb10-41d2-4092-b074-a25978176bf8" in namespace "projected-1423" to be "success or failure"
Feb  6 13:59:34.554: INFO: Pod "pod-projected-secrets-9f24fb10-41d2-4092-b074-a25978176bf8": Phase="Pending", Reason="", readiness=false. Elapsed: 9.448878ms
Feb  6 13:59:36.568: INFO: Pod "pod-projected-secrets-9f24fb10-41d2-4092-b074-a25978176bf8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023807178s
Feb  6 13:59:38.576: INFO: Pod "pod-projected-secrets-9f24fb10-41d2-4092-b074-a25978176bf8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03235839s
Feb  6 13:59:40.589: INFO: Pod "pod-projected-secrets-9f24fb10-41d2-4092-b074-a25978176bf8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044551175s
Feb  6 13:59:42.601: INFO: Pod "pod-projected-secrets-9f24fb10-41d2-4092-b074-a25978176bf8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.057035473s
Feb  6 13:59:44.611: INFO: Pod "pod-projected-secrets-9f24fb10-41d2-4092-b074-a25978176bf8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.066468651s
Feb  6 13:59:46.628: INFO: Pod "pod-projected-secrets-9f24fb10-41d2-4092-b074-a25978176bf8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.083822345s
STEP: Saw pod success
Feb  6 13:59:46.628: INFO: Pod "pod-projected-secrets-9f24fb10-41d2-4092-b074-a25978176bf8" satisfied condition "success or failure"
Feb  6 13:59:46.635: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-9f24fb10-41d2-4092-b074-a25978176bf8 container secret-volume-test: 
STEP: delete the pod
Feb  6 13:59:46.735: INFO: Waiting for pod pod-projected-secrets-9f24fb10-41d2-4092-b074-a25978176bf8 to disappear
Feb  6 13:59:46.749: INFO: Pod pod-projected-secrets-9f24fb10-41d2-4092-b074-a25978176bf8 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 13:59:46.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1423" for this suite.
Feb  6 13:59:52.812: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 13:59:52.939: INFO: namespace projected-1423 deletion completed in 6.182133375s

• [SLOW TEST:18.512 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
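Note: the projected-secret test mounts one secret into two volumes of the same pod and expects both mount points to serve identical content. A minimal sketch of that wiring (image and command are illustrative assumptions; the suite uses its own mounttest image):

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // projectedSecretVolume builds one projected volume backed by the named secret.
    func projectedSecretVolume(volName, secretName string) corev1.Volume {
        return corev1.Volume{
            Name: volName,
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{{
                        Secret: &corev1.SecretProjection{
                            LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
                        },
                    }},
                },
            },
        }
    }

    // createTwoVolumeSecretPod mounts the same secret at two paths in one container.
    func createTwoVolumeSecretPod(cs *kubernetes.Clientset, secretName string) (*corev1.Pod, error) {
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Volumes: []corev1.Volume{
                    projectedSecretVolume("projected-secret-volume-1", secretName),
                    projectedSecretVolume("projected-secret-volume-2", secretName),
                },
                Containers: []corev1.Container{{
                    Name:    "secret-volume-test",
                    Image:   "busybox:1.29", // assumed
                    Command: []string{"sh", "-c", "cat /etc/secret-volume-1/* /etc/secret-volume-2/*"},
                    VolumeMounts: []corev1.VolumeMount{
                        {Name: "projected-secret-volume-1", MountPath: "/etc/secret-volume-1", ReadOnly: true},
                        {Name: "projected-secret-volume-2", MountPath: "/etc/secret-volume-2", ReadOnly: true},
                    },
                }},
            },
        }
        return cs.CoreV1().Pods("projected-1423").Create(pod)
    }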
SSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 13:59:52.940: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179
[It] should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 13:59:53.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9330" for this suite.
Feb  6 14:00:15.395: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:00:15.495: INFO: namespace pods-9330 deletion completed in 22.2335377s

• [SLOW TEST:22.556 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
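Note: the QOS test relies on the API server computing status.qosClass at admission: requests equal to limits for every container yields Guaranteed, requests below limits yields Burstable, and no requests or limits yields BestEffort. A minimal sketch that creates such a pod and reads the class straight off the returned object (namespace from the log; image arbitrary):

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // createGuaranteedPod creates a pod whose requests equal its limits, so the
    // API server stamps status.qosClass=Guaranteed before the kubelet runs it.
    func createGuaranteedPod(cs *kubernetes.Clientset) (corev1.PodQOSClass, error) {
        res := corev1.ResourceList{
            corev1.ResourceCPU:    resource.MustParse("100m"),
            corev1.ResourceMemory: resource.MustParse("100Mi"),
        }
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "qos-demo"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:      "qos-demo",
                    Image:     "docker.io/library/nginx:1.14-alpine",
                    Resources: corev1.ResourceRequirements{Requests: res, Limits: res},
                }},
            },
        }
        created, err := cs.CoreV1().Pods("pods-9330").Create(pod)
        if err != nil {
            return "", err
        }
        return created.Status.QOSClass, nil
    }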
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:00:15.496: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-671bec13-c0a5-4364-8448-91d4bf911ffc
STEP: Creating a pod to test consume configMaps
Feb  6 14:00:15.660: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c14af3ea-dea3-49ac-aef6-fb82cde573b5" in namespace "projected-5516" to be "success or failure"
Feb  6 14:00:15.668: INFO: Pod "pod-projected-configmaps-c14af3ea-dea3-49ac-aef6-fb82cde573b5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.358028ms
Feb  6 14:00:17.675: INFO: Pod "pod-projected-configmaps-c14af3ea-dea3-49ac-aef6-fb82cde573b5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015509776s
Feb  6 14:00:19.687: INFO: Pod "pod-projected-configmaps-c14af3ea-dea3-49ac-aef6-fb82cde573b5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027736973s
Feb  6 14:00:21.699: INFO: Pod "pod-projected-configmaps-c14af3ea-dea3-49ac-aef6-fb82cde573b5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039564264s
Feb  6 14:00:23.710: INFO: Pod "pod-projected-configmaps-c14af3ea-dea3-49ac-aef6-fb82cde573b5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.049959354s
Feb  6 14:00:25.719: INFO: Pod "pod-projected-configmaps-c14af3ea-dea3-49ac-aef6-fb82cde573b5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.059686906s
STEP: Saw pod success
Feb  6 14:00:25.719: INFO: Pod "pod-projected-configmaps-c14af3ea-dea3-49ac-aef6-fb82cde573b5" satisfied condition "success or failure"
Feb  6 14:00:25.724: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-c14af3ea-dea3-49ac-aef6-fb82cde573b5 container projected-configmap-volume-test: 
STEP: delete the pod
Feb  6 14:00:25.829: INFO: Waiting for pod pod-projected-configmaps-c14af3ea-dea3-49ac-aef6-fb82cde573b5 to disappear
Feb  6 14:00:25.836: INFO: Pod pod-projected-configmaps-c14af3ea-dea3-49ac-aef6-fb82cde573b5 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:00:25.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5516" for this suite.
Feb  6 14:00:31.872: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:00:31.983: INFO: namespace projected-5516 deletion completed in 6.139670122s

• [SLOW TEST:16.487 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
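Note: the defaultMode variant sets the file mode once on the projected volume; every projected file inherits it unless an individual item overrides it. A sketch of the volume source (0400 is an illustrative mode; the pod scaffolding around it matches the secret example above):

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
    )

    // configMapVolumeWithMode projects the named ConfigMap and applies one
    // file mode to everything it emits.
    func configMapVolumeWithMode(volName, cmName string) corev1.Volume {
        mode := int32(0400) // illustrative; the test asserts whatever mode it set
        return corev1.Volume{
            Name: volName,
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    DefaultMode: &mode,
                    Sources: []corev1.VolumeProjection{{
                        ConfigMap: &corev1.ConfigMapProjection{
                            LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
                        },
                    }},
                },
            },
        }
    }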
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:00:31.984: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:00:40.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-6811" for this suite.
Feb  6 14:00:46.208: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:00:46.467: INFO: namespace emptydir-wrapper-6811 deletion completed in 6.3016086s

• [SLOW TEST:14.483 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:00:46.467: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  6 14:00:46.537: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dc021462-9f6c-4061-978b-4f32955a437d" in namespace "downward-api-2875" to be "success or failure"
Feb  6 14:00:46.611: INFO: Pod "downwardapi-volume-dc021462-9f6c-4061-978b-4f32955a437d": Phase="Pending", Reason="", readiness=false. Elapsed: 74.202797ms
Feb  6 14:00:48.627: INFO: Pod "downwardapi-volume-dc021462-9f6c-4061-978b-4f32955a437d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.090495101s
Feb  6 14:00:50.642: INFO: Pod "downwardapi-volume-dc021462-9f6c-4061-978b-4f32955a437d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.104971817s
Feb  6 14:00:52.654: INFO: Pod "downwardapi-volume-dc021462-9f6c-4061-978b-4f32955a437d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.11707818s
Feb  6 14:00:54.673: INFO: Pod "downwardapi-volume-dc021462-9f6c-4061-978b-4f32955a437d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.136436556s
Feb  6 14:00:56.687: INFO: Pod "downwardapi-volume-dc021462-9f6c-4061-978b-4f32955a437d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.150299711s
STEP: Saw pod success
Feb  6 14:00:56.687: INFO: Pod "downwardapi-volume-dc021462-9f6c-4061-978b-4f32955a437d" satisfied condition "success or failure"
Feb  6 14:00:56.692: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-dc021462-9f6c-4061-978b-4f32955a437d container client-container: 
STEP: delete the pod
Feb  6 14:00:56.757: INFO: Waiting for pod downwardapi-volume-dc021462-9f6c-4061-978b-4f32955a437d to disappear
Feb  6 14:00:56.787: INFO: Pod downwardapi-volume-dc021462-9f6c-4061-978b-4f32955a437d no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:00:56.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2875" for this suite.
Feb  6 14:01:02.811: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:01:02.914: INFO: namespace downward-api-2875 deletion completed in 6.12105016s

• [SLOW TEST:16.447 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
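Note: a downward-API volume exposes container resources as files; when the container declares no memory limit, a resourceFieldRef for limits.memory resolves to the node's allocatable memory, which is what this test asserts. A sketch of the relevant volume (the container name must match the consuming container):

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
    )

    // downwardMemoryLimitVolume exposes limits.memory as a file. With no limit
    // set on "client-container", the kubelet substitutes node allocatable memory.
    func downwardMemoryLimitVolume() corev1.Volume {
        return corev1.Volume{
            Name: "podinfo",
            VolumeSource: corev1.VolumeSource{
                DownwardAPI: &corev1.DownwardAPIVolumeSource{
                    Items: []corev1.DownwardAPIVolumeFile{{
                        Path: "memory_limit",
                        ResourceFieldRef: &corev1.ResourceFieldSelector{
                            ContainerName: "client-container",
                            Resource:      "limits.memory",
                        },
                    }},
                },
            },
        }
    }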
SSSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:01:02.915: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap that has name configmap-test-emptyKey-7a65470d-9a7f-4bec-895a-fb181911c6c3
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:01:02.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-845" for this suite.
Feb  6 14:01:09.039: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:01:09.165: INFO: namespace configmap-845 deletion completed in 6.183947647s

• [SLOW TEST:6.251 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
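Note: the empty-key test is pure server-side validation: Create never succeeds, so there is no pod or volume to clean up, which is why the block above is so short. A sketch of the expected failure (namespace from the log):

    package sketch

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // expectEmptyKeyRejected asserts the API server refuses a ConfigMap whose
    // Data map contains an empty key.
    func expectEmptyKeyRejected(cs *kubernetes.Clientset) error {
        cm := &corev1.ConfigMap{
            ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-emptykey"},
            Data:       map[string]string{"": "value"},
        }
        _, err := cs.CoreV1().ConfigMaps("configmap-845").Create(cm)
        if errors.IsInvalid(err) {
            return nil // expected: empty keys fail validation
        }
        return fmt.Errorf("expected Invalid error, got: %v", err)
    }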
SSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:01:09.165: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: expected 0 pods, got 1 pods
STEP: Gathering metrics
W0206 14:01:13.042384       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  6 14:01:13.042: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:01:13.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9356" for this suite.
Feb  6 14:01:19.079: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:01:19.164: INFO: namespace gc-9356 deletion completed in 6.118628971s

• [SLOW TEST:9.998 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
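Note: the behaviour under test hangs off deleteOptions.propagationPolicy: Orphan leaves the ReplicaSet behind (the earlier GC test), Background deletes dependents asynchronously (this test's "expected 0 rs, got 1 rs" retries are the collector catching up), and Foreground blocks the owner's disappearance on its dependents. A sketch of the delete call (v1.15 Delete takes a name plus options, no context):

    package sketch

    import (
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // deleteDeploymentCascading removes a Deployment and lets the garbage
    // collector clean up its ReplicaSets and pods in the background.
    func deleteDeploymentCascading(cs *kubernetes.Clientset, name string) error {
        policy := metav1.DeletePropagationBackground
        return cs.AppsV1().Deployments("gc-9356").Delete(name, &metav1.DeleteOptions{
            PropagationPolicy: &policy,
        })
    }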
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:01:19.164: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Feb  6 14:01:19.248: INFO: Waiting up to 5m0s for pod "pod-e90b7083-4ef7-4caa-a2a7-32f485a3384d" in namespace "emptydir-9668" to be "success or failure"
Feb  6 14:01:19.319: INFO: Pod "pod-e90b7083-4ef7-4caa-a2a7-32f485a3384d": Phase="Pending", Reason="", readiness=false. Elapsed: 71.031036ms
Feb  6 14:01:21.326: INFO: Pod "pod-e90b7083-4ef7-4caa-a2a7-32f485a3384d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0780379s
Feb  6 14:01:23.336: INFO: Pod "pod-e90b7083-4ef7-4caa-a2a7-32f485a3384d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.087913084s
Feb  6 14:01:25.343: INFO: Pod "pod-e90b7083-4ef7-4caa-a2a7-32f485a3384d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.09506832s
Feb  6 14:01:27.352: INFO: Pod "pod-e90b7083-4ef7-4caa-a2a7-32f485a3384d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.103765787s
STEP: Saw pod success
Feb  6 14:01:27.352: INFO: Pod "pod-e90b7083-4ef7-4caa-a2a7-32f485a3384d" satisfied condition "success or failure"
Feb  6 14:01:27.355: INFO: Trying to get logs from node iruya-node pod pod-e90b7083-4ef7-4caa-a2a7-32f485a3384d container test-container: 
STEP: delete the pod
Feb  6 14:01:27.439: INFO: Waiting for pod pod-e90b7083-4ef7-4caa-a2a7-32f485a3384d to disappear
Feb  6 14:01:27.453: INFO: Pod pod-e90b7083-4ef7-4caa-a2a7-32f485a3384d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:01:27.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9668" for this suite.
Feb  6 14:01:33.525: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:01:33.670: INFO: namespace emptydir-9668 deletion completed in 6.204353551s

• [SLOW TEST:14.506 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
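Note: the emptyDir permission matrix ((non-)root × mode × medium) varies three knobs on one pod shape: the volume medium (unset = node disk, "Memory" = tmpfs), the mode the test container creates files with, and the user it runs as. A sketch of the non-root, default-medium case (image and command are assumptions standing in for the suite's mounttest image):

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // nonRootEmptyDirPod runs as UID 1001 with an emptyDir on the node's
    // default medium; setting Medium to "Memory" gives the tmpfs variants.
    func nonRootEmptyDirPod() *corev1.Pod {
        uid := int64(1001)
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Volumes: []corev1.Volume{{
                    Name:         "test-volume",
                    VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
                }},
                Containers: []corev1.Container{{
                    Name:            "test-container",
                    Image:           "busybox:1.29", // assumed
                    Command:         []string{"sh", "-c", "ls -ld /test-volume"},
                    SecurityContext: &corev1.SecurityContext{RunAsUser: &uid},
                    VolumeMounts:    []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
                }},
            },
        }
    }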
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:01:33.670: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Starting the proxy
Feb  6 14:01:33.822: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix217513839/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:01:33.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-979" for this suite.
Feb  6 14:01:39.996: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:01:40.107: INFO: namespace kubectl-979 deletion completed in 6.137655806s

• [SLOW TEST:6.437 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
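Note: once kubectl proxy listens on a unix socket, any HTTP client that can dial the socket reaches the API through it; the URL host is a placeholder the proxy ignores. A small standalone sketch (the socket path is the per-run temp path from the log above):

    package main

    import (
        "context"
        "fmt"
        "io/ioutil"
        "net"
        "net/http"
    )

    func main() {
        socket := "/tmp/kubectl-proxy-unix217513839/test" // per-run temp path
        client := &http.Client{
            Transport: &http.Transport{
                // Route every request over the proxy's unix socket.
                DialContext: func(ctx context.Context, network, addr string) (net.Conn, error) {
                    return net.Dial("unix", socket)
                },
            },
        }
        resp, err := client.Get("http://localhost/api/") // host is ignored
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := ioutil.ReadAll(resp.Body)
        fmt.Println(string(body))
    }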
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:01:40.108: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  6 14:01:40.235: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f422daa9-4033-47d2-b1be-dbbca2875dfc" in namespace "projected-1669" to be "success or failure"
Feb  6 14:01:40.268: INFO: Pod "downwardapi-volume-f422daa9-4033-47d2-b1be-dbbca2875dfc": Phase="Pending", Reason="", readiness=false. Elapsed: 32.859718ms
Feb  6 14:01:42.279: INFO: Pod "downwardapi-volume-f422daa9-4033-47d2-b1be-dbbca2875dfc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044059983s
Feb  6 14:01:44.295: INFO: Pod "downwardapi-volume-f422daa9-4033-47d2-b1be-dbbca2875dfc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059980304s
Feb  6 14:01:46.304: INFO: Pod "downwardapi-volume-f422daa9-4033-47d2-b1be-dbbca2875dfc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.069463424s
Feb  6 14:01:48.316: INFO: Pod "downwardapi-volume-f422daa9-4033-47d2-b1be-dbbca2875dfc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.081538841s
Feb  6 14:01:50.331: INFO: Pod "downwardapi-volume-f422daa9-4033-47d2-b1be-dbbca2875dfc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.095928562s
STEP: Saw pod success
Feb  6 14:01:50.331: INFO: Pod "downwardapi-volume-f422daa9-4033-47d2-b1be-dbbca2875dfc" satisfied condition "success or failure"
Feb  6 14:01:50.337: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-f422daa9-4033-47d2-b1be-dbbca2875dfc container client-container: 
STEP: delete the pod
Feb  6 14:01:51.041: INFO: Waiting for pod downwardapi-volume-f422daa9-4033-47d2-b1be-dbbca2875dfc to disappear
Feb  6 14:01:51.052: INFO: Pod downwardapi-volume-f422daa9-4033-47d2-b1be-dbbca2875dfc no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:01:51.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1669" for this suite.
Feb  6 14:01:57.079: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:01:57.264: INFO: namespace projected-1669 deletion completed in 6.205285574s

• [SLOW TEST:17.157 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:01:57.265: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Feb  6 14:01:57.353: INFO: Waiting up to 5m0s for pod "pod-3fd7bd5a-cce9-4654-bcd7-0a2d6a47071a" in namespace "emptydir-2454" to be "success or failure"
Feb  6 14:01:57.442: INFO: Pod "pod-3fd7bd5a-cce9-4654-bcd7-0a2d6a47071a": Phase="Pending", Reason="", readiness=false. Elapsed: 88.669281ms
Feb  6 14:01:59.453: INFO: Pod "pod-3fd7bd5a-cce9-4654-bcd7-0a2d6a47071a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099227396s
Feb  6 14:02:01.460: INFO: Pod "pod-3fd7bd5a-cce9-4654-bcd7-0a2d6a47071a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.106967683s
Feb  6 14:02:03.471: INFO: Pod "pod-3fd7bd5a-cce9-4654-bcd7-0a2d6a47071a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.11733615s
Feb  6 14:02:05.479: INFO: Pod "pod-3fd7bd5a-cce9-4654-bcd7-0a2d6a47071a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.125767501s
Feb  6 14:02:07.488: INFO: Pod "pod-3fd7bd5a-cce9-4654-bcd7-0a2d6a47071a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.134479354s
STEP: Saw pod success
Feb  6 14:02:07.488: INFO: Pod "pod-3fd7bd5a-cce9-4654-bcd7-0a2d6a47071a" satisfied condition "success or failure"
Feb  6 14:02:07.493: INFO: Trying to get logs from node iruya-node pod pod-3fd7bd5a-cce9-4654-bcd7-0a2d6a47071a container test-container: 
STEP: delete the pod
Feb  6 14:02:07.618: INFO: Waiting for pod pod-3fd7bd5a-cce9-4654-bcd7-0a2d6a47071a to disappear
Feb  6 14:02:07.636: INFO: Pod pod-3fd7bd5a-cce9-4654-bcd7-0a2d6a47071a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:02:07.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2454" for this suite.
Feb  6 14:02:13.677: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:02:13.826: INFO: namespace emptydir-2454 deletion completed in 6.182016808s

• [SLOW TEST:16.561 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:02:13.828: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test env composition
Feb  6 14:02:13.999: INFO: Waiting up to 5m0s for pod "var-expansion-acb414f8-6176-4d90-8495-1ca313816426" in namespace "var-expansion-917" to be "success or failure"
Feb  6 14:02:14.070: INFO: Pod "var-expansion-acb414f8-6176-4d90-8495-1ca313816426": Phase="Pending", Reason="", readiness=false. Elapsed: 71.602928ms
Feb  6 14:02:16.078: INFO: Pod "var-expansion-acb414f8-6176-4d90-8495-1ca313816426": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078920504s
Feb  6 14:02:18.102: INFO: Pod "var-expansion-acb414f8-6176-4d90-8495-1ca313816426": Phase="Pending", Reason="", readiness=false. Elapsed: 4.103746838s
Feb  6 14:02:20.111: INFO: Pod "var-expansion-acb414f8-6176-4d90-8495-1ca313816426": Phase="Pending", Reason="", readiness=false. Elapsed: 6.1119435s
Feb  6 14:02:22.120: INFO: Pod "var-expansion-acb414f8-6176-4d90-8495-1ca313816426": Phase="Pending", Reason="", readiness=false. Elapsed: 8.121064889s
Feb  6 14:02:24.128: INFO: Pod "var-expansion-acb414f8-6176-4d90-8495-1ca313816426": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.12926105s
STEP: Saw pod success
Feb  6 14:02:24.128: INFO: Pod "var-expansion-acb414f8-6176-4d90-8495-1ca313816426" satisfied condition "success or failure"
Feb  6 14:02:24.134: INFO: Trying to get logs from node iruya-node pod var-expansion-acb414f8-6176-4d90-8495-1ca313816426 container dapi-container: 
STEP: delete the pod
Feb  6 14:02:24.270: INFO: Waiting for pod var-expansion-acb414f8-6176-4d90-8495-1ca313816426 to disappear
Feb  6 14:02:24.284: INFO: Pod var-expansion-acb414f8-6176-4d90-8495-1ca313816426 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:02:24.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-917" for this suite.
Feb  6 14:02:30.328: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:02:30.518: INFO: namespace var-expansion-917 deletion completed in 6.224967936s

• [SLOW TEST:16.690 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
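Note: env composition uses the $(VAR) syntax, expanded by the kubelet from variables defined earlier in the same container; an unresolvable reference is left verbatim rather than erroring. A sketch of the container shape the test builds (image assumed):

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
    )

    // composedEnvContainer prints FOOBAR, which the kubelet expands to
    // "foo-value;;bar-value" before the container starts.
    func composedEnvContainer() corev1.Container {
        return corev1.Container{
            Name:    "dapi-container",
            Image:   "busybox:1.29", // assumed
            Command: []string{"sh", "-c", "echo $FOOBAR"},
            Env: []corev1.EnvVar{
                {Name: "FOO", Value: "foo-value"},
                {Name: "BAR", Value: "bar-value"},
                {Name: "FOOBAR", Value: "$(FOO);;$(BAR)"},
            },
        }
    }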
SSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:02:30.518: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Feb  6 14:02:30.653: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:02:43.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-5748" for this suite.
Feb  6 14:02:49.924: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:02:50.038: INFO: namespace init-container-5748 deletion completed in 6.143816896s

• [SLOW TEST:19.520 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
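Note: with restartPolicy Never a failing init container is terminal: the pod goes straight to phase Failed and the app containers never start, which is the ~13-second gap between pod creation and teardown above. A sketch of the pod shape (images assumed):

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // failingInitPod never runs "run1": "init1" exits non-zero and, with
    // RestartPolicy Never, the pod is marked Failed instead of retrying.
    func failingInitPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-init-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                InitContainers: []corev1.Container{{
                    Name:    "init1",
                    Image:   "busybox:1.29", // assumed
                    Command: []string{"/bin/false"},
                }},
                Containers: []corev1.Container{{
                    Name:    "run1",
                    Image:   "busybox:1.29", // assumed
                    Command: []string{"/bin/true"},
                }},
            },
        }
    }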
SS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:02:50.038: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Feb  6 14:02:50.208: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-5749,SelfLink:/api/v1/namespaces/watch-5749/configmaps/e2e-watch-test-resource-version,UID:2da8e762-a943-437b-a581-352ea93816e7,ResourceVersion:23325122,Generation:0,CreationTimestamp:2020-02-06 14:02:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb  6 14:02:50.208: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-5749,SelfLink:/api/v1/namespaces/watch-5749/configmaps/e2e-watch-test-resource-version,UID:2da8e762-a943-437b-a581-352ea93816e7,ResourceVersion:23325123,Generation:0,CreationTimestamp:2020-02-06 14:02:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:02:50.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5749" for this suite.
Feb  6 14:02:56.305: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:02:56.449: INFO: namespace watch-5749 deletion completed in 6.236071248s

• [SLOW TEST:6.410 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
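Note: the watch test exploits resourceVersion semantics: a watch opened at the version returned by the first update replays everything after that version, so only the second MODIFIED and the DELETED events arrive, exactly as logged. A sketch (v1.15 Watch takes ListOptions by value):

    package sketch

    import (
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // watchConfigMapFrom replays all changes to one ConfigMap that happened
    // after the given resourceVersion.
    func watchConfigMapFrom(cs *kubernetes.Clientset, rv string) error {
        w, err := cs.CoreV1().ConfigMaps("watch-5749").Watch(metav1.ListOptions{
            ResourceVersion: rv, // version returned by the first update
            FieldSelector:   "metadata.name=e2e-watch-test-resource-version",
        })
        if err != nil {
            return err
        }
        defer w.Stop()
        for ev := range w.ResultChan() {
            fmt.Printf("Got : %s %T\n", ev.Type, ev.Object)
        }
        return nil
    }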
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:02:56.449: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Feb  6 14:02:56.607: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3584'
Feb  6 14:02:58.648: INFO: stderr: ""
Feb  6 14:02:58.648: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Feb  6 14:02:59.660: INFO: Selector matched 1 pods for map[app:redis]
Feb  6 14:02:59.660: INFO: Found 0 / 1
Feb  6 14:03:00.657: INFO: Selector matched 1 pods for map[app:redis]
Feb  6 14:03:00.657: INFO: Found 0 / 1
Feb  6 14:03:01.661: INFO: Selector matched 1 pods for map[app:redis]
Feb  6 14:03:01.661: INFO: Found 0 / 1
Feb  6 14:03:02.668: INFO: Selector matched 1 pods for map[app:redis]
Feb  6 14:03:02.668: INFO: Found 0 / 1
Feb  6 14:03:03.659: INFO: Selector matched 1 pods for map[app:redis]
Feb  6 14:03:03.659: INFO: Found 0 / 1
Feb  6 14:03:04.663: INFO: Selector matched 1 pods for map[app:redis]
Feb  6 14:03:04.663: INFO: Found 0 / 1
Feb  6 14:03:05.663: INFO: Selector matched 1 pods for map[app:redis]
Feb  6 14:03:05.663: INFO: Found 0 / 1
Feb  6 14:03:06.659: INFO: Selector matched 1 pods for map[app:redis]
Feb  6 14:03:06.659: INFO: Found 0 / 1
Feb  6 14:03:07.670: INFO: Selector matched 1 pods for map[app:redis]
Feb  6 14:03:07.670: INFO: Found 1 / 1
Feb  6 14:03:07.670: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Feb  6 14:03:07.677: INFO: Selector matched 1 pods for map[app:redis]
Feb  6 14:03:07.677: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Feb  6 14:03:07.678: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-kgmmr --namespace=kubectl-3584 -p {"metadata":{"annotations":{"x":"y"}}}'
Feb  6 14:03:07.810: INFO: stderr: ""
Feb  6 14:03:07.810: INFO: stdout: "pod/redis-master-kgmmr patched\n"
STEP: checking annotations
Feb  6 14:03:07.826: INFO: Selector matched 1 pods for map[app:redis]
Feb  6 14:03:07.826: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:03:07.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3584" for this suite.
Feb  6 14:03:29.862: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:03:30.002: INFO: namespace kubectl-3584 deletion completed in 22.168951964s

• [SLOW TEST:33.553 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
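Note: the kubectl patch above is a strategic-merge patch; the same annotation can be applied through the client library directly. A sketch (v1.15 Patch signature: name, patch type, bytes):

    package sketch

    import (
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
    )

    // annotatePod applies {"metadata":{"annotations":{"x":"y"}}} to one pod,
    // equivalent to the kubectl patch command in the log.
    func annotatePod(cs *kubernetes.Clientset, podName string) error {
        patch := []byte(`{"metadata":{"annotations":{"x":"y"}}}`)
        _, err := cs.CoreV1().Pods("kubectl-3584").Patch(podName, types.StrategicMergePatchType, patch)
        return err
    }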
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:03:30.003: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  6 14:03:30.174: INFO: Create a RollingUpdate DaemonSet
Feb  6 14:03:30.183: INFO: Check that daemon pods launch on every node of the cluster
Feb  6 14:03:30.212: INFO: Number of nodes with available pods: 0
Feb  6 14:03:30.212: INFO: Node iruya-node is running more than one daemon pod
Feb  6 14:03:31.880: INFO: Number of nodes with available pods: 0
Feb  6 14:03:31.880: INFO: Node iruya-node is running more than one daemon pod
Feb  6 14:03:32.386: INFO: Number of nodes with available pods: 0
Feb  6 14:03:32.386: INFO: Node iruya-node is running more than one daemon pod
Feb  6 14:03:33.582: INFO: Number of nodes with available pods: 0
Feb  6 14:03:33.582: INFO: Node iruya-node is running more than one daemon pod
Feb  6 14:03:34.237: INFO: Number of nodes with available pods: 0
Feb  6 14:03:34.237: INFO: Node iruya-node is running more than one daemon pod
Feb  6 14:03:35.225: INFO: Number of nodes with available pods: 0
Feb  6 14:03:35.225: INFO: Node iruya-node is running more than one daemon pod
Feb  6 14:03:37.130: INFO: Number of nodes with available pods: 0
Feb  6 14:03:37.130: INFO: Node iruya-node is running more than one daemon pod
Feb  6 14:03:37.224: INFO: Number of nodes with available pods: 0
Feb  6 14:03:37.224: INFO: Node iruya-node is running more than one daemon pod
Feb  6 14:03:38.370: INFO: Number of nodes with available pods: 0
Feb  6 14:03:38.370: INFO: Node iruya-node is running more than one daemon pod
Feb  6 14:03:39.300: INFO: Number of nodes with available pods: 0
Feb  6 14:03:39.300: INFO: Node iruya-node is running more than one daemon pod
Feb  6 14:03:40.235: INFO: Number of nodes with available pods: 1
Feb  6 14:03:40.235: INFO: Node iruya-node is running more than one daemon pod
Feb  6 14:03:41.230: INFO: Number of nodes with available pods: 2
Feb  6 14:03:41.230: INFO: Number of running nodes: 2, number of available pods: 2
Feb  6 14:03:41.230: INFO: Update the DaemonSet to trigger a rollout
Feb  6 14:03:41.253: INFO: Updating DaemonSet daemon-set
Feb  6 14:03:58.295: INFO: Roll back the DaemonSet before rollout is complete
Feb  6 14:03:58.303: INFO: Updating DaemonSet daemon-set
Feb  6 14:03:58.303: INFO: Make sure DaemonSet rollback is complete
Feb  6 14:03:58.311: INFO: Wrong image for pod: daemon-set-b5xgl. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb  6 14:03:58.311: INFO: Pod daemon-set-b5xgl is not available
Feb  6 14:03:59.421: INFO: Wrong image for pod: daemon-set-b5xgl. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb  6 14:03:59.421: INFO: Pod daemon-set-b5xgl is not available
Feb  6 14:04:00.371: INFO: Wrong image for pod: daemon-set-b5xgl. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb  6 14:04:00.371: INFO: Pod daemon-set-b5xgl is not available
Feb  6 14:04:01.335: INFO: Pod daemon-set-c9btr is not available
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3380, will wait for the garbage collector to delete the pods
Feb  6 14:04:01.598: INFO: Deleting DaemonSet.extensions daemon-set took: 169.185218ms
Feb  6 14:04:01.998: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.367057ms
Feb  6 14:04:08.905: INFO: Number of nodes with available pods: 0
Feb  6 14:04:08.905: INFO: Number of running nodes: 0, number of available pods: 0
Feb  6 14:04:08.913: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3380/daemonsets","resourceVersion":"23325339"},"items":null}

Feb  6 14:04:08.918: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3380/pods","resourceVersion":"23325339"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:04:08.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3380" for this suite.
Feb  6 14:04:14.994: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:04:15.084: INFO: namespace daemonsets-3380 deletion completed in 6.148552185s

• [SLOW TEST:45.082 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
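Note: the rollback here is simply a second template update that restores the previous image while the broken rollout (image foo:non-existent) is still in flight; because the controller only replaces pods whose spec differs from the template, pods already running the good image are left alone, which is the "without unnecessary restarts" assertion. A sketch of the restore step:

    package sketch

    import (
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // rollBackDaemonSetImage re-points the template at the known-good image;
    // the RollingUpdate controller then converges without touching healthy pods.
    func rollBackDaemonSetImage(cs *kubernetes.Clientset) error {
        ds, err := cs.AppsV1().DaemonSets("daemonsets-3380").Get("daemon-set", metav1.GetOptions{})
        if err != nil {
            return err
        }
        ds.Spec.Template.Spec.Containers[0].Image = "docker.io/library/nginx:1.14-alpine"
        _, err = cs.AppsV1().DaemonSets("daemonsets-3380").Update(ds)
        return err
    }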
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:04:15.085: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  6 14:04:15.235: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-3503'
Feb  6 14:04:15.393: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb  6 14:04:15.393: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Feb  6 14:04:15.416: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-rs5hr]
Feb  6 14:04:15.416: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-rs5hr" in namespace "kubectl-3503" to be "running and ready"
Feb  6 14:04:15.434: INFO: Pod "e2e-test-nginx-rc-rs5hr": Phase="Pending", Reason="", readiness=false. Elapsed: 17.70009ms
Feb  6 14:04:17.445: INFO: Pod "e2e-test-nginx-rc-rs5hr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028201142s
Feb  6 14:04:19.458: INFO: Pod "e2e-test-nginx-rc-rs5hr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041568383s
Feb  6 14:04:21.469: INFO: Pod "e2e-test-nginx-rc-rs5hr": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052259554s
Feb  6 14:04:23.476: INFO: Pod "e2e-test-nginx-rc-rs5hr": Phase="Pending", Reason="", readiness=false. Elapsed: 8.059320175s
Feb  6 14:04:25.486: INFO: Pod "e2e-test-nginx-rc-rs5hr": Phase="Running", Reason="", readiness=true. Elapsed: 10.069385392s
Feb  6 14:04:25.486: INFO: Pod "e2e-test-nginx-rc-rs5hr" satisfied condition "running and ready"
Feb  6 14:04:25.486: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-rs5hr]
Feb  6 14:04:25.486: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-3503'
Feb  6 14:04:25.739: INFO: stderr: ""
Feb  6 14:04:25.739: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461
Feb  6 14:04:25.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-3503'
Feb  6 14:04:25.870: INFO: stderr: ""
Feb  6 14:04:25.870: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:04:25.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3503" for this suite.
Feb  6 14:04:47.899: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:04:48.254: INFO: namespace kubectl-3503 deletion completed in 22.376537377s

• [SLOW TEST:33.169 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:04:48.254: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Feb  6 14:04:48.495: INFO: Pod name pod-release: Found 0 pods out of 1
Feb  6 14:04:53.514: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:04:54.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-3775" for this suite.
Feb  6 14:05:00.634: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:05:00.749: INFO: namespace replication-controller-3775 deletion completed in 6.160434715s

• [SLOW TEST:12.495 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
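Note: "released" means orphaned: relabelling a pod out of the ReplicationController's selector makes the controller drop its ownerReference and spin up a replacement, while the relabelled pod keeps running unowned. A sketch of the relabel step (label values assumed):

    package sketch

    import (
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // releasePod changes the pod's labels so the RC selector no longer
    // matches; the RC then orphans it and creates a replacement.
    func releasePod(cs *kubernetes.Clientset, podName string) error {
        pod, err := cs.CoreV1().Pods("replication-controller-3775").Get(podName, metav1.GetOptions{})
        if err != nil {
            return err
        }
        pod.Labels = map[string]string{"name": "not-pod-release"} // assumed new label
        _, err = cs.CoreV1().Pods("replication-controller-3775").Update(pod)
        return err
    }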
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:05:00.749: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Feb  6 14:05:00.935: INFO: Waiting up to 5m0s for pod "pod-61c13fef-6036-4a24-82fd-1e4d5360dc9e" in namespace "emptydir-3956" to be "success or failure"
Feb  6 14:05:00.947: INFO: Pod "pod-61c13fef-6036-4a24-82fd-1e4d5360dc9e": Phase="Pending", Reason="", readiness=false. Elapsed: 12.268219ms
Feb  6 14:05:02.961: INFO: Pod "pod-61c13fef-6036-4a24-82fd-1e4d5360dc9e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02588488s
Feb  6 14:05:04.972: INFO: Pod "pod-61c13fef-6036-4a24-82fd-1e4d5360dc9e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036598736s
Feb  6 14:05:06.985: INFO: Pod "pod-61c13fef-6036-4a24-82fd-1e4d5360dc9e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04991651s
Feb  6 14:05:08.999: INFO: Pod "pod-61c13fef-6036-4a24-82fd-1e4d5360dc9e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.064278301s
Feb  6 14:05:11.008: INFO: Pod "pod-61c13fef-6036-4a24-82fd-1e4d5360dc9e": Phase="Pending", Reason="", readiness=false. Elapsed: 10.072585738s
Feb  6 14:05:13.015: INFO: Pod "pod-61c13fef-6036-4a24-82fd-1e4d5360dc9e": Phase="Pending", Reason="", readiness=false. Elapsed: 12.079754609s
Feb  6 14:05:15.022: INFO: Pod "pod-61c13fef-6036-4a24-82fd-1e4d5360dc9e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.087026814s
STEP: Saw pod success
Feb  6 14:05:15.022: INFO: Pod "pod-61c13fef-6036-4a24-82fd-1e4d5360dc9e" satisfied condition "success or failure"
Feb  6 14:05:15.027: INFO: Trying to get logs from node iruya-node pod pod-61c13fef-6036-4a24-82fd-1e4d5360dc9e container test-container: 
STEP: delete the pod
Feb  6 14:05:15.179: INFO: Waiting for pod pod-61c13fef-6036-4a24-82fd-1e4d5360dc9e to disappear
Feb  6 14:05:15.185: INFO: Pod pod-61c13fef-6036-4a24-82fd-1e4d5360dc9e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:05:15.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3956" for this suite.
Feb  6 14:05:21.236: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:05:21.426: INFO: namespace emptydir-3956 deletion completed in 6.234141747s

• [SLOW TEST:20.677 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
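The pod behind the "emptydir 0777 on tmpfs" step boils down to an emptyDir volume with medium Memory (tmpfs) mounted into a short-lived container that inspects the mount. A sketch using the Go API types; the image and the permission check are illustrative, since the 0777 mode is exercised by the test container itself rather than by a spec field (emptyDir has no mode setting):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "cache",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" backs the volume with tmpfs instead of node disk.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "test-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "stat -c '%a' /mnt/cache && mount | grep /mnt/cache"},
				VolumeMounts: []corev1.VolumeMount{{Name: "cache", MountPath: "/mnt/cache"}},
			}},
		},
	}
	fmt.Printf("pod %q uses medium %q\n", pod.Name, pod.Spec.Volumes[0].EmptyDir.Medium)
}
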
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:05:21.426: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  6 14:05:21.583: INFO: Waiting up to 5m0s for pod "downwardapi-volume-42441bb3-a210-4369-9186-6506074e7b16" in namespace "downward-api-2949" to be "success or failure"
Feb  6 14:05:21.590: INFO: Pod "downwardapi-volume-42441bb3-a210-4369-9186-6506074e7b16": Phase="Pending", Reason="", readiness=false. Elapsed: 7.00494ms
Feb  6 14:05:23.601: INFO: Pod "downwardapi-volume-42441bb3-a210-4369-9186-6506074e7b16": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018735304s
Feb  6 14:05:25.609: INFO: Pod "downwardapi-volume-42441bb3-a210-4369-9186-6506074e7b16": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026037122s
Feb  6 14:05:27.620: INFO: Pod "downwardapi-volume-42441bb3-a210-4369-9186-6506074e7b16": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03757259s
Feb  6 14:05:29.699: INFO: Pod "downwardapi-volume-42441bb3-a210-4369-9186-6506074e7b16": Phase="Pending", Reason="", readiness=false. Elapsed: 8.115937311s
Feb  6 14:05:31.710: INFO: Pod "downwardapi-volume-42441bb3-a210-4369-9186-6506074e7b16": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.126869492s
STEP: Saw pod success
Feb  6 14:05:31.710: INFO: Pod "downwardapi-volume-42441bb3-a210-4369-9186-6506074e7b16" satisfied condition "success or failure"
Feb  6 14:05:31.714: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-42441bb3-a210-4369-9186-6506074e7b16 container client-container: 
STEP: delete the pod
Feb  6 14:05:31.818: INFO: Waiting for pod downwardapi-volume-42441bb3-a210-4369-9186-6506074e7b16 to disappear
Feb  6 14:05:31.932: INFO: Pod downwardapi-volume-42441bb3-a210-4369-9186-6506074e7b16 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:05:31.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2949" for this suite.
Feb  6 14:05:37.980: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:05:38.129: INFO: namespace downward-api-2949 deletion completed in 6.187289552s

• [SLOW TEST:16.702 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
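Concretely, "DefaultMode on files" refers to the defaultMode field of a downwardAPI volume: every projected file inherits that mode unless its item overrides it. A sketch of such a volume; the file name and the 0400 mode are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				// Every file in the volume gets this mode unless its item sets Mode.
				DefaultMode: int32Ptr(0400),
				Items: []corev1.DownwardAPIVolumeFile{{
					Path:     "podname",
					FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
				}},
			},
		},
	}
	fmt.Printf("defaultMode=%o for file %s\n",
		*vol.VolumeSource.DownwardAPI.DefaultMode,
		vol.VolumeSource.DownwardAPI.Items[0].Path)
}
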
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:05:38.129: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Feb  6 14:05:38.252: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2335'
Feb  6 14:05:38.519: INFO: stderr: ""
Feb  6 14:05:38.519: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  6 14:05:38.519: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2335'
Feb  6 14:05:38.637: INFO: stderr: ""
Feb  6 14:05:38.637: INFO: stdout: "update-demo-nautilus-2d7d7 update-demo-nautilus-7zqc4 "
Feb  6 14:05:38.637: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2d7d7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2335'
Feb  6 14:05:38.769: INFO: stderr: ""
Feb  6 14:05:38.769: INFO: stdout: ""
Feb  6 14:05:38.769: INFO: update-demo-nautilus-2d7d7 is created but not running
Feb  6 14:05:43.769: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2335'
Feb  6 14:05:43.971: INFO: stderr: ""
Feb  6 14:05:43.971: INFO: stdout: "update-demo-nautilus-2d7d7 update-demo-nautilus-7zqc4 "
Feb  6 14:05:43.971: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2d7d7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2335'
Feb  6 14:05:44.088: INFO: stderr: ""
Feb  6 14:05:44.088: INFO: stdout: ""
Feb  6 14:05:44.088: INFO: update-demo-nautilus-2d7d7 is created but not running
Feb  6 14:05:49.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2335'
Feb  6 14:05:49.183: INFO: stderr: ""
Feb  6 14:05:49.183: INFO: stdout: "update-demo-nautilus-2d7d7 update-demo-nautilus-7zqc4 "
Feb  6 14:05:49.183: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2d7d7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2335'
Feb  6 14:05:49.286: INFO: stderr: ""
Feb  6 14:05:49.286: INFO: stdout: "true"
Feb  6 14:05:49.286: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2d7d7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2335'
Feb  6 14:05:49.403: INFO: stderr: ""
Feb  6 14:05:49.403: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  6 14:05:49.403: INFO: validating pod update-demo-nautilus-2d7d7
Feb  6 14:05:49.427: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  6 14:05:49.428: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb  6 14:05:49.428: INFO: update-demo-nautilus-2d7d7 is verified up and running
Feb  6 14:05:49.428: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7zqc4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2335'
Feb  6 14:05:49.502: INFO: stderr: ""
Feb  6 14:05:49.502: INFO: stdout: "true"
Feb  6 14:05:49.503: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7zqc4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2335'
Feb  6 14:05:49.611: INFO: stderr: ""
Feb  6 14:05:49.611: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  6 14:05:49.611: INFO: validating pod update-demo-nautilus-7zqc4
Feb  6 14:05:49.617: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  6 14:05:49.617: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb  6 14:05:49.617: INFO: update-demo-nautilus-7zqc4 is verified up and running
STEP: scaling down the replication controller
Feb  6 14:05:49.619: INFO: scanned /root for discovery docs: 
Feb  6 14:05:49.619: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-2335'
Feb  6 14:05:50.779: INFO: stderr: ""
Feb  6 14:05:50.780: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  6 14:05:50.780: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2335'
Feb  6 14:05:50.893: INFO: stderr: ""
Feb  6 14:05:50.893: INFO: stdout: "update-demo-nautilus-2d7d7 update-demo-nautilus-7zqc4 "
STEP: Replicas for name=update-demo: expected=1 actual=2
Feb  6 14:05:55.893: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2335'
Feb  6 14:05:56.040: INFO: stderr: ""
Feb  6 14:05:56.040: INFO: stdout: "update-demo-nautilus-2d7d7 update-demo-nautilus-7zqc4 "
STEP: Replicas for name=update-demo: expected=1 actual=2
Feb  6 14:06:01.050: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2335'
Feb  6 14:06:01.213: INFO: stderr: ""
Feb  6 14:06:01.213: INFO: stdout: "update-demo-nautilus-2d7d7 update-demo-nautilus-7zqc4 "
STEP: Replicas for name=update-demo: expected=1 actual=2
Feb  6 14:06:06.214: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2335'
Feb  6 14:06:06.363: INFO: stderr: ""
Feb  6 14:06:06.363: INFO: stdout: "update-demo-nautilus-2d7d7 update-demo-nautilus-7zqc4 "
STEP: Replicas for name=update-demo: expected=1 actual=2
Feb  6 14:06:11.364: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2335'
Feb  6 14:06:11.534: INFO: stderr: ""
Feb  6 14:06:11.534: INFO: stdout: "update-demo-nautilus-2d7d7 "
Feb  6 14:06:11.534: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2d7d7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2335'
Feb  6 14:06:11.662: INFO: stderr: ""
Feb  6 14:06:11.662: INFO: stdout: "true"
Feb  6 14:06:11.662: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2d7d7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2335'
Feb  6 14:06:11.798: INFO: stderr: ""
Feb  6 14:06:11.798: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  6 14:06:11.798: INFO: validating pod update-demo-nautilus-2d7d7
Feb  6 14:06:11.816: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  6 14:06:11.816: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb  6 14:06:11.816: INFO: update-demo-nautilus-2d7d7 is verified up and running
STEP: scaling up the replication controller
Feb  6 14:06:11.818: INFO: scanned /root for discovery docs: 
Feb  6 14:06:11.818: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-2335'
Feb  6 14:06:13.339: INFO: stderr: ""
Feb  6 14:06:13.339: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  6 14:06:13.340: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2335'
Feb  6 14:06:13.693: INFO: stderr: ""
Feb  6 14:06:13.693: INFO: stdout: "update-demo-nautilus-2d7d7 update-demo-nautilus-6xgth "
Feb  6 14:06:13.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2d7d7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2335'
Feb  6 14:06:13.830: INFO: stderr: ""
Feb  6 14:06:13.830: INFO: stdout: "true"
Feb  6 14:06:13.830: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2d7d7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2335'
Feb  6 14:06:14.017: INFO: stderr: ""
Feb  6 14:06:14.017: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  6 14:06:14.017: INFO: validating pod update-demo-nautilus-2d7d7
Feb  6 14:06:14.036: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  6 14:06:14.036: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb  6 14:06:14.036: INFO: update-demo-nautilus-2d7d7 is verified up and running
Feb  6 14:06:14.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6xgth -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2335'
Feb  6 14:06:14.171: INFO: stderr: ""
Feb  6 14:06:14.171: INFO: stdout: ""
Feb  6 14:06:14.171: INFO: update-demo-nautilus-6xgth is created but not running
Feb  6 14:06:19.171: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2335'
Feb  6 14:06:19.332: INFO: stderr: ""
Feb  6 14:06:19.332: INFO: stdout: "update-demo-nautilus-2d7d7 update-demo-nautilus-6xgth "
Feb  6 14:06:19.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2d7d7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2335'
Feb  6 14:06:19.451: INFO: stderr: ""
Feb  6 14:06:19.451: INFO: stdout: "true"
Feb  6 14:06:19.451: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2d7d7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2335'
Feb  6 14:06:19.594: INFO: stderr: ""
Feb  6 14:06:19.594: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  6 14:06:19.594: INFO: validating pod update-demo-nautilus-2d7d7
Feb  6 14:06:19.602: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  6 14:06:19.602: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb  6 14:06:19.602: INFO: update-demo-nautilus-2d7d7 is verified up and running
Feb  6 14:06:19.602: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6xgth -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2335'
Feb  6 14:06:19.677: INFO: stderr: ""
Feb  6 14:06:19.677: INFO: stdout: "true"
Feb  6 14:06:19.678: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6xgth -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2335'
Feb  6 14:06:19.766: INFO: stderr: ""
Feb  6 14:06:19.766: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  6 14:06:19.766: INFO: validating pod update-demo-nautilus-6xgth
Feb  6 14:06:19.773: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  6 14:06:19.773: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb  6 14:06:19.773: INFO: update-demo-nautilus-6xgth is verified up and running
STEP: using delete to clean up resources
Feb  6 14:06:19.773: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2335'
Feb  6 14:06:19.892: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  6 14:06:19.892: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Feb  6 14:06:19.892: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-2335'
Feb  6 14:06:19.990: INFO: stderr: "No resources found.\n"
Feb  6 14:06:19.990: INFO: stdout: ""
Feb  6 14:06:19.990: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-2335 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb  6 14:06:20.057: INFO: stderr: ""
Feb  6 14:06:20.057: INFO: stdout: "update-demo-nautilus-2d7d7\nupdate-demo-nautilus-6xgth\n"
Feb  6 14:06:20.558: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-2335'
Feb  6 14:06:20.737: INFO: stderr: "No resources found.\n"
Feb  6 14:06:20.737: INFO: stdout: ""
Feb  6 14:06:20.737: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-2335 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb  6 14:06:20.890: INFO: stderr: ""
Feb  6 14:06:20.890: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:06:20.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2335" for this suite.
Feb  6 14:06:44.929: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:06:45.031: INFO: namespace kubectl-2335 deletion completed in 24.132714924s

• [SLOW TEST:66.902 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
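Each scale step above shells out to kubectl, but the operation itself is a one-field update to the controller's spec. A client-go sketch of the scale-down to one replica, assuming a recent client-go (calls take a context) and a hypothetical namespace; kubectl scale rc update-demo-nautilus --replicas=1 is the CLI equivalent:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	rcs := cs.CoreV1().ReplicationControllers("default")
	rc, err := rcs.Get(context.TODO(), "update-demo-nautilus", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Scaling is just a replica-count update; the controller deletes the surplus pod.
	rc.Spec.Replicas = int32Ptr(1)
	if _, err := rcs.Update(context.TODO(), rc, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("replicationcontroller/update-demo-nautilus scaled to 1")
}
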
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:06:45.031: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Feb  6 14:06:45.153: INFO: Waiting up to 5m0s for pod "pod-78476bc9-6c92-4c96-9d1a-16c824a0e6c7" in namespace "emptydir-3302" to be "success or failure"
Feb  6 14:06:45.159: INFO: Pod "pod-78476bc9-6c92-4c96-9d1a-16c824a0e6c7": Phase="Pending", Reason="", readiness=false. Elapsed: 5.685333ms
Feb  6 14:06:47.166: INFO: Pod "pod-78476bc9-6c92-4c96-9d1a-16c824a0e6c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012120385s
Feb  6 14:06:49.173: INFO: Pod "pod-78476bc9-6c92-4c96-9d1a-16c824a0e6c7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019204041s
Feb  6 14:06:51.179: INFO: Pod "pod-78476bc9-6c92-4c96-9d1a-16c824a0e6c7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.025497793s
Feb  6 14:06:53.188: INFO: Pod "pod-78476bc9-6c92-4c96-9d1a-16c824a0e6c7": Phase="Running", Reason="", readiness=true. Elapsed: 8.034045435s
Feb  6 14:06:55.197: INFO: Pod "pod-78476bc9-6c92-4c96-9d1a-16c824a0e6c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.043729154s
STEP: Saw pod success
Feb  6 14:06:55.197: INFO: Pod "pod-78476bc9-6c92-4c96-9d1a-16c824a0e6c7" satisfied condition "success or failure"
Feb  6 14:06:55.201: INFO: Trying to get logs from node iruya-node pod pod-78476bc9-6c92-4c96-9d1a-16c824a0e6c7 container test-container: 
STEP: delete the pod
Feb  6 14:06:55.312: INFO: Waiting for pod pod-78476bc9-6c92-4c96-9d1a-16c824a0e6c7 to disappear
Feb  6 14:06:55.319: INFO: Pod pod-78476bc9-6c92-4c96-9d1a-16c824a0e6c7 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:06:55.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3302" for this suite.
Feb  6 14:07:01.357: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:07:01.478: INFO: namespace emptydir-3302 deletion completed in 6.151676099s

• [SLOW TEST:16.447 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:07:01.478: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's args
Feb  6 14:07:01.552: INFO: Waiting up to 5m0s for pod "var-expansion-2713ccec-d075-4414-bb81-242f17b39eef" in namespace "var-expansion-2037" to be "success or failure"
Feb  6 14:07:01.637: INFO: Pod "var-expansion-2713ccec-d075-4414-bb81-242f17b39eef": Phase="Pending", Reason="", readiness=false. Elapsed: 85.726518ms
Feb  6 14:07:04.159: INFO: Pod "var-expansion-2713ccec-d075-4414-bb81-242f17b39eef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.607368456s
Feb  6 14:07:06.165: INFO: Pod "var-expansion-2713ccec-d075-4414-bb81-242f17b39eef": Phase="Pending", Reason="", readiness=false. Elapsed: 4.613501171s
Feb  6 14:07:08.170: INFO: Pod "var-expansion-2713ccec-d075-4414-bb81-242f17b39eef": Phase="Pending", Reason="", readiness=false. Elapsed: 6.618849898s
Feb  6 14:07:10.185: INFO: Pod "var-expansion-2713ccec-d075-4414-bb81-242f17b39eef": Phase="Pending", Reason="", readiness=false. Elapsed: 8.632892073s
Feb  6 14:07:12.197: INFO: Pod "var-expansion-2713ccec-d075-4414-bb81-242f17b39eef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.645649494s
STEP: Saw pod success
Feb  6 14:07:12.197: INFO: Pod "var-expansion-2713ccec-d075-4414-bb81-242f17b39eef" satisfied condition "success or failure"
Feb  6 14:07:12.201: INFO: Trying to get logs from node iruya-node pod var-expansion-2713ccec-d075-4414-bb81-242f17b39eef container dapi-container: 
STEP: delete the pod
Feb  6 14:07:12.295: INFO: Waiting for pod var-expansion-2713ccec-d075-4414-bb81-242f17b39eef to disappear
Feb  6 14:07:12.331: INFO: Pod var-expansion-2713ccec-d075-4414-bb81-242f17b39eef no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:07:12.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2037" for this suite.
Feb  6 14:07:18.443: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:07:18.584: INFO: namespace var-expansion-2037 deletion completed in 6.244157661s

• [SLOW TEST:17.106 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
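The substitution being verified works as follows: $(VAR) references inside command and args are expanded from the container's own env before the process starts, and an unresolvable reference is left as literal text. A sketch with the Go API types; the variable name and value are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:    "dapi-container",
		Image:   "busybox",
		Command: []string{"sh", "-c"},
		// $(GREETING) is expanded from the env below before the shell runs.
		Args: []string{"echo $(GREETING)"},
		Env: []corev1.EnvVar{{
			Name:  "GREETING",
			Value: "hello from var-expansion",
		}},
	}
	fmt.Printf("args before expansion: %v\n", c.Args)
}
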
SSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:07:18.584: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-8f52ab85-f401-40e9-a569-da6e1d8313f8
STEP: Creating a pod to test consume configMaps
Feb  6 14:07:18.728: INFO: Waiting up to 5m0s for pod "pod-configmaps-1fa94b3b-c0d4-45de-b021-45f343770369" in namespace "configmap-4734" to be "success or failure"
Feb  6 14:07:18.758: INFO: Pod "pod-configmaps-1fa94b3b-c0d4-45de-b021-45f343770369": Phase="Pending", Reason="", readiness=false. Elapsed: 30.729899ms
Feb  6 14:07:20.768: INFO: Pod "pod-configmaps-1fa94b3b-c0d4-45de-b021-45f343770369": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040305148s
Feb  6 14:07:22.777: INFO: Pod "pod-configmaps-1fa94b3b-c0d4-45de-b021-45f343770369": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049737011s
Feb  6 14:07:24.785: INFO: Pod "pod-configmaps-1fa94b3b-c0d4-45de-b021-45f343770369": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057673113s
Feb  6 14:07:26.839: INFO: Pod "pod-configmaps-1fa94b3b-c0d4-45de-b021-45f343770369": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.111046176s
STEP: Saw pod success
Feb  6 14:07:26.839: INFO: Pod "pod-configmaps-1fa94b3b-c0d4-45de-b021-45f343770369" satisfied condition "success or failure"
Feb  6 14:07:26.845: INFO: Trying to get logs from node iruya-node pod pod-configmaps-1fa94b3b-c0d4-45de-b021-45f343770369 container configmap-volume-test: 
STEP: delete the pod
Feb  6 14:07:27.063: INFO: Waiting for pod pod-configmaps-1fa94b3b-c0d4-45de-b021-45f343770369 to disappear
Feb  6 14:07:27.076: INFO: Pod pod-configmaps-1fa94b3b-c0d4-45de-b021-45f343770369 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:07:27.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4734" for this suite.
Feb  6 14:07:33.119: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:07:33.275: INFO: namespace configmap-4734 deletion completed in 6.189784426s

• [SLOW TEST:14.691 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
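Consuming a configMap volume "as non-root" comes down to two spec fields: the configMap volume itself and a pod-level securityContext with a non-zero runAsUser, so the projected keys must be readable by that UID. A hedged sketch; the names and the UID are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64Ptr(i int64) *int64 { return &i }

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-demo"},
		Spec: corev1.PodSpec{
			// Run the whole pod as UID 1000 rather than root.
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: int64Ptr(1000)},
			RestartPolicy:   corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "cfg",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume"},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "configmap-volume-test",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "cat /etc/cfg/*"},
				VolumeMounts: []corev1.VolumeMount{{Name: "cfg", MountPath: "/etc/cfg"}},
			}},
		},
	}
	fmt.Printf("pod %s runs as uid %d\n", pod.Name, *pod.Spec.SecurityContext.RunAsUser)
}
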
SSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:07:33.275: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Feb  6 14:07:44.486: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:07:45.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-4686" for this suite.
Feb  6 14:08:07.568: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:08:07.663: INFO: namespace replicaset-4686 deletion completed in 22.139677452s

• [SLOW TEST:34.388 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
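Adoption and release both hinge on the selector: a ReplicaSet takes ownership of any unowned pod whose labels match spec.selector, and releases a pod the moment its labels stop matching, as the steps above show. A sketch of such a matching pair; names and label values are illustrative:

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	labels := map[string]string{"name": "pod-adoption-release"}

	// A bare pod carrying the label, created before the ReplicaSet exists.
	pod := &corev1.Pod{ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption-release", Labels: labels}}

	// A ReplicaSet whose selector matches the pod: the controller adopts it
	// instead of creating a new one. Relabeling the pod later releases it.
	rs := &appsv1.ReplicaSet{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption-release"},
		Spec: appsv1.ReplicaSetSpec{
			Replicas: int32Ptr(1),
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{Name: "app", Image: "busybox", Command: []string{"sleep", "3600"}}},
				},
			},
		},
	}
	fmt.Printf("rs %s selects %v; pod %s matches\n", rs.Name, rs.Spec.Selector.MatchLabels, pod.Name)
}
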
SSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:08:07.664: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Feb  6 14:11:12.102: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 14:11:12.228: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 14:11:14.229: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 14:11:15.724: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 14:11:16.228: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 14:11:16.238: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 14:11:18.228: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 14:11:18.236: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 14:11:20.228: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 14:11:20.237: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 14:11:22.228: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 14:11:22.234: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 14:11:24.228: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 14:11:24.238: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 14:11:26.228: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 14:11:26.235: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 14:11:28.228: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 14:11:28.239: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 14:11:30.228: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 14:11:30.603: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 14:11:32.228: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 14:11:32.244: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 14:11:34.228: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 14:11:34.242: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 14:11:36.228: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 14:11:36.242: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 14:11:38.228: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 14:11:38.239: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 14:11:40.228: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 14:11:40.519: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 14:11:42.228: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 14:11:42.235: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 14:11:44.228: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 14:11:44.239: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 14:11:46.228: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 14:11:46.239: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 14:11:48.228: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 14:11:48.238: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 14:11:50.228: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 14:11:50.240: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 14:11:52.228: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 14:11:52.239: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 14:11:54.228: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 14:11:54.281: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 14:11:56.228: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 14:11:56.233: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 14:11:58.228: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 14:11:58.245: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 14:12:00.229: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 14:12:00.243: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 14:12:02.228: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 14:12:02.250: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 14:12:04.228: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 14:12:04.239: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 14:12:06.228: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 14:12:06.236: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 14:12:08.228: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 14:12:08.237: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 14:12:10.228: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 14:12:10.235: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 14:12:12.228: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 14:12:12.238: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 14:12:14.228: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 14:12:14.235: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 14:12:16.228: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 14:12:16.239: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 14:12:18.228: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 14:12:18.235: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 14:12:20.228: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 14:12:20.241: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 14:12:22.228: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 14:12:22.237: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 14:12:24.228: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 14:12:24.235: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 14:12:26.228: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 14:12:26.238: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 14:12:28.228: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 14:12:28.236: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 14:12:30.228: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 14:12:30.236: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 14:12:32.228: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 14:12:32.233: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 14:12:34.228: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 14:12:34.241: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 14:12:36.228: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 14:12:36.235: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 14:12:38.228: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 14:12:38.235: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 14:12:40.228: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 14:12:40.235: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 14:12:42.228: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 14:12:42.236: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 14:12:44.228: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 14:12:44.237: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 14:12:46.228: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 14:12:46.239: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 14:12:48.228: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 14:12:48.238: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 14:12:50.228: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 14:12:50.237: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 14:12:52.228: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 14:12:52.234: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 14:12:54.228: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 14:12:54.234: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 14:12:56.228: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 14:12:56.240: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  6 14:12:58.228: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  6 14:12:58.236: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:12:58.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-3111" for this suite.
Feb  6 14:13:22.266: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:13:22.400: INFO: namespace container-lifecycle-hook-3111 deletion completed in 24.15868077s

• [SLOW TEST:314.737 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
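A postStart hook like the one above runs inside the container immediately after it is created; the container is not marked Running until the hook returns, and a failing hook kills the container. A sketch of the relevant container spec; note that recent k8s.io/api names the handler type LifecycleHandler (the API vintage of this log called it Handler), and the hook command here is illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:  "pod-with-poststart-exec-hook",
		Image: "busybox",
		Lifecycle: &corev1.Lifecycle{
			// Runs inside the container right after it starts; the container
			// stays out of the Running state until this command completes.
			PostStart: &corev1.LifecycleHandler{
				Exec: &corev1.ExecAction{Command: []string{"sh", "-c", "echo started > /tmp/poststart"}},
			},
		},
	}
	fmt.Printf("postStart hook: %v\n", c.Lifecycle.PostStart.Exec.Command)
}
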
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:13:22.401: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  6 14:13:22.548: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Feb  6 14:13:27.555: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb  6 14:13:31.567: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Feb  6 14:13:31.663: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-8304,SelfLink:/apis/apps/v1/namespaces/deployment-8304/deployments/test-cleanup-deployment,UID:37d5ef39-9f1d-4041-a9b5-f1fc3c4aa9cd,ResourceVersion:23326488,Generation:1,CreationTimestamp:2020-02-06 14:13:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}

Feb  6 14:13:31.673: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-8304,SelfLink:/apis/apps/v1/namespaces/deployment-8304/replicasets/test-cleanup-deployment-55bbcbc84c,UID:5a311aea-bf74-462c-a5fa-d17448cba435,ResourceVersion:23326490,Generation:1,CreationTimestamp:2020-02-06 14:13:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 37d5ef39-9f1d-4041-a9b5-f1fc3c4aa9cd 0xc000c24d87 0xc000c24d88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb  6 14:13:31.673: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Feb  6 14:13:31.673: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-8304,SelfLink:/apis/apps/v1/namespaces/deployment-8304/replicasets/test-cleanup-controller,UID:80a2b60c-0798-4c4d-9bf2-6b1797af903c,ResourceVersion:23326489,Generation:1,CreationTimestamp:2020-02-06 14:13:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 37d5ef39-9f1d-4041-a9b5-f1fc3c4aa9cd 0xc000c24c87 0xc000c24c88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Feb  6 14:13:31.693: INFO: Pod "test-cleanup-controller-bcxhr" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-bcxhr,GenerateName:test-cleanup-controller-,Namespace:deployment-8304,SelfLink:/api/v1/namespaces/deployment-8304/pods/test-cleanup-controller-bcxhr,UID:93e5e3bf-2ac0-4356-a1db-521b1f65076a,ResourceVersion:23326484,Generation:0,CreationTimestamp:2020-02-06 14:13:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 80a2b60c-0798-4c4d-9bf2-6b1797af903c 0xc000c25797 0xc000c25798}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vslcf {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vslcf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-vslcf true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000c25820} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000c25840}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 14:13:22 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 14:13:30 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 14:13:30 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 14:13:22 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-02-06 14:13:22 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-06 14:13:29 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://d7e678a58ba5edd3a5b02fd52e57718da7ce2e7b329f74cdb0703fcb159ca9ed}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  6 14:13:31.694: INFO: Pod "test-cleanup-deployment-55bbcbc84c-rfqrl" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-rfqrl,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-8304,SelfLink:/api/v1/namespaces/deployment-8304/pods/test-cleanup-deployment-55bbcbc84c-rfqrl,UID:a5c9fc34-11a5-4e10-92ac-89e09f247009,ResourceVersion:23326491,Generation:0,CreationTimestamp:2020-02-06 14:13:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c 5a311aea-bf74-462c-a5fa-d17448cba435 0xc000c25937 0xc000c25938}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vslcf {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vslcf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-vslcf true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000c259a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000c259c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:13:31.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-8304" for this suite.
Feb  6 14:13:39.851: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:13:39.967: INFO: namespace deployment-8304 deletion completed in 8.255257665s

• [SLOW TEST:17.567 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
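
The cleanup behaviour exercised above is driven by the Deployment's .spec.revisionHistoryLimit: once a rollout supersedes a ReplicaSet, the deployment controller garbage-collects revisions beyond that limit. A minimal sketch in Go, using the k8s.io/api types the suite itself dumps; the deployment name is illustrative, not the one the suite generates, while the "name: cleanup-pod" label and nginx image come from the pod dump above.

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	// revisionHistoryLimit: 0 tells the deployment controller to delete
	// old ReplicaSets as soon as they are fully scaled down.
	d := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "cleanup-demo"}, // illustrative name
		Spec: appsv1.DeploymentSpec{
			Replicas:             int32Ptr(1),
			RevisionHistoryLimit: int32Ptr(0),
			Selector: &metav1.LabelSelector{
				MatchLabels: map[string]string{"name": "cleanup-pod"},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{
					Labels: map[string]string{"name": "cleanup-pod"},
				},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "nginx",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}
	fmt.Printf("deployment %q keeps %d old revisions\n", d.Name, *d.Spec.RevisionHistoryLimit)
}

------------------------------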
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:13:39.968: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:13:50.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2741" for this suite.
Feb  6 14:14:42.908: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:14:43.032: INFO: namespace kubelet-test-2741 deletion completed in 52.18617307s

• [SLOW TEST:63.064 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
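
The hostAliases test passes because the kubelet renders pod.spec.hostAliases into /etc/hosts inside every container of the pod. A sketch of such a pod; the IP, hostnames, and pod name are illustrative, not the values the suite uses.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "hostaliases-demo"},
		Spec: corev1.PodSpec{
			// Rendered by the kubelet into /etc/hosts in each container.
			HostAliases: []corev1.HostAlias{{
				IP:        "127.0.0.1",
				Hostnames: []string{"foo.local", "bar.local"},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox",
				Command: []string{"cat", "/etc/hosts"},
			}},
		},
	}
	fmt.Println(pod.Spec.HostAliases[0].Hostnames)
}

------------------------------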
S
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:14:43.033: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-zn87
STEP: Creating a pod to test atomic-volume-subpath
Feb  6 14:14:43.124: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-zn87" in namespace "subpath-8350" to be "success or failure"
Feb  6 14:14:43.180: INFO: Pod "pod-subpath-test-configmap-zn87": Phase="Pending", Reason="", readiness=false. Elapsed: 55.247199ms
Feb  6 14:14:45.203: INFO: Pod "pod-subpath-test-configmap-zn87": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078020477s
Feb  6 14:14:47.210: INFO: Pod "pod-subpath-test-configmap-zn87": Phase="Pending", Reason="", readiness=false. Elapsed: 4.085662747s
Feb  6 14:14:49.217: INFO: Pod "pod-subpath-test-configmap-zn87": Phase="Pending", Reason="", readiness=false. Elapsed: 6.092853078s
Feb  6 14:14:51.224: INFO: Pod "pod-subpath-test-configmap-zn87": Phase="Pending", Reason="", readiness=false. Elapsed: 8.099788907s
Feb  6 14:14:53.233: INFO: Pod "pod-subpath-test-configmap-zn87": Phase="Running", Reason="", readiness=true. Elapsed: 10.10889812s
Feb  6 14:14:55.242: INFO: Pod "pod-subpath-test-configmap-zn87": Phase="Running", Reason="", readiness=true. Elapsed: 12.117947547s
Feb  6 14:14:57.254: INFO: Pod "pod-subpath-test-configmap-zn87": Phase="Running", Reason="", readiness=true. Elapsed: 14.129174547s
Feb  6 14:14:59.263: INFO: Pod "pod-subpath-test-configmap-zn87": Phase="Running", Reason="", readiness=true. Elapsed: 16.138116029s
Feb  6 14:15:01.275: INFO: Pod "pod-subpath-test-configmap-zn87": Phase="Running", Reason="", readiness=true. Elapsed: 18.15001366s
Feb  6 14:15:03.286: INFO: Pod "pod-subpath-test-configmap-zn87": Phase="Running", Reason="", readiness=true. Elapsed: 20.161175162s
Feb  6 14:15:05.294: INFO: Pod "pod-subpath-test-configmap-zn87": Phase="Running", Reason="", readiness=true. Elapsed: 22.169625005s
Feb  6 14:15:07.302: INFO: Pod "pod-subpath-test-configmap-zn87": Phase="Running", Reason="", readiness=true. Elapsed: 24.177434956s
Feb  6 14:15:09.311: INFO: Pod "pod-subpath-test-configmap-zn87": Phase="Running", Reason="", readiness=true. Elapsed: 26.186606001s
Feb  6 14:15:11.319: INFO: Pod "pod-subpath-test-configmap-zn87": Phase="Running", Reason="", readiness=true. Elapsed: 28.194437823s
Feb  6 14:15:13.327: INFO: Pod "pod-subpath-test-configmap-zn87": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.202412849s
STEP: Saw pod success
Feb  6 14:15:13.327: INFO: Pod "pod-subpath-test-configmap-zn87" satisfied condition "success or failure"
Feb  6 14:15:13.331: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-zn87 container test-container-subpath-configmap-zn87: 
STEP: delete the pod
Feb  6 14:15:13.629: INFO: Waiting for pod pod-subpath-test-configmap-zn87 to disappear
Feb  6 14:15:13.637: INFO: Pod pod-subpath-test-configmap-zn87 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-zn87
Feb  6 14:15:13.637: INFO: Deleting pod "pod-subpath-test-configmap-zn87" in namespace "subpath-8350"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:15:13.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-8350" for this suite.
Feb  6 14:15:23.698: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:15:23.876: INFO: namespace subpath-8350 deletion completed in 10.229254834s

• [SLOW TEST:40.844 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
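
What the atomic-writer subpath test sets up is a ConfigMap volume with one of its keys mounted, via subPath, over a file that already exists in the image, rather than shadowing a whole directory. A sketch of the volume wiring, with the ConfigMap name, key, and target path all illustrative; the secret-pod variant in the next test is the same shape with a SecretVolumeSource in place of the ConfigMap source.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "subpath-demo"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "config",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "my-configmap"},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test",
				Image: "busybox",
				VolumeMounts: []corev1.VolumeMount{{
					Name: "config",
					// subPath mounts a single entry of the volume over an
					// existing file instead of replacing the directory.
					MountPath: "/etc/motd", // illustrative existing file
					SubPath:   "motd",      // illustrative key
				}},
			}},
		},
	}
	fmt.Println(pod.Spec.Containers[0].VolumeMounts[0].SubPath)
}

------------------------------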
SSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:15:23.877: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-secret-bmhr
STEP: Creating a pod to test atomic-volume-subpath
Feb  6 14:15:24.170: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-bmhr" in namespace "subpath-2450" to be "success or failure"
Feb  6 14:15:24.289: INFO: Pod "pod-subpath-test-secret-bmhr": Phase="Pending", Reason="", readiness=false. Elapsed: 118.831939ms
Feb  6 14:15:26.295: INFO: Pod "pod-subpath-test-secret-bmhr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.124599366s
Feb  6 14:15:28.302: INFO: Pod "pod-subpath-test-secret-bmhr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.131922767s
Feb  6 14:15:30.312: INFO: Pod "pod-subpath-test-secret-bmhr": Phase="Pending", Reason="", readiness=false. Elapsed: 6.141615964s
Feb  6 14:15:32.319: INFO: Pod "pod-subpath-test-secret-bmhr": Phase="Running", Reason="", readiness=true. Elapsed: 8.148959041s
Feb  6 14:15:34.330: INFO: Pod "pod-subpath-test-secret-bmhr": Phase="Running", Reason="", readiness=true. Elapsed: 10.159459532s
Feb  6 14:15:36.337: INFO: Pod "pod-subpath-test-secret-bmhr": Phase="Running", Reason="", readiness=true. Elapsed: 12.166492965s
Feb  6 14:15:38.366: INFO: Pod "pod-subpath-test-secret-bmhr": Phase="Running", Reason="", readiness=true. Elapsed: 14.19582672s
Feb  6 14:15:40.374: INFO: Pod "pod-subpath-test-secret-bmhr": Phase="Running", Reason="", readiness=true. Elapsed: 16.203428982s
Feb  6 14:15:42.386: INFO: Pod "pod-subpath-test-secret-bmhr": Phase="Running", Reason="", readiness=true. Elapsed: 18.215252398s
Feb  6 14:15:44.398: INFO: Pod "pod-subpath-test-secret-bmhr": Phase="Running", Reason="", readiness=true. Elapsed: 20.227080208s
Feb  6 14:15:46.411: INFO: Pod "pod-subpath-test-secret-bmhr": Phase="Running", Reason="", readiness=true. Elapsed: 22.24010252s
Feb  6 14:15:48.420: INFO: Pod "pod-subpath-test-secret-bmhr": Phase="Running", Reason="", readiness=true. Elapsed: 24.249658922s
Feb  6 14:15:50.431: INFO: Pod "pod-subpath-test-secret-bmhr": Phase="Running", Reason="", readiness=true. Elapsed: 26.260580119s
Feb  6 14:15:52.441: INFO: Pod "pod-subpath-test-secret-bmhr": Phase="Running", Reason="", readiness=true. Elapsed: 28.270599962s
Feb  6 14:15:54.458: INFO: Pod "pod-subpath-test-secret-bmhr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.287048194s
STEP: Saw pod success
Feb  6 14:15:54.458: INFO: Pod "pod-subpath-test-secret-bmhr" satisfied condition "success or failure"
Feb  6 14:15:54.464: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-secret-bmhr container test-container-subpath-secret-bmhr: 
STEP: delete the pod
Feb  6 14:15:54.571: INFO: Waiting for pod pod-subpath-test-secret-bmhr to disappear
Feb  6 14:15:54.579: INFO: Pod pod-subpath-test-secret-bmhr no longer exists
STEP: Deleting pod pod-subpath-test-secret-bmhr
Feb  6 14:15:54.579: INFO: Deleting pod "pod-subpath-test-secret-bmhr" in namespace "subpath-2450"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:15:54.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2450" for this suite.
Feb  6 14:16:00.622: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:16:00.725: INFO: namespace subpath-2450 deletion completed in 6.134569033s

• [SLOW TEST:36.848 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:16:00.725: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  6 14:16:00.874: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:16:09.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1976" for this suite.
Feb  6 14:17:01.496: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:17:01.596: INFO: namespace pods-1976 deletion completed in 52.180263134s

• [SLOW TEST:60.870 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
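
Remote command execution over websockets goes through the pod's exec subresource on the API server (kubectl exec reaches the same endpoint). A sketch of the URL such a client dials, with the API server host and pod name illustrative; the query parameter names are the exec subresource's real parameters, and the namespace matches the one the test used above.

package main

import (
	"fmt"
	"net/url"
)

func main() {
	q := url.Values{}
	q.Add("command", "echo")
	q.Add("command", "remote execution test") // repeated "command" carries argv
	q.Set("stdout", "true")
	q.Set("stderr", "true")

	u := url.URL{
		Scheme:   "wss", // websocket clients negotiate the channel.k8s.io subprotocols
		Host:     "apiserver.example:6443",                      // illustrative
		Path:     "/api/v1/namespaces/pods-1976/pods/demo/exec", // illustrative pod name
		RawQuery: q.Encode(),
	}
	fmt.Println(u.String())
}

------------------------------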
SSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:17:01.596: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Feb  6 14:17:01.740: INFO: Number of nodes with available pods: 0
Feb  6 14:17:01.740: INFO: Node iruya-node is running more than one daemon pod
Feb  6 14:17:03.925: INFO: Number of nodes with available pods: 0
Feb  6 14:17:03.925: INFO: Node iruya-node is running more than one daemon pod
Feb  6 14:17:05.393: INFO: Number of nodes with available pods: 0
Feb  6 14:17:05.393: INFO: Node iruya-node is running more than one daemon pod
Feb  6 14:17:05.755: INFO: Number of nodes with available pods: 0
Feb  6 14:17:05.755: INFO: Node iruya-node is running more than one daemon pod
Feb  6 14:17:06.789: INFO: Number of nodes with available pods: 0
Feb  6 14:17:06.789: INFO: Node iruya-node is running more than one daemon pod
Feb  6 14:17:08.717: INFO: Number of nodes with available pods: 0
Feb  6 14:17:08.717: INFO: Node iruya-node is running more than one daemon pod
Feb  6 14:17:09.573: INFO: Number of nodes with available pods: 0
Feb  6 14:17:09.573: INFO: Node iruya-node is running more than one daemon pod
Feb  6 14:17:09.762: INFO: Number of nodes with available pods: 0
Feb  6 14:17:09.762: INFO: Node iruya-node is running more than one daemon pod
Feb  6 14:17:10.810: INFO: Number of nodes with available pods: 0
Feb  6 14:17:10.810: INFO: Node iruya-node is running more than one daemon pod
Feb  6 14:17:11.754: INFO: Number of nodes with available pods: 0
Feb  6 14:17:11.754: INFO: Node iruya-node is running more than one daemon pod
Feb  6 14:17:12.752: INFO: Number of nodes with available pods: 1
Feb  6 14:17:12.752: INFO: Node iruya-node is running more than one daemon pod
Feb  6 14:17:13.757: INFO: Number of nodes with available pods: 2
Feb  6 14:17:13.757: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Feb  6 14:17:13.827: INFO: Number of nodes with available pods: 1
Feb  6 14:17:13.827: INFO: Node iruya-node is running more than one daemon pod
Feb  6 14:17:14.848: INFO: Number of nodes with available pods: 1
Feb  6 14:17:14.848: INFO: Node iruya-node is running more than one daemon pod
Feb  6 14:17:15.849: INFO: Number of nodes with available pods: 1
Feb  6 14:17:15.850: INFO: Node iruya-node is running more than one daemon pod
Feb  6 14:17:16.851: INFO: Number of nodes with available pods: 1
Feb  6 14:17:16.851: INFO: Node iruya-node is running more than one daemon pod
Feb  6 14:17:17.842: INFO: Number of nodes with available pods: 1
Feb  6 14:17:17.842: INFO: Node iruya-node is running more than one daemon pod
Feb  6 14:17:18.841: INFO: Number of nodes with available pods: 1
Feb  6 14:17:18.841: INFO: Node iruya-node is running more than one daemon pod
Feb  6 14:17:19.847: INFO: Number of nodes with available pods: 1
Feb  6 14:17:19.847: INFO: Node iruya-node is running more than one daemon pod
Feb  6 14:17:20.845: INFO: Number of nodes with available pods: 1
Feb  6 14:17:20.845: INFO: Node iruya-node is running more than one daemon pod
Feb  6 14:17:21.866: INFO: Number of nodes with available pods: 1
Feb  6 14:17:21.866: INFO: Node iruya-node is running more than one daemon pod
Feb  6 14:17:22.845: INFO: Number of nodes with available pods: 1
Feb  6 14:17:22.845: INFO: Node iruya-node is running more than one daemon pod
Feb  6 14:17:23.846: INFO: Number of nodes with available pods: 1
Feb  6 14:17:23.846: INFO: Node iruya-node is running more than one daemon pod
Feb  6 14:17:24.841: INFO: Number of nodes with available pods: 1
Feb  6 14:17:24.841: INFO: Node iruya-node is running more than one daemon pod
Feb  6 14:17:25.841: INFO: Number of nodes with available pods: 1
Feb  6 14:17:25.841: INFO: Node iruya-node is running more than one daemon pod
Feb  6 14:17:26.839: INFO: Number of nodes with available pods: 1
Feb  6 14:17:26.839: INFO: Node iruya-node is running more than one daemon pod
Feb  6 14:17:27.859: INFO: Number of nodes with available pods: 1
Feb  6 14:17:27.859: INFO: Node iruya-node is running more than one daemon pod
Feb  6 14:17:28.854: INFO: Number of nodes with available pods: 1
Feb  6 14:17:28.854: INFO: Node iruya-node is running more than one daemon pod
Feb  6 14:17:29.844: INFO: Number of nodes with available pods: 1
Feb  6 14:17:29.844: INFO: Node iruya-node is running more than one daemon pod
Feb  6 14:17:30.848: INFO: Number of nodes with available pods: 1
Feb  6 14:17:30.848: INFO: Node iruya-node is running more than one daemon pod
Feb  6 14:17:31.845: INFO: Number of nodes with available pods: 1
Feb  6 14:17:31.845: INFO: Node iruya-node is running more than one daemon pod
Feb  6 14:17:32.842: INFO: Number of nodes with available pods: 1
Feb  6 14:17:32.842: INFO: Node iruya-node is running more than one daemon pod
Feb  6 14:17:33.838: INFO: Number of nodes with available pods: 1
Feb  6 14:17:33.838: INFO: Node iruya-node is running more than one daemon pod
Feb  6 14:17:34.837: INFO: Number of nodes with available pods: 2
Feb  6 14:17:34.837: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-706, will wait for the garbage collector to delete the pods
Feb  6 14:17:34.904: INFO: Deleting DaemonSet.extensions daemon-set took: 10.62764ms
Feb  6 14:17:35.204: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.297007ms
Feb  6 14:17:47.917: INFO: Number of nodes with available pods: 0
Feb  6 14:17:47.917: INFO: Number of running nodes: 0, number of available pods: 0
Feb  6 14:17:47.923: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-706/daemonsets","resourceVersion":"23327019"},"items":null}

Feb  6 14:17:47.927: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-706/pods","resourceVersion":"23327019"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:17:47.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-706" for this suite.
Feb  6 14:17:53.977: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:17:54.110: INFO: namespace daemonsets-706 deletion completed in 6.160131826s

• [SLOW TEST:52.514 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
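
A DaemonSet schedules one pod onto every eligible node, which is why the log above converges on "Number of running nodes: 2, number of available pods: 2" and why the deleted daemon pod is revived. A minimal sketch of such a DaemonSet; the label and image are illustrative, the name matches the one the suite created.

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{
				MatchLabels: map[string]string{"daemonset-name": "daemon-set"},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{
					Labels: map[string]string{"daemonset-name": "daemon-set"},
				},
				// One copy of this pod runs on every node that matches
				// the (empty) node selector; deletions are re-created.
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "docker.io/library/nginx:1.14-alpine", // illustrative
					}},
				},
			},
		},
	}
	fmt.Printf("daemonset %q: one pod per matching node\n", ds.Name)
}

------------------------------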
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:17:54.110: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-c242db34-6977-4c98-a025-3858776624ee in namespace container-probe-1528
Feb  6 14:18:02.240: INFO: Started pod liveness-c242db34-6977-4c98-a025-3858776624ee in namespace container-probe-1528
STEP: checking the pod's current state and verifying that restartCount is present
Feb  6 14:18:02.246: INFO: Initial restart count of pod liveness-c242db34-6977-4c98-a025-3858776624ee is 0
Feb  6 14:18:20.422: INFO: Restart count of pod container-probe-1528/liveness-c242db34-6977-4c98-a025-3858776624ee is now 1 (18.176147268s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:18:20.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1528" for this suite.
Feb  6 14:18:26.593: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:18:26.747: INFO: namespace container-probe-1528 deletion completed in 6.189228001s

• [SLOW TEST:32.637 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
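
The restart observed above (restartCount 0 to 1 after roughly 18 seconds) is the kubelet acting on a failing HTTP liveness probe against /healthz. A sketch of the container-side configuration, with the image, port, and timings illustrative; note that in k8s.io/api releases contemporary with this v1.15 cluster the embedded field is named Handler, which later releases renamed to ProbeHandler.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	c := corev1.Container{
		Name:  "liveness",
		Image: "gcr.io/kubernetes-e2e-test-images/liveness:1.1", // illustrative
		LivenessProbe: &corev1.Probe{
			Handler: corev1.Handler{ // ProbeHandler in newer k8s.io/api
				HTTPGet: &corev1.HTTPGetAction{
					Path: "/healthz",
					Port: intstr.FromInt(8080),
				},
			},
			InitialDelaySeconds: 15,
			FailureThreshold:    1, // restart after a single failed probe
		},
	}
	fmt.Println(c.LivenessProbe.HTTPGet.Path)
}

------------------------------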
SSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:18:26.747: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0206 14:18:35.747602       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  6 14:18:35.747: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:18:35.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1716" for this suite.
Feb  6 14:18:47.984: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:18:48.120: INFO: namespace gc-1716 deletion completed in 12.368535296s

• [SLOW TEST:21.373 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
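
"Keep the rc around until all its pods are deleted" is foreground cascading deletion: deleteOptions.propagationPolicy=Foreground puts a foregroundDeletion finalizer on the ReplicationController, so the garbage collector removes the dependent pods before the RC itself disappears. A sketch of the options object a client passes to its Delete call; Orphan and Background are the other two policies.

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Passed to the client's Delete call for the ReplicationController:
	// the RC stays (with a foregroundDeletion finalizer) until its pods
	// are gone, then the GC deletes the RC itself.
	policy := metav1.DeletePropagationForeground
	opts := metav1.DeleteOptions{PropagationPolicy: &policy}
	fmt.Println(*opts.PropagationPolicy) // "Foreground"
}

------------------------------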
SSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:18:48.120: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating replication controller my-hostname-basic-82c3d9d7-ad87-4eb9-8af0-e69d40b7eeca
Feb  6 14:18:48.281: INFO: Pod name my-hostname-basic-82c3d9d7-ad87-4eb9-8af0-e69d40b7eeca: Found 0 pods out of 1
Feb  6 14:18:53.290: INFO: Pod name my-hostname-basic-82c3d9d7-ad87-4eb9-8af0-e69d40b7eeca: Found 1 pods out of 1
Feb  6 14:18:53.290: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-82c3d9d7-ad87-4eb9-8af0-e69d40b7eeca" are running
Feb  6 14:18:57.307: INFO: Pod "my-hostname-basic-82c3d9d7-ad87-4eb9-8af0-e69d40b7eeca-xxhpz" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-06 14:18:48 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-06 14:18:48 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-82c3d9d7-ad87-4eb9-8af0-e69d40b7eeca]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-06 14:18:48 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-82c3d9d7-ad87-4eb9-8af0-e69d40b7eeca]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-06 14:18:48 +0000 UTC Reason: Message:}])
Feb  6 14:18:57.308: INFO: Trying to dial the pod
Feb  6 14:19:02.332: INFO: Controller my-hostname-basic-82c3d9d7-ad87-4eb9-8af0-e69d40b7eeca: Got expected result from replica 1 [my-hostname-basic-82c3d9d7-ad87-4eb9-8af0-e69d40b7eeca-xxhpz]: "my-hostname-basic-82c3d9d7-ad87-4eb9-8af0-e69d40b7eeca-xxhpz", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:19:02.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-7800" for this suite.
Feb  6 14:19:08.368: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:19:08.482: INFO: namespace replication-controller-7800 deletion completed in 6.141939747s

• [SLOW TEST:20.362 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:19:08.482: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Feb  6 14:19:08.638: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Registering the sample API server.
Feb  6 14:19:09.107: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Feb  6 14:19:11.435: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716595549, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716595549, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716595549, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716595549, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  6 14:19:13.452: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716595549, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716595549, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716595549, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716595549, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  6 14:19:15.452: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716595549, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716595549, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716595549, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716595549, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  6 14:19:17.443: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716595549, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716595549, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716595549, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716595549, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  6 14:19:19.446: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716595549, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716595549, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716595549, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716595549, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  6 14:19:21.447: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716595549, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716595549, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716595549, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716595549, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  6 14:19:23.445: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716595549, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716595549, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716595549, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716595549, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  6 14:19:29.558: INFO: Waited 4.078460844s for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:19:30.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-4767" for this suite.
Feb  6 14:19:36.414: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:19:36.538: INFO: namespace aggregator-4767 deletion completed in 6.206410012s

• [SLOW TEST:28.056 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
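
Registering a sample API server with the aggregator means creating an APIService object, conventionally named <version>.<group>, after which the aggregation layer proxies matching API requests to the referenced in-cluster Service. A sketch using the kube-aggregator types; the group, service name, and priorities are illustrative, and the namespace is only borrowed from the test above.

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	apiregv1 "k8s.io/kube-aggregator/pkg/apis/apiregistration/v1"
)

func main() {
	svc := &apiregv1.APIService{
		ObjectMeta: metav1.ObjectMeta{Name: "v1alpha1.wardle.example.com"},
		Spec: apiregv1.APIServiceSpec{
			Group:   "wardle.example.com", // illustrative group
			Version: "v1alpha1",
			Service: &apiregv1.ServiceReference{
				Namespace: "aggregator-4767", // illustrative placement
				Name:      "sample-api",
			},
			InsecureSkipTLSVerify: true, // a real setup would pin CABundle instead
			GroupPriorityMinimum:  2000,
			VersionPriority:       200,
		},
	}
	fmt.Println(svc.Name)
}

------------------------------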
SS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:19:36.539: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-37c362d7-2622-4d1d-b18b-453847aaa5f7
STEP: Creating a pod to test consume secrets
Feb  6 14:19:36.692: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-796756f9-1c60-4a42-96b0-6f8381d94ece" in namespace "projected-5765" to be "success or failure"
Feb  6 14:19:36.699: INFO: Pod "pod-projected-secrets-796756f9-1c60-4a42-96b0-6f8381d94ece": Phase="Pending", Reason="", readiness=false. Elapsed: 7.761335ms
Feb  6 14:19:38.710: INFO: Pod "pod-projected-secrets-796756f9-1c60-4a42-96b0-6f8381d94ece": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017789705s
Feb  6 14:19:40.735: INFO: Pod "pod-projected-secrets-796756f9-1c60-4a42-96b0-6f8381d94ece": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043564697s
Feb  6 14:19:42.743: INFO: Pod "pod-projected-secrets-796756f9-1c60-4a42-96b0-6f8381d94ece": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051441231s
Feb  6 14:19:44.750: INFO: Pod "pod-projected-secrets-796756f9-1c60-4a42-96b0-6f8381d94ece": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.058698892s
STEP: Saw pod success
Feb  6 14:19:44.750: INFO: Pod "pod-projected-secrets-796756f9-1c60-4a42-96b0-6f8381d94ece" satisfied condition "success or failure"
Feb  6 14:19:44.753: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-796756f9-1c60-4a42-96b0-6f8381d94ece container projected-secret-volume-test: 
STEP: delete the pod
Feb  6 14:19:44.824: INFO: Waiting for pod pod-projected-secrets-796756f9-1c60-4a42-96b0-6f8381d94ece to disappear
Feb  6 14:19:44.836: INFO: Pod pod-projected-secrets-796756f9-1c60-4a42-96b0-6f8381d94ece no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:19:44.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5765" for this suite.
Feb  6 14:19:50.893: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:19:51.004: INFO: namespace projected-5765 deletion completed in 6.160936766s

• [SLOW TEST:14.465 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
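
The projected-secret test combines three knobs: a projected volume wrapping a secret source, a defaultMode for the projected files, and pod-level runAsUser/fsGroup so a non-root UID can read them. A sketch of that combination; the secret name, UID/GID, and mode are illustrative.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }
func int64Ptr(i int64) *int64 { return &i }

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-secret-demo"},
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser: int64Ptr(1000), // non-root
				FSGroup:   int64Ptr(1000), // group ownership of volume files
			},
			Volumes: []corev1.Volume{{
				Name: "creds",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						DefaultMode: int32Ptr(0440), // file mode for projected content
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "my-secret"},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "reader",
				Image: "busybox",
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "creds",
					MountPath: "/etc/creds",
					ReadOnly:  true,
				}},
			}},
		},
	}
	fmt.Println(pod.Spec.Volumes[0].Name)
}

------------------------------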
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:19:51.004: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:20:46.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6103" for this suite.
Feb  6 14:20:52.796: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:20:52.925: INFO: namespace container-runtime-6103 deletion completed in 6.232875783s

• [SLOW TEST:61.921 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
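
The three container names above encode the restart policy under test: terminate-cmd-rpa, -rpof, and -rpn run the same exiting command under RestartPolicyAlways, OnFailure, and Never, and the suite checks the resulting RestartCount, Phase, Ready condition, and State for each. The corresponding constants in Go:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// One exiting container, three policies, three expected outcomes:
	// Always keeps restarting regardless of exit code, OnFailure restarts
	// only on non-zero exit, Never leaves the pod Succeeded or Failed.
	for _, p := range []corev1.RestartPolicy{
		corev1.RestartPolicyAlways,    // "rpa"
		corev1.RestartPolicyOnFailure, // "rpof"
		corev1.RestartPolicyNever,     // "rpn"
	} {
		fmt.Println(p)
	}
}

------------------------------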
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:20:52.925: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210
STEP: creating the pod
Feb  6 14:20:52.984: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6643'
Feb  6 14:20:55.701: INFO: stderr: ""
Feb  6 14:20:55.701: INFO: stdout: "pod/pause created\n"
Feb  6 14:20:55.701: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Feb  6 14:20:55.701: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-6643" to be "running and ready"
Feb  6 14:20:55.717: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 15.786456ms
Feb  6 14:20:57.724: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023560714s
Feb  6 14:20:59.738: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037646014s
Feb  6 14:21:01.751: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049704818s
Feb  6 14:21:03.771: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 8.069843665s
Feb  6 14:21:03.771: INFO: Pod "pause" satisfied condition "running and ready"
Feb  6 14:21:03.771: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: adding the label testing-label with value testing-label-value to a pod
Feb  6 14:21:03.772: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-6643'
Feb  6 14:21:03.944: INFO: stderr: ""
Feb  6 14:21:03.944: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Feb  6 14:21:03.944: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-6643'
Feb  6 14:21:04.044: INFO: stderr: ""
Feb  6 14:21:04.044: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          9s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Feb  6 14:21:04.045: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-6643'
Feb  6 14:21:04.203: INFO: stderr: ""
Feb  6 14:21:04.203: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Feb  6 14:21:04.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-6643'
Feb  6 14:21:04.283: INFO: stderr: ""
Feb  6 14:21:04.283: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          9s    \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217
STEP: using delete to clean up resources
Feb  6 14:21:04.283: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6643'
Feb  6 14:21:04.396: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  6 14:21:04.396: INFO: stdout: "pod \"pause\" force deleted\n"
Feb  6 14:21:04.397: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-6643'
Feb  6 14:21:04.499: INFO: stderr: "No resources found.\n"
Feb  6 14:21:04.499: INFO: stdout: ""
Feb  6 14:21:04.500: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-6643 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb  6 14:21:04.606: INFO: stderr: ""
Feb  6 14:21:04.606: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:21:04.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6643" for this suite.
Feb  6 14:21:10.641: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:21:10.774: INFO: namespace kubectl-6643 deletion completed in 6.159627421s

• [SLOW TEST:17.849 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:21:10.774: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Feb  6 14:21:10.828: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:21:29.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8818" for this suite.
Feb  6 14:21:51.290: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:21:51.432: INFO: namespace init-container-8818 deletion completed in 22.254610403s

• [SLOW TEST:40.658 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
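
"PodSpec: initContainers in spec.initContainers" above refers to ordered one-shot containers that must each exit 0 before the app containers start; with restartPolicy Always the pod then keeps running, while the RestartNever variant exercised in a later test below ends in phase Succeeded instead. A sketch of such a pod; the images and commands are illustrative.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "init-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			// Run strictly in order; each must exit 0 before the next
			// starts, and before any regular container is created.
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "busybox", Command: []string{"true"}},
				{Name: "init2", Image: "busybox", Command: []string{"true"}},
			},
			Containers: []corev1.Container{{
				Name:    "app",
				Image:   "busybox",
				Command: []string{"sleep", "3600"},
			}},
		},
	}
	fmt.Println(len(pod.Spec.InitContainers), "init containers")
}

------------------------------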
SSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:21:51.433: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-afc7d3a0-bd7f-48e9-a276-cc38080954be
STEP: Creating a pod to test consume configMaps
Feb  6 14:21:51.573: INFO: Waiting up to 5m0s for pod "pod-configmaps-d50eb650-e57b-4eee-b62c-abf5cc8c6d2c" in namespace "configmap-7902" to be "success or failure"
Feb  6 14:21:51.605: INFO: Pod "pod-configmaps-d50eb650-e57b-4eee-b62c-abf5cc8c6d2c": Phase="Pending", Reason="", readiness=false. Elapsed: 32.016812ms
Feb  6 14:21:53.617: INFO: Pod "pod-configmaps-d50eb650-e57b-4eee-b62c-abf5cc8c6d2c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043651041s
Feb  6 14:21:55.629: INFO: Pod "pod-configmaps-d50eb650-e57b-4eee-b62c-abf5cc8c6d2c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055868291s
Feb  6 14:21:57.639: INFO: Pod "pod-configmaps-d50eb650-e57b-4eee-b62c-abf5cc8c6d2c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.065759768s
Feb  6 14:21:59.648: INFO: Pod "pod-configmaps-d50eb650-e57b-4eee-b62c-abf5cc8c6d2c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.075148577s
Feb  6 14:22:01.658: INFO: Pod "pod-configmaps-d50eb650-e57b-4eee-b62c-abf5cc8c6d2c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.08533112s
Feb  6 14:22:03.671: INFO: Pod "pod-configmaps-d50eb650-e57b-4eee-b62c-abf5cc8c6d2c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.097820917s
STEP: Saw pod success
Feb  6 14:22:03.671: INFO: Pod "pod-configmaps-d50eb650-e57b-4eee-b62c-abf5cc8c6d2c" satisfied condition "success or failure"
Feb  6 14:22:03.676: INFO: Trying to get logs from node iruya-node pod pod-configmaps-d50eb650-e57b-4eee-b62c-abf5cc8c6d2c container configmap-volume-test: 
STEP: delete the pod
Feb  6 14:22:03.882: INFO: Waiting for pod pod-configmaps-d50eb650-e57b-4eee-b62c-abf5cc8c6d2c to disappear
Feb  6 14:22:03.895: INFO: Pod pod-configmaps-d50eb650-e57b-4eee-b62c-abf5cc8c6d2c no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:22:03.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7902" for this suite.
Feb  6 14:22:10.043: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:22:10.217: INFO: namespace configmap-7902 deletion completed in 6.288857536s

• [SLOW TEST:18.785 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
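Note: "mappings" in this test name refers to the volume's items list, which remaps a ConfigMap key to a custom file path, and "non-root" to the pod-level security context. A hedged sketch (names, key, and UID are illustrative):

  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: demo-config
  data:
    data-1: value-1
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: configmap-volume-demo
  spec:
    securityContext:
      runAsUser: 1000                # non-root, as the [LinuxOnly] non-root variant requires
    restartPolicy: Never
    containers:
    - name: configmap-volume-test
      image: busybox
      command: ["cat", "/etc/config/path/to/data"]
      volumeMounts:
      - name: config
        mountPath: /etc/config
    volumes:
    - name: config
      configMap:
        name: demo-config
        items:                       # the "mapping": key data-1 lands at a custom relative path
        - key: data-1
          path: path/to/data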
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:22:10.218: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Feb  6 14:22:10.317: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:22:24.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8215" for this suite.
Feb  6 14:22:30.209: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:22:30.333: INFO: namespace init-container-8215 deletion completed in 6.14146686s

• [SLOW TEST:20.115 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
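Note: the RestartNever variant differs from the RestartAlways case above only in the restart policy and in having a main container that exits; a sketch under the same illustrative assumptions:

  apiVersion: v1
  kind: Pod
  metadata:
    name: init-restartnever-demo     # illustrative name
  spec:
    restartPolicy: Never             # an init-container failure would mark the pod Failed
    initContainers:
    - name: init-1
      image: busybox
      command: ["/bin/true"]
    containers:
    - name: main
      image: busybox
      command: ["/bin/true"]         # main exits 0, so the pod can reach the Succeeded phase

With restartPolicy: Never a failing init container is not retried; the pod goes straight to Failed, which is why the success path here requires every init container to complete.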
SSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:22:30.333: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb  6 14:22:30.449: INFO: Waiting up to 5m0s for pod "downward-api-70229096-ec96-42ef-96f2-7b2abd93aa10" in namespace "downward-api-6743" to be "success or failure"
Feb  6 14:22:30.463: INFO: Pod "downward-api-70229096-ec96-42ef-96f2-7b2abd93aa10": Phase="Pending", Reason="", readiness=false. Elapsed: 14.376599ms
Feb  6 14:22:32.474: INFO: Pod "downward-api-70229096-ec96-42ef-96f2-7b2abd93aa10": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025558066s
Feb  6 14:22:34.488: INFO: Pod "downward-api-70229096-ec96-42ef-96f2-7b2abd93aa10": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03886143s
Feb  6 14:22:36.505: INFO: Pod "downward-api-70229096-ec96-42ef-96f2-7b2abd93aa10": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056301497s
Feb  6 14:22:38.520: INFO: Pod "downward-api-70229096-ec96-42ef-96f2-7b2abd93aa10": Phase="Pending", Reason="", readiness=false. Elapsed: 8.071192043s
Feb  6 14:22:40.536: INFO: Pod "downward-api-70229096-ec96-42ef-96f2-7b2abd93aa10": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.087174346s
STEP: Saw pod success
Feb  6 14:22:40.536: INFO: Pod "downward-api-70229096-ec96-42ef-96f2-7b2abd93aa10" satisfied condition "success or failure"
Feb  6 14:22:40.542: INFO: Trying to get logs from node iruya-node pod downward-api-70229096-ec96-42ef-96f2-7b2abd93aa10 container dapi-container: 
STEP: delete the pod
Feb  6 14:22:40.681: INFO: Waiting for pod downward-api-70229096-ec96-42ef-96f2-7b2abd93aa10 to disappear
Feb  6 14:22:40.713: INFO: Pod downward-api-70229096-ec96-42ef-96f2-7b2abd93aa10 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:22:40.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6743" for this suite.
Feb  6 14:22:46.742: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:22:46.890: INFO: namespace downward-api-6743 deletion completed in 6.171532494s

• [SLOW TEST:16.557 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
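Note: the downward API exposes pod metadata to the container through environment variables; for the UID case the manifest (reconstructed, with names other than the logged container name being illustrative) would use a fieldRef like this:

  apiVersion: v1
  kind: Pod
  metadata:
    name: downward-api-demo
  spec:
    restartPolicy: Never
    containers:
    - name: dapi-container           # container name as logged above
      image: busybox
      command: ["sh", "-c", "echo POD_UID=$POD_UID"]
      env:
      - name: POD_UID
        valueFrom:
          fieldRef:
            fieldPath: metadata.uid  # the pod's UID, resolved by the kubelet at start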
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:22:46.891: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-9688
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Feb  6 14:22:47.048: INFO: Found 0 stateful pods, waiting for 3
Feb  6 14:22:57.063: INFO: Found 2 stateful pods, waiting for 3
Feb  6 14:23:07.062: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  6 14:23:07.062: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  6 14:23:07.062: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb  6 14:23:17.059: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  6 14:23:17.059: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  6 14:23:17.059: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Feb  6 14:23:17.073: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9688 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  6 14:23:17.662: INFO: stderr: "I0206 14:23:17.337331    3003 log.go:172] (0xc0008f2420) (0xc00028e6e0) Create stream\nI0206 14:23:17.337508    3003 log.go:172] (0xc0008f2420) (0xc00028e6e0) Stream added, broadcasting: 1\nI0206 14:23:17.395465    3003 log.go:172] (0xc0008f2420) Reply frame received for 1\nI0206 14:23:17.395520    3003 log.go:172] (0xc0008f2420) (0xc000676460) Create stream\nI0206 14:23:17.395534    3003 log.go:172] (0xc0008f2420) (0xc000676460) Stream added, broadcasting: 3\nI0206 14:23:17.396661    3003 log.go:172] (0xc0008f2420) Reply frame received for 3\nI0206 14:23:17.396698    3003 log.go:172] (0xc0008f2420) (0xc00028e780) Create stream\nI0206 14:23:17.396710    3003 log.go:172] (0xc0008f2420) (0xc00028e780) Stream added, broadcasting: 5\nI0206 14:23:17.398073    3003 log.go:172] (0xc0008f2420) Reply frame received for 5\nI0206 14:23:17.488821    3003 log.go:172] (0xc0008f2420) Data frame received for 5\nI0206 14:23:17.488893    3003 log.go:172] (0xc00028e780) (5) Data frame handling\nI0206 14:23:17.488929    3003 log.go:172] (0xc00028e780) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0206 14:23:17.554951    3003 log.go:172] (0xc0008f2420) Data frame received for 3\nI0206 14:23:17.555153    3003 log.go:172] (0xc000676460) (3) Data frame handling\nI0206 14:23:17.555221    3003 log.go:172] (0xc000676460) (3) Data frame sent\nI0206 14:23:17.642990    3003 log.go:172] (0xc0008f2420) (0xc000676460) Stream removed, broadcasting: 3\nI0206 14:23:17.643181    3003 log.go:172] (0xc0008f2420) Data frame received for 1\nI0206 14:23:17.643215    3003 log.go:172] (0xc0008f2420) (0xc00028e780) Stream removed, broadcasting: 5\nI0206 14:23:17.643285    3003 log.go:172] (0xc00028e6e0) (1) Data frame handling\nI0206 14:23:17.643326    3003 log.go:172] (0xc00028e6e0) (1) Data frame sent\nI0206 14:23:17.643355    3003 log.go:172] (0xc0008f2420) (0xc00028e6e0) Stream removed, broadcasting: 1\nI0206 14:23:17.643384    3003 log.go:172] (0xc0008f2420) Go away received\nI0206 14:23:17.647178    3003 log.go:172] (0xc0008f2420) (0xc00028e6e0) Stream removed, broadcasting: 1\nI0206 14:23:17.647272    3003 log.go:172] (0xc0008f2420) (0xc000676460) Stream removed, broadcasting: 3\nI0206 14:23:17.647289    3003 log.go:172] (0xc0008f2420) (0xc00028e780) Stream removed, broadcasting: 5\n"
Feb  6 14:23:17.662: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  6 14:23:17.662: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Feb  6 14:23:27.719: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Feb  6 14:23:37.812: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9688 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  6 14:23:38.219: INFO: stderr: "I0206 14:23:38.041861    3024 log.go:172] (0xc000960370) (0xc0002a06e0) Create stream\nI0206 14:23:38.041982    3024 log.go:172] (0xc000960370) (0xc0002a06e0) Stream added, broadcasting: 1\nI0206 14:23:38.044306    3024 log.go:172] (0xc000960370) Reply frame received for 1\nI0206 14:23:38.044328    3024 log.go:172] (0xc000960370) (0xc0002a0780) Create stream\nI0206 14:23:38.044333    3024 log.go:172] (0xc000960370) (0xc0002a0780) Stream added, broadcasting: 3\nI0206 14:23:38.045153    3024 log.go:172] (0xc000960370) Reply frame received for 3\nI0206 14:23:38.045177    3024 log.go:172] (0xc000960370) (0xc00059c640) Create stream\nI0206 14:23:38.045187    3024 log.go:172] (0xc000960370) (0xc00059c640) Stream added, broadcasting: 5\nI0206 14:23:38.045777    3024 log.go:172] (0xc000960370) Reply frame received for 5\nI0206 14:23:38.126034    3024 log.go:172] (0xc000960370) Data frame received for 5\nI0206 14:23:38.126215    3024 log.go:172] (0xc00059c640) (5) Data frame handling\nI0206 14:23:38.126256    3024 log.go:172] (0xc00059c640) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0206 14:23:38.128128    3024 log.go:172] (0xc000960370) Data frame received for 3\nI0206 14:23:38.128180    3024 log.go:172] (0xc0002a0780) (3) Data frame handling\nI0206 14:23:38.128262    3024 log.go:172] (0xc0002a0780) (3) Data frame sent\nI0206 14:23:38.212532    3024 log.go:172] (0xc000960370) (0xc0002a0780) Stream removed, broadcasting: 3\nI0206 14:23:38.212715    3024 log.go:172] (0xc000960370) Data frame received for 1\nI0206 14:23:38.212781    3024 log.go:172] (0xc0002a06e0) (1) Data frame handling\nI0206 14:23:38.212808    3024 log.go:172] (0xc0002a06e0) (1) Data frame sent\nI0206 14:23:38.212850    3024 log.go:172] (0xc000960370) (0xc00059c640) Stream removed, broadcasting: 5\nI0206 14:23:38.212939    3024 log.go:172] (0xc000960370) (0xc0002a06e0) Stream removed, broadcasting: 1\nI0206 14:23:38.212975    3024 log.go:172] (0xc000960370) Go away received\nI0206 14:23:38.213704    3024 log.go:172] (0xc000960370) (0xc0002a06e0) Stream removed, broadcasting: 1\nI0206 14:23:38.213726    3024 log.go:172] (0xc000960370) (0xc0002a0780) Stream removed, broadcasting: 3\nI0206 14:23:38.213736    3024 log.go:172] (0xc000960370) (0xc00059c640) Stream removed, broadcasting: 5\n"
Feb  6 14:23:38.220: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  6 14:23:38.220: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  6 14:23:48.249: INFO: Waiting for StatefulSet statefulset-9688/ss2 to complete update
Feb  6 14:23:48.249: INFO: Waiting for Pod statefulset-9688/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  6 14:23:48.249: INFO: Waiting for Pod statefulset-9688/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  6 14:23:58.260: INFO: Waiting for StatefulSet statefulset-9688/ss2 to complete update
Feb  6 14:23:58.260: INFO: Waiting for Pod statefulset-9688/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  6 14:23:58.260: INFO: Waiting for Pod statefulset-9688/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  6 14:24:08.259: INFO: Waiting for StatefulSet statefulset-9688/ss2 to complete update
Feb  6 14:24:08.259: INFO: Waiting for Pod statefulset-9688/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  6 14:24:18.260: INFO: Waiting for StatefulSet statefulset-9688/ss2 to complete update
Feb  6 14:24:18.260: INFO: Waiting for Pod statefulset-9688/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  6 14:24:28.263: INFO: Waiting for StatefulSet statefulset-9688/ss2 to complete update
STEP: Rolling back to a previous revision
Feb  6 14:24:38.261: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9688 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  6 14:24:38.774: INFO: stderr: "I0206 14:24:38.487908    3045 log.go:172] (0xc0001188f0) (0xc000956640) Create stream\nI0206 14:24:38.488213    3045 log.go:172] (0xc0001188f0) (0xc000956640) Stream added, broadcasting: 1\nI0206 14:24:38.493452    3045 log.go:172] (0xc0001188f0) Reply frame received for 1\nI0206 14:24:38.493568    3045 log.go:172] (0xc0001188f0) (0xc00075c000) Create stream\nI0206 14:24:38.493588    3045 log.go:172] (0xc0001188f0) (0xc00075c000) Stream added, broadcasting: 3\nI0206 14:24:38.495334    3045 log.go:172] (0xc0001188f0) Reply frame received for 3\nI0206 14:24:38.495368    3045 log.go:172] (0xc0001188f0) (0xc000682320) Create stream\nI0206 14:24:38.495390    3045 log.go:172] (0xc0001188f0) (0xc000682320) Stream added, broadcasting: 5\nI0206 14:24:38.497108    3045 log.go:172] (0xc0001188f0) Reply frame received for 5\nI0206 14:24:38.664542    3045 log.go:172] (0xc0001188f0) Data frame received for 5\nI0206 14:24:38.664589    3045 log.go:172] (0xc000682320) (5) Data frame handling\nI0206 14:24:38.664600    3045 log.go:172] (0xc000682320) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0206 14:24:38.701224    3045 log.go:172] (0xc0001188f0) Data frame received for 3\nI0206 14:24:38.701245    3045 log.go:172] (0xc00075c000) (3) Data frame handling\nI0206 14:24:38.701263    3045 log.go:172] (0xc00075c000) (3) Data frame sent\nI0206 14:24:38.767358    3045 log.go:172] (0xc0001188f0) (0xc00075c000) Stream removed, broadcasting: 3\nI0206 14:24:38.767551    3045 log.go:172] (0xc0001188f0) Data frame received for 1\nI0206 14:24:38.767572    3045 log.go:172] (0xc000956640) (1) Data frame handling\nI0206 14:24:38.767595    3045 log.go:172] (0xc000956640) (1) Data frame sent\nI0206 14:24:38.767625    3045 log.go:172] (0xc0001188f0) (0xc000956640) Stream removed, broadcasting: 1\nI0206 14:24:38.767975    3045 log.go:172] (0xc0001188f0) (0xc000682320) Stream removed, broadcasting: 5\nI0206 14:24:38.768133    3045 log.go:172] (0xc0001188f0) Go away received\nI0206 14:24:38.768809    3045 log.go:172] (0xc0001188f0) (0xc000956640) Stream removed, broadcasting: 1\nI0206 14:24:38.768847    3045 log.go:172] (0xc0001188f0) (0xc00075c000) Stream removed, broadcasting: 3\nI0206 14:24:38.768877    3045 log.go:172] (0xc0001188f0) (0xc000682320) Stream removed, broadcasting: 5\n"
Feb  6 14:24:38.774: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  6 14:24:38.775: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  6 14:24:48.833: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Feb  6 14:24:58.923: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9688 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  6 14:24:59.281: INFO: stderr: "I0206 14:24:59.102559    3067 log.go:172] (0xc000a46580) (0xc00033e6e0) Create stream\nI0206 14:24:59.102723    3067 log.go:172] (0xc000a46580) (0xc00033e6e0) Stream added, broadcasting: 1\nI0206 14:24:59.111387    3067 log.go:172] (0xc000a46580) Reply frame received for 1\nI0206 14:24:59.111442    3067 log.go:172] (0xc000a46580) (0xc00057a280) Create stream\nI0206 14:24:59.111457    3067 log.go:172] (0xc000a46580) (0xc00057a280) Stream added, broadcasting: 3\nI0206 14:24:59.113321    3067 log.go:172] (0xc000a46580) Reply frame received for 3\nI0206 14:24:59.113394    3067 log.go:172] (0xc000a46580) (0xc00033e000) Create stream\nI0206 14:24:59.113417    3067 log.go:172] (0xc000a46580) (0xc00033e000) Stream added, broadcasting: 5\nI0206 14:24:59.115316    3067 log.go:172] (0xc000a46580) Reply frame received for 5\nI0206 14:24:59.190774    3067 log.go:172] (0xc000a46580) Data frame received for 3\nI0206 14:24:59.190847    3067 log.go:172] (0xc00057a280) (3) Data frame handling\nI0206 14:24:59.190874    3067 log.go:172] (0xc00057a280) (3) Data frame sent\nI0206 14:24:59.190939    3067 log.go:172] (0xc000a46580) Data frame received for 5\nI0206 14:24:59.190965    3067 log.go:172] (0xc00033e000) (5) Data frame handling\nI0206 14:24:59.190982    3067 log.go:172] (0xc00033e000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0206 14:24:59.270149    3067 log.go:172] (0xc000a46580) Data frame received for 1\nI0206 14:24:59.270283    3067 log.go:172] (0xc00033e6e0) (1) Data frame handling\nI0206 14:24:59.270335    3067 log.go:172] (0xc00033e6e0) (1) Data frame sent\nI0206 14:24:59.272513    3067 log.go:172] (0xc000a46580) (0xc00057a280) Stream removed, broadcasting: 3\nI0206 14:24:59.272621    3067 log.go:172] (0xc000a46580) (0xc00033e6e0) Stream removed, broadcasting: 1\nI0206 14:24:59.272929    3067 log.go:172] (0xc000a46580) (0xc00033e000) Stream removed, broadcasting: 5\nI0206 14:24:59.273140    3067 log.go:172] (0xc000a46580) (0xc00033e6e0) Stream removed, broadcasting: 1\nI0206 14:24:59.273151    3067 log.go:172] (0xc000a46580) (0xc00057a280) Stream removed, broadcasting: 3\nI0206 14:24:59.273159    3067 log.go:172] (0xc000a46580) (0xc00033e000) Stream removed, broadcasting: 5\nI0206 14:24:59.273510    3067 log.go:172] (0xc000a46580) Go away received\n"
Feb  6 14:24:59.281: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  6 14:24:59.281: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  6 14:25:09.307: INFO: Waiting for StatefulSet statefulset-9688/ss2 to complete update
Feb  6 14:25:09.307: INFO: Waiting for Pod statefulset-9688/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb  6 14:25:09.307: INFO: Waiting for Pod statefulset-9688/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb  6 14:25:19.606: INFO: Waiting for StatefulSet statefulset-9688/ss2 to complete update
Feb  6 14:25:19.606: INFO: Waiting for Pod statefulset-9688/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb  6 14:25:29.321: INFO: Waiting for StatefulSet statefulset-9688/ss2 to complete update
Feb  6 14:25:29.321: INFO: Waiting for Pod statefulset-9688/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb  6 14:25:39.319: INFO: Waiting for StatefulSet statefulset-9688/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Feb  6 14:25:49.323: INFO: Deleting all statefulset in ns statefulset-9688
Feb  6 14:25:49.328: INFO: Scaling statefulset ss2 to 0
Feb  6 14:26:19.390: INFO: Waiting for statefulset status.replicas updated to 0
Feb  6 14:26:19.396: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:26:19.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9688" for this suite.
Feb  6 14:26:27.519: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:26:27.809: INFO: namespace statefulset-9688 deletion completed in 8.370469742s

• [SLOW TEST:220.919 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
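Note: the StatefulSet itself is not dumped in the log, but its key fields can be read off the messages above (name ss2, three replicas, service test, nginx:1.14-alpine, a RollingUpdate). A reconstruction, hedged where the log is silent:

  apiVersion: apps/v1
  kind: StatefulSet
  metadata:
    name: ss2
    namespace: statefulset-9688
  spec:
    serviceName: test                # the service created in the BeforeEach step
    replicas: 3
    selector:
      matchLabels:
        app: ss2-demo                # illustrative; the real labels are not in the log
    updateStrategy:
      type: RollingUpdate            # pods replaced one at a time, highest ordinal first
    template:
      metadata:
        labels:
          app: ss2-demo
      spec:
        containers:
        - name: nginx
          image: docker.io/library/nginx:1.14-alpine   # updated to 1.15-alpine, then rolled back

The rollback at 14:24 is just another rolling update whose template is the previous revision; pods are replaced in reverse ordinal order (ss2-2, then ss2-1, then ss2-0), which matches the revision-wait messages above.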
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:26:27.812: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-da98b861-0bf3-4e67-b47e-1be0d1715e76
STEP: Creating a pod to test consume configMaps
Feb  6 14:26:28.125: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-dfaa8756-97f0-4e95-9ea5-2143071e5f4a" in namespace "projected-8922" to be "success or failure"
Feb  6 14:26:28.152: INFO: Pod "pod-projected-configmaps-dfaa8756-97f0-4e95-9ea5-2143071e5f4a": Phase="Pending", Reason="", readiness=false. Elapsed: 27.38407ms
Feb  6 14:26:30.164: INFO: Pod "pod-projected-configmaps-dfaa8756-97f0-4e95-9ea5-2143071e5f4a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038839033s
Feb  6 14:26:32.177: INFO: Pod "pod-projected-configmaps-dfaa8756-97f0-4e95-9ea5-2143071e5f4a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051426401s
Feb  6 14:26:34.191: INFO: Pod "pod-projected-configmaps-dfaa8756-97f0-4e95-9ea5-2143071e5f4a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.065439996s
Feb  6 14:26:36.199: INFO: Pod "pod-projected-configmaps-dfaa8756-97f0-4e95-9ea5-2143071e5f4a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.074094504s
Feb  6 14:26:38.211: INFO: Pod "pod-projected-configmaps-dfaa8756-97f0-4e95-9ea5-2143071e5f4a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.086129137s
STEP: Saw pod success
Feb  6 14:26:38.211: INFO: Pod "pod-projected-configmaps-dfaa8756-97f0-4e95-9ea5-2143071e5f4a" satisfied condition "success or failure"
Feb  6 14:26:38.216: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-dfaa8756-97f0-4e95-9ea5-2143071e5f4a container projected-configmap-volume-test: 
STEP: delete the pod
Feb  6 14:26:38.449: INFO: Waiting for pod pod-projected-configmaps-dfaa8756-97f0-4e95-9ea5-2143071e5f4a to disappear
Feb  6 14:26:38.453: INFO: Pod pod-projected-configmaps-dfaa8756-97f0-4e95-9ea5-2143071e5f4a no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:26:38.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8922" for this suite.
Feb  6 14:26:44.486: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:26:44.587: INFO: namespace projected-8922 deletion completed in 6.127117897s

• [SLOW TEST:16.776 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
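Note: this is the same key-to-path mapping as the earlier ConfigMap test, but wired through a projected volume, which can merge configMap, secret, downwardAPI, and serviceAccountToken sources into a single mount. Sketch (names and key are illustrative):

  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-configmap-demo
  spec:
    securityContext:
      runAsUser: 1000                # the non-root part of the test name
    restartPolicy: Never
    containers:
    - name: projected-configmap-volume-test   # container name as logged above
      image: busybox
      command: ["cat", "/etc/projected/remapped-key"]
      volumeMounts:
      - name: projected-vol
        mountPath: /etc/projected
    volumes:
    - name: projected-vol
      projected:
        sources:
        - configMap:
            name: demo-config
            items:
            - key: data-1
              path: remapped-key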
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:26:44.589: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on tmpfs
Feb  6 14:26:44.693: INFO: Waiting up to 5m0s for pod "pod-db4ff312-1e35-4972-a053-796ee7ddb646" in namespace "emptydir-8029" to be "success or failure"
Feb  6 14:26:44.713: INFO: Pod "pod-db4ff312-1e35-4972-a053-796ee7ddb646": Phase="Pending", Reason="", readiness=false. Elapsed: 19.890891ms
Feb  6 14:26:46.728: INFO: Pod "pod-db4ff312-1e35-4972-a053-796ee7ddb646": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034472755s
Feb  6 14:26:48.759: INFO: Pod "pod-db4ff312-1e35-4972-a053-796ee7ddb646": Phase="Pending", Reason="", readiness=false. Elapsed: 4.066099943s
Feb  6 14:26:50.767: INFO: Pod "pod-db4ff312-1e35-4972-a053-796ee7ddb646": Phase="Pending", Reason="", readiness=false. Elapsed: 6.073696347s
Feb  6 14:26:52.774: INFO: Pod "pod-db4ff312-1e35-4972-a053-796ee7ddb646": Phase="Pending", Reason="", readiness=false. Elapsed: 8.08030443s
Feb  6 14:26:54.784: INFO: Pod "pod-db4ff312-1e35-4972-a053-796ee7ddb646": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.090676861s
STEP: Saw pod success
Feb  6 14:26:54.784: INFO: Pod "pod-db4ff312-1e35-4972-a053-796ee7ddb646" satisfied condition "success or failure"
Feb  6 14:26:54.789: INFO: Trying to get logs from node iruya-node pod pod-db4ff312-1e35-4972-a053-796ee7ddb646 container test-container: 
STEP: delete the pod
Feb  6 14:26:54.844: INFO: Waiting for pod pod-db4ff312-1e35-4972-a053-796ee7ddb646 to disappear
Feb  6 14:26:54.857: INFO: Pod pod-db4ff312-1e35-4972-a053-796ee7ddb646 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:26:54.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8029" for this suite.
Feb  6 14:27:00.934: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:27:01.067: INFO: namespace emptydir-8029 deletion completed in 6.197106527s

• [SLOW TEST:16.479 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
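Note: "volume on tmpfs" means an emptyDir with medium: Memory; the test pod prints the mount's mode bits, and the conformance check expects the default 0777 permissions. A sketch with illustrative names:

  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-tmpfs-demo
  spec:
    restartPolicy: Never
    containers:
    - name: test-container           # container name as logged above
      image: busybox
      command: ["sh", "-c", "ls -ld /test-volume"]   # prints the mount's mode bits
      volumeMounts:
      - name: vol
        mountPath: /test-volume
    volumes:
    - name: vol
      emptyDir:
        medium: Memory               # backs the volume with tmpfs instead of node disk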
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:27:01.069: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  6 14:27:01.147: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1c318e98-2468-4669-a034-55c4f4242321" in namespace "projected-349" to be "success or failure"
Feb  6 14:27:01.195: INFO: Pod "downwardapi-volume-1c318e98-2468-4669-a034-55c4f4242321": Phase="Pending", Reason="", readiness=false. Elapsed: 48.327424ms
Feb  6 14:27:03.204: INFO: Pod "downwardapi-volume-1c318e98-2468-4669-a034-55c4f4242321": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057444794s
Feb  6 14:27:05.216: INFO: Pod "downwardapi-volume-1c318e98-2468-4669-a034-55c4f4242321": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068670715s
Feb  6 14:27:07.240: INFO: Pod "downwardapi-volume-1c318e98-2468-4669-a034-55c4f4242321": Phase="Pending", Reason="", readiness=false. Elapsed: 6.093488736s
Feb  6 14:27:09.258: INFO: Pod "downwardapi-volume-1c318e98-2468-4669-a034-55c4f4242321": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.110871177s
STEP: Saw pod success
Feb  6 14:27:09.258: INFO: Pod "downwardapi-volume-1c318e98-2468-4669-a034-55c4f4242321" satisfied condition "success or failure"
Feb  6 14:27:09.263: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-1c318e98-2468-4669-a034-55c4f4242321 container client-container: 
STEP: delete the pod
Feb  6 14:27:09.340: INFO: Waiting for pod downwardapi-volume-1c318e98-2468-4669-a034-55c4f4242321 to disappear
Feb  6 14:27:09.354: INFO: Pod downwardapi-volume-1c318e98-2468-4669-a034-55c4f4242321 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:27:09.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-349" for this suite.
Feb  6 14:27:15.439: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:27:15.547: INFO: namespace projected-349 deletion completed in 6.151691461s

• [SLOW TEST:14.478 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
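Note: the container under test deliberately sets no resources.limits.memory; the downward API then reports the node's allocatable memory as the default limit, which is what this test asserts. A reconstruction (path and divisor are illustrative):

  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-downwardapi-demo
  spec:
    restartPolicy: Never
    containers:
    - name: client-container         # container name as logged above; note: no memory limit set
      image: busybox
      command: ["cat", "/etc/podinfo/mem_limit"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      projected:
        sources:
        - downwardAPI:
            items:
            - path: mem_limit
              resourceFieldRef:
                containerName: client-container
                resource: limits.memory
                divisor: 1Mi         # report the value in mebibytes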
SSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:27:15.547: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-3c07de31-6cab-4dd4-a403-65b4c28c96e6
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:27:27.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6111" for this suite.
Feb  6 14:27:51.874: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:27:52.045: INFO: namespace configmap-6111 deletion completed in 24.211553252s

• [SLOW TEST:36.498 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
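Note: ConfigMaps carry UTF-8 values under data and arbitrary bytes, base64-encoded, under binaryData; the test mounts both and waits for each to appear in the volume. A minimal sketch (key names and bytes are illustrative):

  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: binary-demo
  data:
    text-data: "hello"               # plain UTF-8 values live under data
  binaryData:
    binary-file: 3q2+7w==            # arbitrary bytes, base64-encoded (here 0xDEADBEEF)

Mounted through an ordinary configMap volume, the binary-file key materializes as a file containing the decoded raw bytes, not the base64 text.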
S
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:27:52.045: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:28:02.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-7274" for this suite.
Feb  6 14:28:48.261: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:28:48.437: INFO: namespace kubelet-test-7274 deletion completed in 46.203208732s

• [SLOW TEST:56.392 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
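Note: this test only checks that a busybox command's stdout is captured by the container log machinery, i.e. is retrievable with kubectl logs. A sketch (name and message are illustrative):

  apiVersion: v1
  kind: Pod
  metadata:
    name: busybox-logs-demo
  spec:
    restartPolicy: Never
    containers:
    - name: busybox
      image: busybox
      command: ["sh", "-c", "echo 'Hello from the busybox container'"]   # expected in the pod's logs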
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:28:48.438: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-dde62c5f-29ed-406c-82b7-14b1b2398e1f
STEP: Creating a pod to test consume secrets
Feb  6 14:28:48.606: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1bd82c92-0178-4d40-86f1-4053b13258ea" in namespace "projected-6691" to be "success or failure"
Feb  6 14:28:48.683: INFO: Pod "pod-projected-secrets-1bd82c92-0178-4d40-86f1-4053b13258ea": Phase="Pending", Reason="", readiness=false. Elapsed: 77.507502ms
Feb  6 14:28:50.689: INFO: Pod "pod-projected-secrets-1bd82c92-0178-4d40-86f1-4053b13258ea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.083547714s
Feb  6 14:28:52.747: INFO: Pod "pod-projected-secrets-1bd82c92-0178-4d40-86f1-4053b13258ea": Phase="Pending", Reason="", readiness=false. Elapsed: 4.14170702s
Feb  6 14:28:54.760: INFO: Pod "pod-projected-secrets-1bd82c92-0178-4d40-86f1-4053b13258ea": Phase="Pending", Reason="", readiness=false. Elapsed: 6.153989905s
Feb  6 14:28:56.766: INFO: Pod "pod-projected-secrets-1bd82c92-0178-4d40-86f1-4053b13258ea": Phase="Pending", Reason="", readiness=false. Elapsed: 8.160072288s
Feb  6 14:28:58.791: INFO: Pod "pod-projected-secrets-1bd82c92-0178-4d40-86f1-4053b13258ea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.185762756s
STEP: Saw pod success
Feb  6 14:28:58.792: INFO: Pod "pod-projected-secrets-1bd82c92-0178-4d40-86f1-4053b13258ea" satisfied condition "success or failure"
Feb  6 14:28:58.796: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-1bd82c92-0178-4d40-86f1-4053b13258ea container projected-secret-volume-test: 
STEP: delete the pod
Feb  6 14:28:58.884: INFO: Waiting for pod pod-projected-secrets-1bd82c92-0178-4d40-86f1-4053b13258ea to disappear
Feb  6 14:28:58.965: INFO: Pod pod-projected-secrets-1bd82c92-0178-4d40-86f1-4053b13258ea no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:28:58.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6691" for this suite.
Feb  6 14:29:05.001: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:29:05.097: INFO: namespace projected-6691 deletion completed in 6.126142768s

• [SLOW TEST:16.659 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
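Note: same projected-volume pattern as the configMap case, with a secret source; Secret values under data are base64-encoded at rest and decoded into the mounted files. Sketch (names and value are illustrative):

  apiVersion: v1
  kind: Secret
  metadata:
    name: demo-secret
  data:
    data-1: dmFsdWUtMQ==             # base64 for "value-1"
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-secret-demo
  spec:
    restartPolicy: Never
    containers:
    - name: projected-secret-volume-test   # container name as logged above
      image: busybox
      command: ["cat", "/etc/projected-secret/new-path-data-1"]
      volumeMounts:
      - name: secret-vol
        mountPath: /etc/projected-secret
        readOnly: true
    volumes:
    - name: secret-vol
      projected:
        sources:
        - secret:
            name: demo-secret
            items:                   # the "mapping": remap key data-1 to a custom path
            - key: data-1
              path: new-path-data-1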
SSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:29:05.097: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service multi-endpoint-test in namespace services-1967
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1967 to expose endpoints map[]
Feb  6 14:29:05.294: INFO: Get endpoints failed (10.983308ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Feb  6 14:29:06.302: INFO: successfully validated that service multi-endpoint-test in namespace services-1967 exposes endpoints map[] (1.019004547s elapsed)
STEP: Creating pod pod1 in namespace services-1967
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1967 to expose endpoints map[pod1:[100]]
Feb  6 14:29:10.507: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.177055146s elapsed, will retry)
Feb  6 14:29:13.554: INFO: successfully validated that service multi-endpoint-test in namespace services-1967 exposes endpoints map[pod1:[100]] (7.223429421s elapsed)
STEP: Creating pod pod2 in namespace services-1967
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1967 to expose endpoints map[pod1:[100] pod2:[101]]
Feb  6 14:29:18.395: INFO: Unexpected endpoints: found map[94f7cf42-7f74-4286-9417-303db8a7f09d:[100]], expected map[pod1:[100] pod2:[101]] (4.833643363s elapsed, will retry)
Feb  6 14:29:21.446: INFO: successfully validated that service multi-endpoint-test in namespace services-1967 exposes endpoints map[pod1:[100] pod2:[101]] (7.884226567s elapsed)
STEP: Deleting pod pod1 in namespace services-1967
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1967 to expose endpoints map[pod2:[101]]
Feb  6 14:29:21.504: INFO: successfully validated that service multi-endpoint-test in namespace services-1967 exposes endpoints map[pod2:[101]] (35.712952ms elapsed)
STEP: Deleting pod pod2 in namespace services-1967
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1967 to expose endpoints map[]
Feb  6 14:29:21.579: INFO: successfully validated that service multi-endpoint-test in namespace services-1967 exposes endpoints map[] (28.514419ms elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:29:21.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1967" for this suite.
Feb  6 14:29:43.738: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:29:43.912: INFO: namespace services-1967 deletion completed in 22.197484472s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:38.814 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
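Note: the endpoint maps in the log (map[pod1:[100] pod2:[101]]) imply a Service with two ports whose targetPorts are 100 and 101, served by pod1 and pod2 respectively. Reconstructed sketch (port names, service ports, and selector are illustrative):

  apiVersion: v1
  kind: Service
  metadata:
    name: multi-endpoint-test
    namespace: services-1967
  spec:
    selector:
      app: multi-endpoint-demo       # pod1 and pod2 would carry this label
    ports:
    - name: portname1
      port: 80
      targetPort: 100                # pod1's container port, per map[pod1:[100]]
    - name: portname2
      port: 81
      targetPort: 101                # pod2's container port, per map[pod2:[101]]

As pods matching the selector are created and deleted, the endpoints controller adds and removes their addresses per port, which is exactly the sequence of "exposes endpoints" messages above.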
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:29:43.912: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  6 14:29:43.994: INFO: Creating deployment "test-recreate-deployment"
Feb  6 14:29:44.001: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1
Feb  6 14:29:44.047: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Feb  6 14:29:46.062: INFO: Waiting for deployment "test-recreate-deployment" to complete
Feb  6 14:29:46.066: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716596184, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716596184, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716596184, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716596184, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  6 14:29:48.072: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716596184, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716596184, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716596184, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716596184, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  6 14:29:50.072: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716596184, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716596184, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716596184, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716596184, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  6 14:29:52.076: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Feb  6 14:29:52.085: INFO: Updating deployment test-recreate-deployment
Feb  6 14:29:52.085: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Feb  6 14:29:52.569: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-3565,SelfLink:/apis/apps/v1/namespaces/deployment-3565/deployments/test-recreate-deployment,UID:a497ab15-e9fb-4530-8f41-ef59f7820644,ResourceVersion:23329087,Generation:2,CreationTimestamp:2020-02-06 14:29:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-02-06 14:29:52 +0000 UTC 2020-02-06 14:29:52 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-02-06 14:29:52 +0000 UTC 2020-02-06 14:29:44 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Feb  6 14:29:52.585: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-3565,SelfLink:/apis/apps/v1/namespaces/deployment-3565/replicasets/test-recreate-deployment-5c8c9cc69d,UID:5f0efff2-f429-4c35-8f99-670550ff8730,ResourceVersion:23329084,Generation:1,CreationTimestamp:2020-02-06 14:29:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment a497ab15-e9fb-4530-8f41-ef59f7820644 0xc001b90327 0xc001b90328}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb  6 14:29:52.585: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Feb  6 14:29:52.585: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-3565,SelfLink:/apis/apps/v1/namespaces/deployment-3565/replicasets/test-recreate-deployment-6df85df6b9,UID:d3d4c765-63f8-4599-a66a-134435f848c0,ResourceVersion:23329075,Generation:2,CreationTimestamp:2020-02-06 14:29:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment a497ab15-e9fb-4530-8f41-ef59f7820644 0xc001b903f7 0xc001b903f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb  6 14:29:52.595: INFO: Pod "test-recreate-deployment-5c8c9cc69d-rzsrv" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-rzsrv,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-3565,SelfLink:/api/v1/namespaces/deployment-3565/pods/test-recreate-deployment-5c8c9cc69d-rzsrv,UID:c651517b-13cc-4b87-9e72-89e66923b592,ResourceVersion:23329089,Generation:0,CreationTimestamp:2020-02-06 14:29:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d 5f0efff2-f429-4c35-8f99-670550ff8730 0xc001b90d37 0xc001b90d38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hw72c {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hw72c,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-hw72c true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001b90db0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001b90dd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 14:29:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 14:29:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 14:29:52 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 14:29:52 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-06 14:29:52 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:29:52.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-3565" for this suite.
Feb  6 14:29:58.690: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:29:58.832: INFO: namespace deployment-3565 deletion completed in 6.228555555s

• [SLOW TEST:14.921 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
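The RecreateDeployment test above flips a Deployment with strategy Recreate from a redis template to an nginx template and verifies the old ReplicaSet is scaled to zero before the new pod appears. A minimal sketch of the same rollout with kubectl follows (namespace defaulted and names illustrative; the two images are the ones shown in the log):

  # Deployment with the Recreate strategy: old pods are deleted before new ones start.
  kubectl apply -f - <<'EOF'
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: test-recreate-deployment
  spec:
    replicas: 1
    strategy:
      type: Recreate
    selector:
      matchLabels:
        name: sample-pod-3
    template:
      metadata:
        labels:
          name: sample-pod-3
      spec:
        containers:
        - name: redis
          image: gcr.io/kubernetes-e2e-test-images/redis:1.0
  EOF
  # Trigger a rollout; with Recreate, the old ReplicaSet drops to 0 replicas first.
  kubectl set image deployment/test-recreate-deployment redis=docker.io/library/nginx:1.14-alpine
  kubectl rollout status deployment/test-recreate-deployment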
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:29:58.833: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Feb  6 14:29:59.067: INFO: Number of nodes with available pods: 0
Feb  6 14:29:59.067: INFO: Node iruya-node is running more than one daemon pod
Feb  6 14:30:00.969: INFO: Number of nodes with available pods: 0
Feb  6 14:30:00.969: INFO: Node iruya-node is running more than one daemon pod
Feb  6 14:30:01.763: INFO: Number of nodes with available pods: 0
Feb  6 14:30:01.763: INFO: Node iruya-node is running more than one daemon pod
Feb  6 14:30:02.080: INFO: Number of nodes with available pods: 0
Feb  6 14:30:02.080: INFO: Node iruya-node is running more than one daemon pod
Feb  6 14:30:03.103: INFO: Number of nodes with available pods: 0
Feb  6 14:30:03.103: INFO: Node iruya-node is running more than one daemon pod
Feb  6 14:30:04.092: INFO: Number of nodes with available pods: 0
Feb  6 14:30:04.092: INFO: Node iruya-node is running more than one daemon pod
Feb  6 14:30:05.856: INFO: Number of nodes with available pods: 0
Feb  6 14:30:05.856: INFO: Node iruya-node is running more than one daemon pod
Feb  6 14:30:06.326: INFO: Number of nodes with available pods: 0
Feb  6 14:30:06.326: INFO: Node iruya-node is running more than one daemon pod
Feb  6 14:30:07.699: INFO: Number of nodes with available pods: 0
Feb  6 14:30:07.699: INFO: Node iruya-node is running more than one daemon pod
Feb  6 14:30:08.123: INFO: Number of nodes with available pods: 0
Feb  6 14:30:08.123: INFO: Node iruya-node is running more than one daemon pod
Feb  6 14:30:09.081: INFO: Number of nodes with available pods: 0
Feb  6 14:30:09.081: INFO: Node iruya-node is running more than one daemon pod
Feb  6 14:30:10.125: INFO: Number of nodes with available pods: 0
Feb  6 14:30:10.126: INFO: Node iruya-node is running more than one daemon pod
Feb  6 14:30:11.083: INFO: Number of nodes with available pods: 1
Feb  6 14:30:11.083: INFO: Node iruya-node is running more than one daemon pod
Feb  6 14:30:12.077: INFO: Number of nodes with available pods: 2
Feb  6 14:30:12.077: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Feb  6 14:30:12.194: INFO: Number of nodes with available pods: 1
Feb  6 14:30:12.194: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  6 14:30:13.609: INFO: Number of nodes with available pods: 1
Feb  6 14:30:13.609: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  6 14:30:14.218: INFO: Number of nodes with available pods: 1
Feb  6 14:30:14.218: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  6 14:30:15.785: INFO: Number of nodes with available pods: 1
Feb  6 14:30:15.785: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  6 14:30:16.212: INFO: Number of nodes with available pods: 1
Feb  6 14:30:16.212: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  6 14:30:17.216: INFO: Number of nodes with available pods: 1
Feb  6 14:30:17.216: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  6 14:30:18.507: INFO: Number of nodes with available pods: 1
Feb  6 14:30:18.507: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  6 14:30:19.696: INFO: Number of nodes with available pods: 1
Feb  6 14:30:19.696: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  6 14:30:20.206: INFO: Number of nodes with available pods: 1
Feb  6 14:30:20.206: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  6 14:30:21.211: INFO: Number of nodes with available pods: 1
Feb  6 14:30:21.211: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb  6 14:30:22.210: INFO: Number of nodes with available pods: 2
Feb  6 14:30:22.210: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7879, will wait for the garbage collector to delete the pods
Feb  6 14:30:22.281: INFO: Deleting DaemonSet.extensions daemon-set took: 10.783961ms
Feb  6 14:30:22.581: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.451696ms
Feb  6 14:30:37.987: INFO: Number of nodes with available pods: 0
Feb  6 14:30:37.987: INFO: Number of running nodes: 0, number of available pods: 0
Feb  6 14:30:37.992: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7879/daemonsets","resourceVersion":"23329228"},"items":null}

Feb  6 14:30:37.997: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7879/pods","resourceVersion":"23329228"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:30:38.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7879" for this suite.
Feb  6 14:30:46.089: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:30:46.212: INFO: namespace daemonsets-7879 deletion completed in 8.150896622s

• [SLOW TEST:47.379 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
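The DaemonSet test above sets one daemon pod's phase to Failed via a status update and checks that the controller revives it. Plain kubectl cannot write pod status directly, so this sketch approximates the failure by deleting a daemon pod; the DaemonSet controller recreates it to restore one pod per node (image and labels are illustrative, node name is from the log):

  kubectl apply -f - <<'EOF'
  apiVersion: apps/v1
  kind: DaemonSet
  metadata:
    name: daemon-set
  spec:
    selector:
      matchLabels:
        app: daemon-set
    template:
      metadata:
        labels:
          app: daemon-set
      spec:
        containers:
        - name: app
          image: docker.io/library/nginx:1.14-alpine
  EOF
  # Remove one daemon pod; the controller notices the missing pod and recreates it.
  kubectl delete pod -l app=daemon-set --field-selector spec.nodeName=iruya-node
  kubectl get pods -l app=daemon-set -o wide -w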
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:30:46.213: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Feb  6 14:30:56.418: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-54008c28-cd97-4f14-bbf1-5c7451baa563,GenerateName:,Namespace:events-1667,SelfLink:/api/v1/namespaces/events-1667/pods/send-events-54008c28-cd97-4f14-bbf1-5c7451baa563,UID:d2562b2f-c741-415a-83f5-d6fded75c5a7,ResourceVersion:23329288,Generation:0,CreationTimestamp:2020-02-06 14:30:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 316899466,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-9b5w4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9b5w4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-9b5w4 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001b97520} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001b97540}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 14:30:46 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 14:30:54 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 14:30:54 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 14:30:46 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-02-06 14:30:46 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-02-06 14:30:53 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://2183a10596ece43c21c92831b72b3fa7bc80e869ec340481746143060f77b52c}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Feb  6 14:30:58.431: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Feb  6 14:31:00.441: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:31:00.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-1667" for this suite.
Feb  6 14:31:52.532: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:31:52.658: INFO: namespace events-1667 deletion completed in 52.16079275s

• [SLOW TEST:66.446 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
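The Events test above creates a pod and then looks for one event from the scheduler and one from the kubelet about it. The same inspection can be done from the CLI (the pod name is the one in the log; field selectors on events are a supported but easily overlooked feature):

  # All events for a specific pod: the scheduler reports Scheduled,
  # the kubelet reports Pulling/Pulled/Created/Started.
  kubectl get events --field-selector involvedObject.name=send-events-54008c28-cd97-4f14-bbf1-5c7451baa563
  # Only the scheduling decision:
  kubectl get events --field-selector reason=Scheduled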
S
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:31:52.659: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  6 14:31:52.767: INFO: (0) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 17.189071ms)
Feb  6 14:31:52.800: INFO: (1) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 33.686547ms)
Feb  6 14:31:52.807: INFO: (2) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.193444ms)
Feb  6 14:31:52.815: INFO: (3) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.071921ms)
Feb  6 14:31:52.822: INFO: (4) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.230356ms)
Feb  6 14:31:52.825: INFO: (5) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.577772ms)
Feb  6 14:31:52.831: INFO: (6) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.15974ms)
Feb  6 14:31:52.836: INFO: (7) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.61988ms)
Feb  6 14:31:52.843: INFO: (8) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.061601ms)
Feb  6 14:31:52.854: INFO: (9) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.855824ms)
Feb  6 14:31:52.865: INFO: (10) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.266803ms)
Feb  6 14:31:52.876: INFO: (11) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 11.927383ms)
Feb  6 14:31:52.883: INFO: (12) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.252542ms)
Feb  6 14:31:52.889: INFO: (13) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.383553ms)
Feb  6 14:31:52.894: INFO: (14) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.67214ms)
Feb  6 14:31:52.899: INFO: (15) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.588035ms)
Feb  6 14:31:52.902: INFO: (16) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.709364ms)
Feb  6 14:31:52.906: INFO: (17) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.97111ms)
Feb  6 14:31:52.910: INFO: (18) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.336092ms)
Feb  6 14:31:52.913: INFO: (19) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.511578ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:31:52.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-6412" for this suite.
Feb  6 14:31:58.953: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:31:59.064: INFO: namespace proxy-6412 deletion completed in 6.147884818s

• [SLOW TEST:6.406 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
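The proxy test above issues twenty GETs against the node's proxy subresource and records the latency of each. The same endpoint can be hit directly; `kubectl get --raw` sends an authenticated request through the apiserver (node name taken from the log):

  # Fetch the kubelet's /logs/ directory listing via the node proxy subresource.
  kubectl get --raw /api/v1/nodes/iruya-node/proxy/logs/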
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:31:59.066: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:32:04.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-1884" for this suite.
Feb  6 14:32:10.683: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:32:10.831: INFO: namespace watch-1884 deletion completed in 6.300166185s

• [SLOW TEST:11.765 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
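The Watchers test above starts several watches at different resource versions and asserts every observer sees the same events in the same order. A sketch of the underlying API pattern using kubectl proxy and curl (namespace and resource kind are illustrative; the suite uses its own objects):

  kubectl proxy --port=8001 &
  # Capture the list's current resourceVersion as a common starting point.
  RV=$(kubectl get configmaps -n default -o jsonpath='{.metadata.resourceVersion}')
  # Two watches started from the same resourceVersion observe the same
  # ordered stream of ADDED/MODIFIED/DELETED events.
  curl -N "http://127.0.0.1:8001/api/v1/namespaces/default/configmaps?watch=1&resourceVersion=$RV" &
  curl -N "http://127.0.0.1:8001/api/v1/namespaces/default/configmaps?watch=1&resourceVersion=$RV"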
SSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:32:10.831: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-2732e745-1df6-4054-a66f-763e430b2e23
STEP: Creating a pod to test consume configMaps
Feb  6 14:32:10.987: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ca93c4fc-723b-49e2-be8c-ac69d4179a8a" in namespace "projected-9650" to be "success or failure"
Feb  6 14:32:11.038: INFO: Pod "pod-projected-configmaps-ca93c4fc-723b-49e2-be8c-ac69d4179a8a": Phase="Pending", Reason="", readiness=false. Elapsed: 51.078156ms
Feb  6 14:32:13.073: INFO: Pod "pod-projected-configmaps-ca93c4fc-723b-49e2-be8c-ac69d4179a8a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086075817s
Feb  6 14:32:15.082: INFO: Pod "pod-projected-configmaps-ca93c4fc-723b-49e2-be8c-ac69d4179a8a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.095064851s
Feb  6 14:32:17.089: INFO: Pod "pod-projected-configmaps-ca93c4fc-723b-49e2-be8c-ac69d4179a8a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.101927598s
Feb  6 14:32:19.094: INFO: Pod "pod-projected-configmaps-ca93c4fc-723b-49e2-be8c-ac69d4179a8a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.107324062s
Feb  6 14:32:21.099: INFO: Pod "pod-projected-configmaps-ca93c4fc-723b-49e2-be8c-ac69d4179a8a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.112257016s
STEP: Saw pod success
Feb  6 14:32:21.099: INFO: Pod "pod-projected-configmaps-ca93c4fc-723b-49e2-be8c-ac69d4179a8a" satisfied condition "success or failure"
Feb  6 14:32:21.103: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-ca93c4fc-723b-49e2-be8c-ac69d4179a8a container projected-configmap-volume-test: 
STEP: delete the pod
Feb  6 14:32:21.197: INFO: Waiting for pod pod-projected-configmaps-ca93c4fc-723b-49e2-be8c-ac69d4179a8a to disappear
Feb  6 14:32:21.213: INFO: Pod pod-projected-configmaps-ca93c4fc-723b-49e2-be8c-ac69d4179a8a no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:32:21.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9650" for this suite.
Feb  6 14:32:27.241: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:32:27.385: INFO: namespace projected-9650 deletion completed in 6.166537277s

• [SLOW TEST:16.553 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
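The projected-ConfigMap test above mounts a ConfigMap through a projected volume and remaps a key to a different path, then reads the file back from a test container. A minimal manifest with the same shape (names, key, and the busybox image are illustrative; the suite uses its own mounttest image):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: projected-configmap-test
  data:
    data-1: value-1
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-projected-configmaps
  spec:
    restartPolicy: Never
    containers:
    - name: projected-configmap-volume-test
      image: busybox
      command: ["cat", "/etc/projected-configmap-volume/path/to/data-2"]
      volumeMounts:
      - name: projected-configmap-volume
        mountPath: /etc/projected-configmap-volume
    volumes:
    - name: projected-configmap-volume
      projected:
        sources:
        - configMap:
            name: projected-configmap-test
            items:
            - key: data-1
              path: path/to/data-2   # key data-1 is remapped to this file path
  EOF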
S
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:32:27.385: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-635302cf-87e1-41a6-8c94-b593bb0d3ca5
STEP: Creating a pod to test consume configMaps
Feb  6 14:32:27.615: INFO: Waiting up to 5m0s for pod "pod-configmaps-569ad926-1a68-4ac2-9ba8-92aa055535f3" in namespace "configmap-2552" to be "success or failure"
Feb  6 14:32:27.628: INFO: Pod "pod-configmaps-569ad926-1a68-4ac2-9ba8-92aa055535f3": Phase="Pending", Reason="", readiness=false. Elapsed: 13.019918ms
Feb  6 14:32:29.636: INFO: Pod "pod-configmaps-569ad926-1a68-4ac2-9ba8-92aa055535f3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021229454s
Feb  6 14:32:31.643: INFO: Pod "pod-configmaps-569ad926-1a68-4ac2-9ba8-92aa055535f3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027830397s
Feb  6 14:32:33.652: INFO: Pod "pod-configmaps-569ad926-1a68-4ac2-9ba8-92aa055535f3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037516259s
Feb  6 14:32:35.662: INFO: Pod "pod-configmaps-569ad926-1a68-4ac2-9ba8-92aa055535f3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.047038996s
Feb  6 14:32:37.667: INFO: Pod "pod-configmaps-569ad926-1a68-4ac2-9ba8-92aa055535f3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.052593388s
STEP: Saw pod success
Feb  6 14:32:37.667: INFO: Pod "pod-configmaps-569ad926-1a68-4ac2-9ba8-92aa055535f3" satisfied condition "success or failure"
Feb  6 14:32:37.670: INFO: Trying to get logs from node iruya-node pod pod-configmaps-569ad926-1a68-4ac2-9ba8-92aa055535f3 container configmap-volume-test: 
STEP: delete the pod
Feb  6 14:32:37.818: INFO: Waiting for pod pod-configmaps-569ad926-1a68-4ac2-9ba8-92aa055535f3 to disappear
Feb  6 14:32:37.888: INFO: Pod pod-configmaps-569ad926-1a68-4ac2-9ba8-92aa055535f3 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:32:37.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2552" for this suite.
Feb  6 14:32:43.924: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:32:44.031: INFO: namespace configmap-2552 deletion completed in 6.138025076s

• [SLOW TEST:16.646 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
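The defaultMode variant above mounts a ConfigMap volume with a non-default file mode and checks both content and permissions from inside the pod. A sketch with mode 0400 as the example (names and the busybox image are illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: configmap-test-volume
  data:
    data-1: value-1
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-configmap-defaultmode
  spec:
    restartPolicy: Never
    containers:
    - name: configmap-volume-test
      image: busybox
      command: ["sh", "-c", "stat -c '%a' /etc/configmap-volume/data-1 && cat /etc/configmap-volume/data-1"]
      volumeMounts:
      - name: configmap-volume
        mountPath: /etc/configmap-volume
    volumes:
    - name: configmap-volume
      configMap:
        name: configmap-test-volume
        defaultMode: 0400      # files are created read-only for the owner
  EOF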
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:32:44.033: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-80942760-4b08-49a3-be74-3aa91ae81792
STEP: Creating a pod to test consume configMaps
Feb  6 14:32:44.150: INFO: Waiting up to 5m0s for pod "pod-configmaps-0bd3e4b6-b50b-4924-814d-d7b7f0ae9df2" in namespace "configmap-5936" to be "success or failure"
Feb  6 14:32:44.171: INFO: Pod "pod-configmaps-0bd3e4b6-b50b-4924-814d-d7b7f0ae9df2": Phase="Pending", Reason="", readiness=false. Elapsed: 21.271753ms
Feb  6 14:32:46.181: INFO: Pod "pod-configmaps-0bd3e4b6-b50b-4924-814d-d7b7f0ae9df2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031534077s
Feb  6 14:32:48.186: INFO: Pod "pod-configmaps-0bd3e4b6-b50b-4924-814d-d7b7f0ae9df2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036542366s
Feb  6 14:32:50.197: INFO: Pod "pod-configmaps-0bd3e4b6-b50b-4924-814d-d7b7f0ae9df2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046967911s
Feb  6 14:32:52.205: INFO: Pod "pod-configmaps-0bd3e4b6-b50b-4924-814d-d7b7f0ae9df2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.054750701s
Feb  6 14:32:54.211: INFO: Pod "pod-configmaps-0bd3e4b6-b50b-4924-814d-d7b7f0ae9df2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.06096336s
STEP: Saw pod success
Feb  6 14:32:54.211: INFO: Pod "pod-configmaps-0bd3e4b6-b50b-4924-814d-d7b7f0ae9df2" satisfied condition "success or failure"
Feb  6 14:32:54.214: INFO: Trying to get logs from node iruya-node pod pod-configmaps-0bd3e4b6-b50b-4924-814d-d7b7f0ae9df2 container configmap-volume-test: 
STEP: delete the pod
Feb  6 14:32:54.263: INFO: Waiting for pod pod-configmaps-0bd3e4b6-b50b-4924-814d-d7b7f0ae9df2 to disappear
Feb  6 14:32:54.274: INFO: Pod pod-configmaps-0bd3e4b6-b50b-4924-814d-d7b7f0ae9df2 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:32:54.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5936" for this suite.
Feb  6 14:33:00.333: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:33:00.469: INFO: namespace configmap-5936 deletion completed in 6.191289007s

• [SLOW TEST:16.437 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
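The multiple-volumes variant above mounts the same ConfigMap at two paths in one pod and verifies both copies. Sketch (names and image illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: configmap-shared
  data:
    data-1: value-1
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-configmaps-two-volumes
  spec:
    restartPolicy: Never
    containers:
    - name: configmap-volume-test
      image: busybox
      command: ["sh", "-c", "cat /etc/configmap-volume-1/data-1 /etc/configmap-volume-2/data-1"]
      volumeMounts:
      - name: configmap-volume-1
        mountPath: /etc/configmap-volume-1
      - name: configmap-volume-2
        mountPath: /etc/configmap-volume-2
    volumes:
    - name: configmap-volume-1
      configMap:
        name: configmap-shared
    - name: configmap-volume-2
      configMap:
        name: configmap-shared   # same ConfigMap, second mount point
  EOF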
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:33:00.470: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  6 14:33:00.665: INFO: Waiting up to 5m0s for pod "downwardapi-volume-df4259e1-70ed-4a6e-bcd9-50063bfc2c20" in namespace "projected-4157" to be "success or failure"
Feb  6 14:33:00.685: INFO: Pod "downwardapi-volume-df4259e1-70ed-4a6e-bcd9-50063bfc2c20": Phase="Pending", Reason="", readiness=false. Elapsed: 20.060319ms
Feb  6 14:33:02.695: INFO: Pod "downwardapi-volume-df4259e1-70ed-4a6e-bcd9-50063bfc2c20": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030690043s
Feb  6 14:33:04.709: INFO: Pod "downwardapi-volume-df4259e1-70ed-4a6e-bcd9-50063bfc2c20": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044695073s
Feb  6 14:33:06.726: INFO: Pod "downwardapi-volume-df4259e1-70ed-4a6e-bcd9-50063bfc2c20": Phase="Pending", Reason="", readiness=false. Elapsed: 6.061621787s
Feb  6 14:33:08.733: INFO: Pod "downwardapi-volume-df4259e1-70ed-4a6e-bcd9-50063bfc2c20": Phase="Pending", Reason="", readiness=false. Elapsed: 8.068437355s
Feb  6 14:33:10.740: INFO: Pod "downwardapi-volume-df4259e1-70ed-4a6e-bcd9-50063bfc2c20": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.075269452s
STEP: Saw pod success
Feb  6 14:33:10.740: INFO: Pod "downwardapi-volume-df4259e1-70ed-4a6e-bcd9-50063bfc2c20" satisfied condition "success or failure"
Feb  6 14:33:10.745: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-df4259e1-70ed-4a6e-bcd9-50063bfc2c20 container client-container: 
STEP: delete the pod
Feb  6 14:33:10.815: INFO: Waiting for pod downwardapi-volume-df4259e1-70ed-4a6e-bcd9-50063bfc2c20 to disappear
Feb  6 14:33:10.980: INFO: Pod downwardapi-volume-df4259e1-70ed-4a6e-bcd9-50063bfc2c20 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:33:10.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4157" for this suite.
Feb  6 14:33:17.248: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:33:17.391: INFO: namespace projected-4157 deletion completed in 6.402118553s

• [SLOW TEST:16.921 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
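The projected downward API test above exposes the pod's own name as a file and reads it back. A minimal manifest under the same idea (names and image illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downwardapi-volume-podname
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox
      command: ["cat", "/etc/podinfo/podname"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      projected:
        sources:
        - downwardAPI:
            items:
            - path: podname
              fieldRef:
                fieldPath: metadata.name   # the pod's own name, exposed as a file
  EOF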
SSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:33:17.392: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-3215/configmap-test-57f73a5e-e3a8-4e7e-9e51-4e95a68b5948
STEP: Creating a pod to test consume configMaps
Feb  6 14:33:17.468: INFO: Waiting up to 5m0s for pod "pod-configmaps-95b12e96-c142-42b7-b2f6-d80bdd00e4e3" in namespace "configmap-3215" to be "success or failure"
Feb  6 14:33:17.621: INFO: Pod "pod-configmaps-95b12e96-c142-42b7-b2f6-d80bdd00e4e3": Phase="Pending", Reason="", readiness=false. Elapsed: 153.259967ms
Feb  6 14:33:19.633: INFO: Pod "pod-configmaps-95b12e96-c142-42b7-b2f6-d80bdd00e4e3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.16489524s
Feb  6 14:33:21.640: INFO: Pod "pod-configmaps-95b12e96-c142-42b7-b2f6-d80bdd00e4e3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.172150283s
Feb  6 14:33:23.652: INFO: Pod "pod-configmaps-95b12e96-c142-42b7-b2f6-d80bdd00e4e3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.184595042s
Feb  6 14:33:25.663: INFO: Pod "pod-configmaps-95b12e96-c142-42b7-b2f6-d80bdd00e4e3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.195205169s
Feb  6 14:33:27.672: INFO: Pod "pod-configmaps-95b12e96-c142-42b7-b2f6-d80bdd00e4e3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.204031777s
STEP: Saw pod success
Feb  6 14:33:27.672: INFO: Pod "pod-configmaps-95b12e96-c142-42b7-b2f6-d80bdd00e4e3" satisfied condition "success or failure"
Feb  6 14:33:27.677: INFO: Trying to get logs from node iruya-node pod pod-configmaps-95b12e96-c142-42b7-b2f6-d80bdd00e4e3 container env-test: 
STEP: delete the pod
Feb  6 14:33:27.788: INFO: Waiting for pod pod-configmaps-95b12e96-c142-42b7-b2f6-d80bdd00e4e3 to disappear
Feb  6 14:33:27.796: INFO: Pod pod-configmaps-95b12e96-c142-42b7-b2f6-d80bdd00e4e3 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:33:27.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3215" for this suite.
Feb  6 14:33:33.882: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:33:33.966: INFO: namespace configmap-3215 deletion completed in 6.161998782s

• [SLOW TEST:16.575 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
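The env-variable test above injects one ConfigMap key into a single environment variable and checks the container's output. Sketch (names, key, and image illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: configmap-env-test
  data:
    data-1: value-1
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-configmap-env
  spec:
    restartPolicy: Never
    containers:
    - name: env-test
      image: busybox
      command: ["sh", "-c", "echo CONFIG_DATA_1=$CONFIG_DATA_1"]
      env:
      - name: CONFIG_DATA_1
        valueFrom:
          configMapKeyRef:
            name: configmap-env-test
            key: data-1
  EOF
  kubectl logs pod-configmap-env    # expected: CONFIG_DATA_1=value-1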
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:33:33.968: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-8620/configmap-test-6887bf58-4477-4363-8aa2-2e7510d14021
STEP: Creating a pod to test consume configMaps
Feb  6 14:33:34.089: INFO: Waiting up to 5m0s for pod "pod-configmaps-546fb103-a884-4107-8349-e490018ff3f2" in namespace "configmap-8620" to be "success or failure"
Feb  6 14:33:34.099: INFO: Pod "pod-configmaps-546fb103-a884-4107-8349-e490018ff3f2": Phase="Pending", Reason="", readiness=false. Elapsed: 9.495194ms
Feb  6 14:33:36.106: INFO: Pod "pod-configmaps-546fb103-a884-4107-8349-e490018ff3f2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017150365s
Feb  6 14:33:38.136: INFO: Pod "pod-configmaps-546fb103-a884-4107-8349-e490018ff3f2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046811888s
Feb  6 14:33:40.144: INFO: Pod "pod-configmaps-546fb103-a884-4107-8349-e490018ff3f2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.0545572s
Feb  6 14:33:42.151: INFO: Pod "pod-configmaps-546fb103-a884-4107-8349-e490018ff3f2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.062120233s
Feb  6 14:33:44.158: INFO: Pod "pod-configmaps-546fb103-a884-4107-8349-e490018ff3f2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.068515629s
STEP: Saw pod success
Feb  6 14:33:44.158: INFO: Pod "pod-configmaps-546fb103-a884-4107-8349-e490018ff3f2" satisfied condition "success or failure"
Feb  6 14:33:44.163: INFO: Trying to get logs from node iruya-node pod pod-configmaps-546fb103-a884-4107-8349-e490018ff3f2 container env-test: 
STEP: delete the pod
Feb  6 14:33:44.218: INFO: Waiting for pod pod-configmaps-546fb103-a884-4107-8349-e490018ff3f2 to disappear
Feb  6 14:33:44.242: INFO: Pod pod-configmaps-546fb103-a884-4107-8349-e490018ff3f2 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:33:44.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8620" for this suite.
Feb  6 14:33:50.338: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:33:50.502: INFO: namespace configmap-8620 deletion completed in 6.253232622s

• [SLOW TEST:16.534 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
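The "consumable via the environment" variant above differs from the previous test in that the whole ConfigMap is imported with envFrom rather than one key at a time. Sketch (names and image illustrative; keys must be valid environment variable names):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: configmap-envfrom-test
  data:
    data_1: value-1
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-configmap-envfrom
  spec:
    restartPolicy: Never
    containers:
    - name: env-test
      image: busybox
      command: ["env"]
      envFrom:
      - configMapRef:
          name: configmap-envfrom-test   # every key becomes an env var
  EOF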
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:33:50.503: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  6 14:33:50.641: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-3123'
Feb  6 14:33:52.869: INFO: stderr: ""
Feb  6 14:33:52.870: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690
Feb  6 14:33:52.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-3123'
Feb  6 14:33:56.789: INFO: stderr: ""
Feb  6 14:33:56.789: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:33:56.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3123" for this suite.
Feb  6 14:34:02.840: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:34:02.933: INFO: namespace kubectl-3123 deletion completed in 6.132715763s

• [SLOW TEST:12.431 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
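The kubectl-run command under test appears verbatim in the log above. The same invocation and a quick verification follow; note that the --generator flag matches the v1.15 client used in this run and has since been removed from newer kubectl releases:

  kubectl run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 \
    --image=docker.io/library/nginx:1.14-alpine
  # --restart=Never makes kubectl create a bare Pod rather than a Deployment.
  kubectl get pod e2e-test-nginx-pod -o jsonpath='{.spec.restartPolicy}'   # Never
  kubectl delete pod e2e-test-nginx-pod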
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:34:02.934: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on node default medium
Feb  6 14:34:03.022: INFO: Waiting up to 5m0s for pod "pod-c9b62336-4105-4f4f-96b7-41c4d7e5f603" in namespace "emptydir-3980" to be "success or failure"
Feb  6 14:34:03.092: INFO: Pod "pod-c9b62336-4105-4f4f-96b7-41c4d7e5f603": Phase="Pending", Reason="", readiness=false. Elapsed: 70.447696ms
Feb  6 14:34:05.098: INFO: Pod "pod-c9b62336-4105-4f4f-96b7-41c4d7e5f603": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075877336s
Feb  6 14:34:07.109: INFO: Pod "pod-c9b62336-4105-4f4f-96b7-41c4d7e5f603": Phase="Pending", Reason="", readiness=false. Elapsed: 4.087118013s
Feb  6 14:34:09.123: INFO: Pod "pod-c9b62336-4105-4f4f-96b7-41c4d7e5f603": Phase="Pending", Reason="", readiness=false. Elapsed: 6.100821454s
Feb  6 14:34:11.172: INFO: Pod "pod-c9b62336-4105-4f4f-96b7-41c4d7e5f603": Phase="Pending", Reason="", readiness=false. Elapsed: 8.14989411s
Feb  6 14:34:13.230: INFO: Pod "pod-c9b62336-4105-4f4f-96b7-41c4d7e5f603": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.208166807s
STEP: Saw pod success
Feb  6 14:34:13.230: INFO: Pod "pod-c9b62336-4105-4f4f-96b7-41c4d7e5f603" satisfied condition "success or failure"
Feb  6 14:34:13.235: INFO: Trying to get logs from node iruya-node pod pod-c9b62336-4105-4f4f-96b7-41c4d7e5f603 container test-container: 
STEP: delete the pod
Feb  6 14:34:13.320: INFO: Waiting for pod pod-c9b62336-4105-4f4f-96b7-41c4d7e5f603 to disappear
Feb  6 14:34:13.326: INFO: Pod pod-c9b62336-4105-4f4f-96b7-41c4d7e5f603 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:34:13.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3980" for this suite.
Feb  6 14:34:19.485: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:34:19.615: INFO: namespace emptydir-3980 deletion completed in 6.162574414s

• [SLOW TEST:16.681 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
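The EmptyDir test above checks that a volume on the default medium (node disk, as opposed to medium: Memory) is created with mode 0777. A sketch that prints the mode from inside the pod (busybox stands in for the suite's mounttest image):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-emptydir-mode
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox
      command: ["sh", "-c", "stat -c '%a' /test-volume"]   # expect 777
      volumeMounts:
      - name: test-volume
        mountPath: /test-volume
    volumes:
    - name: test-volume
      emptyDir: {}          # default medium: backed by node storage
  EOF
  kubectl logs pod-emptydir-mode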
SSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:34:19.615: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:34:26.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-9450" for this suite.
Feb  6 14:34:32.223: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:34:32.394: INFO: namespace namespaces-9450 deletion completed in 6.205066735s
STEP: Destroying namespace "nsdeletetest-7430" for this suite.
Feb  6 14:34:32.397: INFO: Namespace nsdeletetest-7430 was already deleted
STEP: Destroying namespace "nsdeletetest-3131" for this suite.
Feb  6 14:34:38.421: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:34:38.563: INFO: namespace nsdeletetest-3131 deletion completed in 6.166568878s

• [SLOW TEST:18.948 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
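The Namespaces test above verifies that deleting a namespace removes the services inside it, and that recreating the namespace yields an empty one. The same sequence from the CLI (namespace and service names illustrative):

  kubectl create namespace nsdeletetest
  kubectl create service clusterip test-service --tcp=80:80 -n nsdeletetest
  kubectl delete namespace nsdeletetest
  kubectl wait --for=delete namespace/nsdeletetest --timeout=60s
  # Recreating the namespace yields an empty one; the service is gone.
  kubectl create namespace nsdeletetest
  kubectl get services -n nsdeletetest    # expected: No resources found.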
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:34:38.565: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test use defaults
Feb  6 14:34:38.688: INFO: Waiting up to 5m0s for pod "client-containers-0e92ce1c-019d-4896-abf9-5b2b0d35008d" in namespace "containers-6458" to be "success or failure"
Feb  6 14:34:38.706: INFO: Pod "client-containers-0e92ce1c-019d-4896-abf9-5b2b0d35008d": Phase="Pending", Reason="", readiness=false. Elapsed: 18.169231ms
Feb  6 14:34:40.715: INFO: Pod "client-containers-0e92ce1c-019d-4896-abf9-5b2b0d35008d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027354299s
Feb  6 14:34:42.726: INFO: Pod "client-containers-0e92ce1c-019d-4896-abf9-5b2b0d35008d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037725715s
Feb  6 14:34:44.733: INFO: Pod "client-containers-0e92ce1c-019d-4896-abf9-5b2b0d35008d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044868876s
Feb  6 14:34:46.745: INFO: Pod "client-containers-0e92ce1c-019d-4896-abf9-5b2b0d35008d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.056681198s
Feb  6 14:34:48.752: INFO: Pod "client-containers-0e92ce1c-019d-4896-abf9-5b2b0d35008d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.063535446s
STEP: Saw pod success
Feb  6 14:34:48.752: INFO: Pod "client-containers-0e92ce1c-019d-4896-abf9-5b2b0d35008d" satisfied condition "success or failure"
Feb  6 14:34:48.755: INFO: Trying to get logs from node iruya-node pod client-containers-0e92ce1c-019d-4896-abf9-5b2b0d35008d container test-container: 
STEP: delete the pod
Feb  6 14:34:48.805: INFO: Waiting for pod client-containers-0e92ce1c-019d-4896-abf9-5b2b0d35008d to disappear
Feb  6 14:34:48.810: INFO: Pod client-containers-0e92ce1c-019d-4896-abf9-5b2b0d35008d no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:34:48.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-6458" for this suite.
Feb  6 14:34:54.847: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:34:55.000: INFO: namespace containers-6458 deletion completed in 6.18354365s

• [SLOW TEST:16.435 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
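The Docker Containers test above relies on the rule that when a pod spec leaves command and args empty, the container runs the image's own ENTRYPOINT and CMD. Sketch (pod name illustrative; the image is the one used throughout this run):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: client-containers-defaults
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: docker.io/library/nginx:1.14-alpine
      # no command: and no args: -- the image's ENTRYPOINT/CMD run as-is
  EOF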
SSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:34:55.000: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-d86d01f8-1f00-4711-96dc-3f5e4555a11f
STEP: Creating configMap with name cm-test-opt-upd-d024b439-2b71-472b-bff0-0434ca4b34e0
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-d86d01f8-1f00-4711-96dc-3f5e4555a11f
STEP: Updating configmap cm-test-opt-upd-d024b439-2b71-472b-bff0-0434ca4b34e0
STEP: Creating configMap with name cm-test-opt-create-1dbde7e6-f757-4b6e-a85b-2eb75d523baa
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:35:11.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8561" for this suite.
Feb  6 14:35:33.493: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:35:33.596: INFO: namespace configmap-8561 deletion completed in 22.136475394s

• [SLOW TEST:38.596 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
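The optional-updates test above mounts ConfigMaps marked optional, then deletes, updates, and creates ConfigMaps while the pod runs, waiting for the mounted files to change. A sketch of the moving parts (names and image illustrative; the kubelet refreshes mounted ConfigMap files on its periodic sync, so updates typically appear within about a minute):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-configmaps-optional
  spec:
    containers:
    - name: volume-test
      image: busybox
      command: ["sh", "-c", "while true; do cat /etc/cm-opt/data-1 2>/dev/null; sleep 5; done"]
      volumeMounts:
      - name: cm-opt
        mountPath: /etc/cm-opt
    volumes:
    - name: cm-opt
      configMap:
        name: cm-test-opt
        optional: true      # the pod starts even while the ConfigMap is absent
  EOF
  # Create the ConfigMap after the pod is running; the mounted file appears
  # once the kubelet syncs the volume.
  kubectl create configmap cm-test-opt --from-literal=data-1=value-1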
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:35:33.597: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292
STEP: creating an rc
Feb  6 14:35:33.752: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2024'
Feb  6 14:35:34.186: INFO: stderr: ""
Feb  6 14:35:34.186: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Waiting for Redis master to start.
Feb  6 14:35:35.198: INFO: Selector matched 1 pods for map[app:redis]
Feb  6 14:35:35.198: INFO: Found 0 / 1
Feb  6 14:35:36.198: INFO: Selector matched 1 pods for map[app:redis]
Feb  6 14:35:36.198: INFO: Found 0 / 1
Feb  6 14:35:37.200: INFO: Selector matched 1 pods for map[app:redis]
Feb  6 14:35:37.200: INFO: Found 0 / 1
Feb  6 14:35:38.318: INFO: Selector matched 1 pods for map[app:redis]
Feb  6 14:35:38.319: INFO: Found 0 / 1
Feb  6 14:35:39.195: INFO: Selector matched 1 pods for map[app:redis]
Feb  6 14:35:39.195: INFO: Found 0 / 1
Feb  6 14:35:40.197: INFO: Selector matched 1 pods for map[app:redis]
Feb  6 14:35:40.198: INFO: Found 0 / 1
Feb  6 14:35:41.199: INFO: Selector matched 1 pods for map[app:redis]
Feb  6 14:35:41.199: INFO: Found 0 / 1
Feb  6 14:35:42.201: INFO: Selector matched 1 pods for map[app:redis]
Feb  6 14:35:42.202: INFO: Found 0 / 1
Feb  6 14:35:43.197: INFO: Selector matched 1 pods for map[app:redis]
Feb  6 14:35:43.197: INFO: Found 0 / 1
Feb  6 14:35:44.201: INFO: Selector matched 1 pods for map[app:redis]
Feb  6 14:35:44.201: INFO: Found 1 / 1
Feb  6 14:35:44.201: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Feb  6 14:35:44.209: INFO: Selector matched 1 pods for map[app:redis]
Feb  6 14:35:44.209: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for matching strings
Feb  6 14:35:44.209: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-6kjfw redis-master --namespace=kubectl-2024'
Feb  6 14:35:44.345: INFO: stderr: ""
Feb  6 14:35:44.345: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 06 Feb 14:35:42.210 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 06 Feb 14:35:42.210 # Server started, Redis version 3.2.12\n1:M 06 Feb 14:35:42.210 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 06 Feb 14:35:42.210 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Feb  6 14:35:44.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-6kjfw redis-master --namespace=kubectl-2024 --tail=1'
Feb  6 14:35:44.651: INFO: stderr: ""
Feb  6 14:35:44.651: INFO: stdout: "1:M 06 Feb 14:35:42.210 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Feb  6 14:35:44.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-6kjfw redis-master --namespace=kubectl-2024 --limit-bytes=1'
Feb  6 14:35:44.848: INFO: stderr: ""
Feb  6 14:35:44.848: INFO: stdout: " "
STEP: exposing timestamps
Feb  6 14:35:44.849: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-6kjfw redis-master --namespace=kubectl-2024 --tail=1 --timestamps'
Feb  6 14:35:44.999: INFO: stderr: ""
Feb  6 14:35:44.999: INFO: stdout: "2020-02-06T14:35:42.212773564Z 1:M 06 Feb 14:35:42.210 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Feb  6 14:35:47.499: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-6kjfw redis-master --namespace=kubectl-2024 --since=1s'
Feb  6 14:35:47.675: INFO: stderr: ""
Feb  6 14:35:47.675: INFO: stdout: ""
Feb  6 14:35:47.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-6kjfw redis-master --namespace=kubectl-2024 --since=24h'
Feb  6 14:35:47.856: INFO: stderr: ""
Feb  6 14:35:47.856: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 06 Feb 14:35:42.210 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 06 Feb 14:35:42.210 # Server started, Redis version 3.2.12\n1:M 06 Feb 14:35:42.210 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 06 Feb 14:35:42.210 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
STEP: using delete to clean up resources
Feb  6 14:35:47.857: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2024'
Feb  6 14:35:47.968: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  6 14:35:47.968: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Feb  6 14:35:47.968: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-2024'
Feb  6 14:35:48.092: INFO: stderr: "No resources found.\n"
Feb  6 14:35:48.092: INFO: stdout: ""
Feb  6 14:35:48.093: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-2024 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb  6 14:35:48.170: INFO: stderr: ""
Feb  6 14:35:48.170: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:35:48.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2024" for this suite.
Feb  6 14:36:10.254: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:36:10.407: INFO: namespace kubectl-2024 deletion completed in 22.175998649s

• [SLOW TEST:36.810 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
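The filtering flags exercised above, shown as standalone commands (pod, container, and namespace are the ones from this run; substitute your own):

kubectl logs redis-master-6kjfw redis-master -n kubectl-2024                    # full log
kubectl logs redis-master-6kjfw redis-master -n kubectl-2024 --tail=1           # last line only
kubectl logs redis-master-6kjfw redis-master -n kubectl-2024 --limit-bytes=1    # first byte only
kubectl logs redis-master-6kjfw redis-master -n kubectl-2024 --tail=1 --timestamps
kubectl logs redis-master-6kjfw redis-master -n kubectl-2024 --since=1s         # usually empty
kubectl logs redis-master-6kjfw redis-master -n kubectl-2024 --since=24h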
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:36:10.408: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating api versions
Feb  6 14:36:10.475: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Feb  6 14:36:10.658: INFO: stderr: ""
Feb  6 14:36:10.658: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:36:10.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6859" for this suite.
Feb  6 14:36:16.700: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:36:16.908: INFO: namespace kubectl-6859 deletion completed in 6.239959173s

• [SLOW TEST:6.500 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
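The same check condensed to one line; grep -x matches whole lines only, so group/version entries such as "apps/v1" cannot produce a false positive:

kubectl api-versions | grep -x v1 && echo "v1 is served"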
SSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:36:16.909: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  6 14:36:45.061: INFO: Container started at 2020-02-06 14:36:25 +0000 UTC, pod became ready at 2020-02-06 14:36:43 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:36:45.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7362" for this suite.
Feb  6 14:37:07.114: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:37:07.209: INFO: namespace container-probe-7362 deletion completed in 22.143939018s

• [SLOW TEST:50.301 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
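A sketch of the probe shape this spec exercises (image, command, and timings are illustrative, not the test's values): the pod must stay Ready=false until initialDelaySeconds elapses and the probe first succeeds, and restartCount must remain 0 throughout, since readiness probes never restart a container.

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: readiness-demo             # hypothetical name
spec:
  containers:
  - name: c
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "touch /tmp/ready && sleep 3600"]
    readinessProbe:
      exec:
        command: ["cat", "/tmp/ready"]
      initialDelaySeconds: 15      # no probe (hence not Ready) before this
      periodSeconds: 5
EOF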
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:37:07.209: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  6 14:37:07.295: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-4087'
Feb  6 14:37:07.449: INFO: stderr: ""
Feb  6 14:37:07.449: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Feb  6 14:37:17.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-4087 -o json'
Feb  6 14:37:17.690: INFO: stderr: ""
Feb  6 14:37:17.690: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-02-06T14:37:07Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"kubectl-4087\",\n        \"resourceVersion\": \"23330295\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-4087/pods/e2e-test-nginx-pod\",\n        \"uid\": \"9c329908-a773-42c0-91d3-d902ea0cea4b\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-hvc8r\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"iruya-node\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-hvc8r\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-hvc8r\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-06T14:37:07Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-06T14:37:16Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-06T14:37:16Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-06T14:37:07Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"docker://38929c8ab6b58f78df0d18416b168376c6de4984909fa6ae6f8fbc0f5709c9d1\",\n                \"image\": \"nginx:1.14-alpine\",\n                \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2020-02-06T14:37:15Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.3.65\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.44.0.1\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-02-06T14:37:07Z\"\n    }\n}\n"
STEP: replace the image in the pod
Feb  6 14:37:17.691: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-4087'
Feb  6 14:37:18.157: INFO: stderr: ""
Feb  6 14:37:18.157: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726
Feb  6 14:37:18.193: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-4087'
Feb  6 14:37:24.715: INFO: stderr: ""
Feb  6 14:37:24.715: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:37:24.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4087" for this suite.
Feb  6 14:37:30.889: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:37:30.978: INFO: namespace kubectl-4087 deletion completed in 6.235678738s

• [SLOW TEST:23.769 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
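The replace step above round-trips the live object through kubectl with the image swapped. Container image is one of the few pod spec fields that may be mutated in place, so replace succeeds without recreating the pod. A sketch, where the sed rewrite stands in for however the manifest actually gets edited:

kubectl get pod e2e-test-nginx-pod -n kubectl-4087 -o yaml \
  | sed 's#docker.io/library/nginx:1.14-alpine#docker.io/library/busybox:1.29#' \
  | kubectl replace -f -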
SSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:37:30.978: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Feb  6 14:37:31.122: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-5495,SelfLink:/api/v1/namespaces/watch-5495/configmaps/e2e-watch-test-label-changed,UID:94f0a977-ddc8-4ae1-8379-f2bfb9a54067,ResourceVersion:23330340,Generation:0,CreationTimestamp:2020-02-06 14:37:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb  6 14:37:31.123: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-5495,SelfLink:/api/v1/namespaces/watch-5495/configmaps/e2e-watch-test-label-changed,UID:94f0a977-ddc8-4ae1-8379-f2bfb9a54067,ResourceVersion:23330341,Generation:0,CreationTimestamp:2020-02-06 14:37:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Feb  6 14:37:31.123: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-5495,SelfLink:/api/v1/namespaces/watch-5495/configmaps/e2e-watch-test-label-changed,UID:94f0a977-ddc8-4ae1-8379-f2bfb9a54067,ResourceVersion:23330342,Generation:0,CreationTimestamp:2020-02-06 14:37:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Feb  6 14:37:41.240: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-5495,SelfLink:/api/v1/namespaces/watch-5495/configmaps/e2e-watch-test-label-changed,UID:94f0a977-ddc8-4ae1-8379-f2bfb9a54067,ResourceVersion:23330358,Generation:0,CreationTimestamp:2020-02-06 14:37:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb  6 14:37:41.241: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-5495,SelfLink:/api/v1/namespaces/watch-5495/configmaps/e2e-watch-test-label-changed,UID:94f0a977-ddc8-4ae1-8379-f2bfb9a54067,ResourceVersion:23330359,Generation:0,CreationTimestamp:2020-02-06 14:37:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Feb  6 14:37:41.241: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-5495,SelfLink:/api/v1/namespaces/watch-5495/configmaps/e2e-watch-test-label-changed,UID:94f0a977-ddc8-4ae1-8379-f2bfb9a54067,ResourceVersion:23330360,Generation:0,CreationTimestamp:2020-02-06 14:37:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:37:41.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5495" for this suite.
Feb  6 14:37:47.270: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:37:47.420: INFO: namespace watch-5495 deletion completed in 6.172477543s

• [SLOW TEST:16.442 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
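The watch semantics verified above are visible from the CLI as well: a watch filtered by a label selector reports DELETED when an object's labels stop matching and ADDED when they match again, even though the object itself was only modified. A sketch using this run's names (the relabel value is illustrative):

kubectl get configmaps -n watch-5495 \
  -l watch-this-configmap=label-changed-and-restored --watch
# in another shell, move the label off the selector and back:
kubectl label configmap e2e-watch-test-label-changed -n watch-5495 \
  watch-this-configmap=off-selector --overwrite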
SSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:37:47.420: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-d428bb69-5a91-42dd-9da3-cb1d14c90bf0
STEP: Creating a pod to test consume secrets
Feb  6 14:37:47.505: INFO: Waiting up to 5m0s for pod "pod-secrets-630058c3-db5b-4434-b550-a5fe80fabfd5" in namespace "secrets-7535" to be "success or failure"
Feb  6 14:37:47.597: INFO: Pod "pod-secrets-630058c3-db5b-4434-b550-a5fe80fabfd5": Phase="Pending", Reason="", readiness=false. Elapsed: 92.25685ms
Feb  6 14:37:49.608: INFO: Pod "pod-secrets-630058c3-db5b-4434-b550-a5fe80fabfd5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103359672s
Feb  6 14:37:51.613: INFO: Pod "pod-secrets-630058c3-db5b-4434-b550-a5fe80fabfd5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.108223055s
Feb  6 14:37:53.627: INFO: Pod "pod-secrets-630058c3-db5b-4434-b550-a5fe80fabfd5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.121832202s
Feb  6 14:37:55.642: INFO: Pod "pod-secrets-630058c3-db5b-4434-b550-a5fe80fabfd5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.136858678s
Feb  6 14:37:57.648: INFO: Pod "pod-secrets-630058c3-db5b-4434-b550-a5fe80fabfd5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.143178632s
STEP: Saw pod success
Feb  6 14:37:57.648: INFO: Pod "pod-secrets-630058c3-db5b-4434-b550-a5fe80fabfd5" satisfied condition "success or failure"
Feb  6 14:37:57.652: INFO: Trying to get logs from node iruya-node pod pod-secrets-630058c3-db5b-4434-b550-a5fe80fabfd5 container secret-volume-test: 
STEP: delete the pod
Feb  6 14:37:57.725: INFO: Waiting for pod pod-secrets-630058c3-db5b-4434-b550-a5fe80fabfd5 to disappear
Feb  6 14:37:57.730: INFO: Pod pod-secrets-630058c3-db5b-4434-b550-a5fe80fabfd5 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:37:57.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7535" for this suite.
Feb  6 14:38:03.807: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:38:03.985: INFO: namespace secrets-7535 deletion completed in 6.244668996s

• [SLOW TEST:16.565 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
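A sketch of the volume stanza under test, with hypothetical names: defaultMode applies to every file projected from the secret (YAML octal 0400 is stored by the API as decimal 256).

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: secret-mode-demo           # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "ls -l /etc/secret-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: my-secret        # assumed to already exist
      defaultMode: 0400            # every projected file gets mode 0400
EOF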
SSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:38:03.986: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-00efe42e-1eca-4442-b16b-080f7ab9ba60
STEP: Creating a pod to test consume secrets
Feb  6 14:38:04.144: INFO: Waiting up to 5m0s for pod "pod-secrets-d33bac99-10c1-4f4c-8181-59584ad09511" in namespace "secrets-1570" to be "success or failure"
Feb  6 14:38:04.161: INFO: Pod "pod-secrets-d33bac99-10c1-4f4c-8181-59584ad09511": Phase="Pending", Reason="", readiness=false. Elapsed: 17.227596ms
Feb  6 14:38:06.177: INFO: Pod "pod-secrets-d33bac99-10c1-4f4c-8181-59584ad09511": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032985036s
Feb  6 14:38:08.188: INFO: Pod "pod-secrets-d33bac99-10c1-4f4c-8181-59584ad09511": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044546169s
Feb  6 14:38:10.200: INFO: Pod "pod-secrets-d33bac99-10c1-4f4c-8181-59584ad09511": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056473262s
Feb  6 14:38:12.212: INFO: Pod "pod-secrets-d33bac99-10c1-4f4c-8181-59584ad09511": Phase="Pending", Reason="", readiness=false. Elapsed: 8.068066393s
Feb  6 14:38:14.219: INFO: Pod "pod-secrets-d33bac99-10c1-4f4c-8181-59584ad09511": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.075380973s
STEP: Saw pod success
Feb  6 14:38:14.219: INFO: Pod "pod-secrets-d33bac99-10c1-4f4c-8181-59584ad09511" satisfied condition "success or failure"
Feb  6 14:38:14.224: INFO: Trying to get logs from node iruya-node pod pod-secrets-d33bac99-10c1-4f4c-8181-59584ad09511 container secret-volume-test: 
STEP: delete the pod
Feb  6 14:38:14.675: INFO: Waiting for pod pod-secrets-d33bac99-10c1-4f4c-8181-59584ad09511 to disappear
Feb  6 14:38:14.681: INFO: Pod pod-secrets-d33bac99-10c1-4f4c-8181-59584ad09511 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:38:14.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1570" for this suite.
Feb  6 14:38:20.753: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:38:20.890: INFO: namespace secrets-1570 deletion completed in 6.199833045s

• [SLOW TEST:16.904 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
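The "mappings and Item Mode" variant differs from the previous sketch only in the volume stanza: items selects individual keys, path renames them, and a per-item mode overrides defaultMode for that one file (key and path names are illustrative):

  volumes:
  - name: secret-volume
    secret:
      secretName: my-secret
      items:
      - key: data-1                # only this key is projected
        path: new-path-data-1      # mounted as .../new-path-data-1
        mode: 0400                 # per-item mode overrides defaultMode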
SSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:38:20.890: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-641ac6d4-63a6-49a7-bbac-f1112922d7f7
STEP: Creating a pod to test consume configMaps
Feb  6 14:38:20.964: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-041c6804-5871-4e3a-9498-189bd15a6606" in namespace "projected-8000" to be "success or failure"
Feb  6 14:38:20.974: INFO: Pod "pod-projected-configmaps-041c6804-5871-4e3a-9498-189bd15a6606": Phase="Pending", Reason="", readiness=false. Elapsed: 9.863062ms
Feb  6 14:38:22.987: INFO: Pod "pod-projected-configmaps-041c6804-5871-4e3a-9498-189bd15a6606": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02323561s
Feb  6 14:38:24.998: INFO: Pod "pod-projected-configmaps-041c6804-5871-4e3a-9498-189bd15a6606": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033670106s
Feb  6 14:38:27.007: INFO: Pod "pod-projected-configmaps-041c6804-5871-4e3a-9498-189bd15a6606": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043260669s
Feb  6 14:38:29.015: INFO: Pod "pod-projected-configmaps-041c6804-5871-4e3a-9498-189bd15a6606": Phase="Pending", Reason="", readiness=false. Elapsed: 8.050642729s
Feb  6 14:38:31.383: INFO: Pod "pod-projected-configmaps-041c6804-5871-4e3a-9498-189bd15a6606": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.418422623s
STEP: Saw pod success
Feb  6 14:38:31.383: INFO: Pod "pod-projected-configmaps-041c6804-5871-4e3a-9498-189bd15a6606" satisfied condition "success or failure"
Feb  6 14:38:31.389: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-041c6804-5871-4e3a-9498-189bd15a6606 container projected-configmap-volume-test: 
STEP: delete the pod
Feb  6 14:38:31.484: INFO: Waiting for pod pod-projected-configmaps-041c6804-5871-4e3a-9498-189bd15a6606 to disappear
Feb  6 14:38:31.578: INFO: Pod pod-projected-configmaps-041c6804-5871-4e3a-9498-189bd15a6606 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:38:31.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8000" for this suite.
Feb  6 14:38:37.635: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:38:37.763: INFO: namespace projected-8000 deletion completed in 6.174914164s

• [SLOW TEST:16.873 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
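A sketch combining a projected ConfigMap volume with a non-root security context, which is what the non-root [LinuxOnly] variant checks (uid and names are illustrative):

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: projected-nonroot-demo     # hypothetical name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                # run the container as a non-root uid
  containers:
  - name: projected-configmap-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/cfg/*"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/cfg
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: my-cm              # assumed to already exist
EOF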
SSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:38:37.764: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  6 14:38:37.980: INFO: Waiting up to 5m0s for pod "downwardapi-volume-acf292f2-3f69-46cf-b303-352082c67a55" in namespace "downward-api-7882" to be "success or failure"
Feb  6 14:38:38.047: INFO: Pod "downwardapi-volume-acf292f2-3f69-46cf-b303-352082c67a55": Phase="Pending", Reason="", readiness=false. Elapsed: 66.66933ms
Feb  6 14:38:40.054: INFO: Pod "downwardapi-volume-acf292f2-3f69-46cf-b303-352082c67a55": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074285616s
Feb  6 14:38:42.062: INFO: Pod "downwardapi-volume-acf292f2-3f69-46cf-b303-352082c67a55": Phase="Pending", Reason="", readiness=false. Elapsed: 4.081948315s
Feb  6 14:38:44.070: INFO: Pod "downwardapi-volume-acf292f2-3f69-46cf-b303-352082c67a55": Phase="Pending", Reason="", readiness=false. Elapsed: 6.089708913s
Feb  6 14:38:46.078: INFO: Pod "downwardapi-volume-acf292f2-3f69-46cf-b303-352082c67a55": Phase="Pending", Reason="", readiness=false. Elapsed: 8.097817502s
Feb  6 14:38:48.084: INFO: Pod "downwardapi-volume-acf292f2-3f69-46cf-b303-352082c67a55": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.104150779s
STEP: Saw pod success
Feb  6 14:38:48.084: INFO: Pod "downwardapi-volume-acf292f2-3f69-46cf-b303-352082c67a55" satisfied condition "success or failure"
Feb  6 14:38:48.087: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-acf292f2-3f69-46cf-b303-352082c67a55 container client-container: 
STEP: delete the pod
Feb  6 14:38:48.210: INFO: Waiting for pod downwardapi-volume-acf292f2-3f69-46cf-b303-352082c67a55 to disappear
Feb  6 14:38:48.213: INFO: Pod downwardapi-volume-acf292f2-3f69-46cf-b303-352082c67a55 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:38:48.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7882" for this suite.
Feb  6 14:38:54.241: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:38:54.375: INFO: namespace downward-api-7882 deletion completed in 6.156932656s

• [SLOW TEST:16.612 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
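For the downward API case the per-item mode sits on a fieldRef item instead of a secret key; a drop-in volumes stanza in the shape the spec exercises (path and field are illustrative):

  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname              # file exposing the pod's own name
        fieldRef:
          fieldPath: metadata.name
        mode: 0400                 # the "mode on item file" under test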
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:38:54.376: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Feb  6 14:38:54.627: INFO: Waiting up to 5m0s for pod "pod-28b54e22-50d3-47d1-a45d-5d015b04a081" in namespace "emptydir-5891" to be "success or failure"
Feb  6 14:38:54.651: INFO: Pod "pod-28b54e22-50d3-47d1-a45d-5d015b04a081": Phase="Pending", Reason="", readiness=false. Elapsed: 24.607071ms
Feb  6 14:38:56.673: INFO: Pod "pod-28b54e22-50d3-47d1-a45d-5d015b04a081": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046140414s
Feb  6 14:38:58.684: INFO: Pod "pod-28b54e22-50d3-47d1-a45d-5d015b04a081": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057091408s
Feb  6 14:39:00.692: INFO: Pod "pod-28b54e22-50d3-47d1-a45d-5d015b04a081": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064794161s
Feb  6 14:39:02.702: INFO: Pod "pod-28b54e22-50d3-47d1-a45d-5d015b04a081": Phase="Pending", Reason="", readiness=false. Elapsed: 8.074976459s
Feb  6 14:39:04.718: INFO: Pod "pod-28b54e22-50d3-47d1-a45d-5d015b04a081": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.090764209s
STEP: Saw pod success
Feb  6 14:39:04.718: INFO: Pod "pod-28b54e22-50d3-47d1-a45d-5d015b04a081" satisfied condition "success or failure"
Feb  6 14:39:04.729: INFO: Trying to get logs from node iruya-node pod pod-28b54e22-50d3-47d1-a45d-5d015b04a081 container test-container: 
STEP: delete the pod
Feb  6 14:39:04.802: INFO: Waiting for pod pod-28b54e22-50d3-47d1-a45d-5d015b04a081 to disappear
Feb  6 14:39:04.805: INFO: Pod pod-28b54e22-50d3-47d1-a45d-5d015b04a081 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:39:04.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5891" for this suite.
Feb  6 14:39:10.836: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:39:10.969: INFO: namespace emptydir-5891 deletion completed in 6.158922512s

• [SLOW TEST:16.594 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
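The (root,0644,default) case writes as root, with file mode 0644, into an emptyDir on the default medium (node storage rather than tmpfs). A sketch with hypothetical names:

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo              # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "echo data > /scratch/f && chmod 0644 /scratch/f && ls -l /scratch/f"]
    volumeMounts:
    - name: scratch
      mountPath: /scratch
  volumes:
  - name: scratch
    emptyDir: {}                   # default medium; use medium: Memory for tmpfs
EOF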
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:39:10.970: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-6035
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Feb  6 14:39:11.120: INFO: Found 0 stateful pods, waiting for 3
Feb  6 14:39:21.230: INFO: Found 2 stateful pods, waiting for 3
Feb  6 14:39:31.126: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  6 14:39:31.126: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  6 14:39:31.126: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb  6 14:39:41.124: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  6 14:39:41.124: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  6 14:39:41.124: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Feb  6 14:39:41.153: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Feb  6 14:39:52.133: INFO: Updating stateful set ss2
Feb  6 14:39:52.172: INFO: Waiting for Pod statefulset-6035/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  6 14:40:02.231: INFO: Waiting for Pod statefulset-6035/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Feb  6 14:40:12.605: INFO: Found 2 stateful pods, waiting for 3
Feb  6 14:40:22.657: INFO: Found 2 stateful pods, waiting for 3
Feb  6 14:40:32.627: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  6 14:40:32.627: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  6 14:40:32.627: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb  6 14:40:42.627: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  6 14:40:42.627: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  6 14:40:42.627: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Feb  6 14:40:42.666: INFO: Updating stateful set ss2
Feb  6 14:40:42.693: INFO: Waiting for Pod statefulset-6035/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  6 14:40:52.704: INFO: Waiting for Pod statefulset-6035/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  6 14:41:03.038: INFO: Updating stateful set ss2
Feb  6 14:41:03.092: INFO: Waiting for StatefulSet statefulset-6035/ss2 to complete update
Feb  6 14:41:03.092: INFO: Waiting for Pod statefulset-6035/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  6 14:41:13.104: INFO: Waiting for StatefulSet statefulset-6035/ss2 to complete update
Feb  6 14:41:13.104: INFO: Waiting for Pod statefulset-6035/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  6 14:41:23.101: INFO: Waiting for StatefulSet statefulset-6035/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Feb  6 14:41:33.149: INFO: Deleting all statefulset in ns statefulset-6035
Feb  6 14:41:33.154: INFO: Scaling statefulset ss2 to 0
Feb  6 14:42:13.190: INFO: Waiting for statefulset status.replicas updated to 0
Feb  6 14:42:13.200: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:42:13.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6035" for this suite.
Feb  6 14:42:21.320: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:42:21.453: INFO: namespace statefulset-6035 deletion completed in 8.168499106s

• [SLOW TEST:190.483 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
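The canary and phased steps above are driven by spec.updateStrategy.rollingUpdate.partition: only pods with an ordinal >= partition move to the new revision, so lowering the partition step by step phases the rollout. A sketch against this run's StatefulSet (partition values illustrative):

# canary: with 3 replicas, partition=2 updates only the highest ordinal, ss2-2
kubectl patch statefulset ss2 -n statefulset-6035 --type merge \
  -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":2}}}}'
# phase the rollout: lower the partition until every pod is on the new revision
kubectl patch statefulset ss2 -n statefulset-6035 --type merge \
  -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'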
SSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:42:21.454: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  6 14:42:31.705: INFO: Waiting up to 5m0s for pod "client-envvars-c5b4c47a-d0e9-4c76-b767-52e4e6771ae2" in namespace "pods-1379" to be "success or failure"
Feb  6 14:42:31.802: INFO: Pod "client-envvars-c5b4c47a-d0e9-4c76-b767-52e4e6771ae2": Phase="Pending", Reason="", readiness=false. Elapsed: 97.725692ms
Feb  6 14:42:33.813: INFO: Pod "client-envvars-c5b4c47a-d0e9-4c76-b767-52e4e6771ae2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.108041704s
Feb  6 14:42:35.829: INFO: Pod "client-envvars-c5b4c47a-d0e9-4c76-b767-52e4e6771ae2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.124207675s
Feb  6 14:42:37.840: INFO: Pod "client-envvars-c5b4c47a-d0e9-4c76-b767-52e4e6771ae2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.135085065s
Feb  6 14:42:39.849: INFO: Pod "client-envvars-c5b4c47a-d0e9-4c76-b767-52e4e6771ae2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.144193816s
Feb  6 14:42:41.862: INFO: Pod "client-envvars-c5b4c47a-d0e9-4c76-b767-52e4e6771ae2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.157049245s
STEP: Saw pod success
Feb  6 14:42:41.862: INFO: Pod "client-envvars-c5b4c47a-d0e9-4c76-b767-52e4e6771ae2" satisfied condition "success or failure"
Feb  6 14:42:41.874: INFO: Trying to get logs from node iruya-node pod client-envvars-c5b4c47a-d0e9-4c76-b767-52e4e6771ae2 container env3cont: 
STEP: delete the pod
Feb  6 14:42:42.000: INFO: Waiting for pod client-envvars-c5b4c47a-d0e9-4c76-b767-52e4e6771ae2 to disappear
Feb  6 14:42:42.010: INFO: Pod client-envvars-c5b4c47a-d0e9-4c76-b767-52e4e6771ae2 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:42:42.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1379" for this suite.
Feb  6 14:43:26.038: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:43:26.162: INFO: namespace pods-1379 deletion completed in 44.140495539s

• [SLOW TEST:64.709 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
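The spec checks the docker-links-style variables the kubelet injects for every service that exists when a pod starts. For a hypothetical service named fooservice, a pod created afterwards would show variables of this shape (pod name and values are illustrative):

kubectl exec client-envvars-demo -- sh -c 'env | grep ^FOOSERVICE_'
# FOOSERVICE_SERVICE_HOST=10.96.0.123
# FOOSERVICE_SERVICE_PORT=8765
# FOOSERVICE_PORT=tcp://10.96.0.123:8765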
SSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:43:26.163: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service endpoint-test2 in namespace services-9574
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9574 to expose endpoints map[]
Feb  6 14:43:26.328: INFO: Get endpoints failed (29.243383ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Feb  6 14:43:27.345: INFO: successfully validated that service endpoint-test2 in namespace services-9574 exposes endpoints map[] (1.046935365s elapsed)
STEP: Creating pod pod1 in namespace services-9574
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9574 to expose endpoints map[pod1:[80]]
Feb  6 14:43:31.549: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.176700236s elapsed, will retry)
Feb  6 14:43:36.778: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (9.405522589s elapsed, will retry)
Feb  6 14:43:38.816: INFO: successfully validated that service endpoint-test2 in namespace services-9574 exposes endpoints map[pod1:[80]] (11.443784428s elapsed)
STEP: Creating pod pod2 in namespace services-9574
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9574 to expose endpoints map[pod1:[80] pod2:[80]]
Feb  6 14:43:43.329: INFO: Unexpected endpoints: found map[9c114095-37e7-459f-899e-a0b81d2f7857:[80]], expected map[pod1:[80] pod2:[80]] (4.491788827s elapsed, will retry)
Feb  6 14:43:47.476: INFO: successfully validated that service endpoint-test2 in namespace services-9574 exposes endpoints map[pod1:[80] pod2:[80]] (8.638876906s elapsed)
STEP: Deleting pod pod1 in namespace services-9574
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9574 to expose endpoints map[pod2:[80]]
Feb  6 14:43:48.581: INFO: successfully validated that service endpoint-test2 in namespace services-9574 exposes endpoints map[pod2:[80]] (1.085468135s elapsed)
STEP: Deleting pod pod2 in namespace services-9574
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9574 to expose endpoints map[]
Feb  6 14:43:48.617: INFO: successfully validated that service endpoint-test2 in namespace services-9574 exposes endpoints map[] (15.456208ms elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:43:48.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9574" for this suite.
Feb  6 14:44:10.742: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:44:10.872: INFO: namespace services-9574 deletion completed in 22.155144148s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:44.710 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
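
Note: the "waiting up to 3m0s ... to expose endpoints" steps poll the Endpoints object that the endpoints controller keeps in sync with the service's ready pods. A rough client-go sketch of that polling loop (v1.15-era signatures, no contexts; waitForEndpointCount is an illustrative name, not a framework helper):

package main

import (
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitForEndpointCount polls a service's Endpoints until the expected
// number of ready addresses shows up, or the deadline passes.
func waitForEndpointCount(cs *kubernetes.Clientset, ns, svc string, want int) error {
	deadline := time.Now().Add(3 * time.Minute)
	for time.Now().Before(deadline) {
		ep, err := cs.CoreV1().Endpoints(ns).Get(svc, metav1.GetOptions{})
		if err == nil {
			got := 0
			for _, ss := range ep.Subsets {
				got += len(ss.Addresses)
			}
			if got == want {
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("service %s/%s never exposed %d endpoints", ns, svc, want)
}
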
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:44:10.874: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Feb  6 14:44:21.558: INFO: Successfully updated pod "pod-update-activedeadlineseconds-fb2fa742-65f1-4594-9442-0fa05045a469"
Feb  6 14:44:21.558: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-fb2fa742-65f1-4594-9442-0fa05045a469" in namespace "pods-1852" to be "terminated due to deadline exceeded"
Feb  6 14:44:21.590: INFO: Pod "pod-update-activedeadlineseconds-fb2fa742-65f1-4594-9442-0fa05045a469": Phase="Running", Reason="", readiness=true. Elapsed: 31.414938ms
Feb  6 14:44:23.599: INFO: Pod "pod-update-activedeadlineseconds-fb2fa742-65f1-4594-9442-0fa05045a469": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.040782327s
Feb  6 14:44:23.599: INFO: Pod "pod-update-activedeadlineseconds-fb2fa742-65f1-4594-9442-0fa05045a469" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:44:23.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1852" for this suite.
Feb  6 14:44:29.636: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:44:29.775: INFO: namespace pods-1852 deletion completed in 6.165760516s

• [SLOW TEST:18.902 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
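
Note: "updating the pod" here means shrinking spec.activeDeadlineSeconds on a running pod; the kubelet then terminates it, and the pod ends Failed with reason DeadlineExceeded, which is exactly the condition the test waits for above. A sketch of that update as a strategic-merge patch (client-go v1.15 signatures; shortenDeadline and the 5s value are illustrative):

package main

import (
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// shortenDeadline patches spec.activeDeadlineSeconds so the running pod
// is terminated once the new, shorter deadline is exceeded.
func shortenDeadline(cs *kubernetes.Clientset, ns, pod string) error {
	patch := []byte(`{"spec":{"activeDeadlineSeconds":5}}`)
	_, err := cs.CoreV1().Pods(ns).Patch(pod, types.StrategicMergePatchType, patch)
	return err
}
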
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:44:29.778: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-3136, will wait for the garbage collector to delete the pods
Feb  6 14:44:42.130: INFO: Deleting Job.batch foo took: 14.168401ms
Feb  6 14:44:42.430: INFO: Terminating Job.batch foo pods took: 300.488833ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:45:26.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-3136" for this suite.
Feb  6 14:45:32.814: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:45:32.982: INFO: namespace job-3136 deletion completed in 6.200875084s

• [SLOW TEST:63.204 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
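
Note: "will wait for the garbage collector to delete the pods" means the Job is deleted with a propagation policy rather than by reaping its pods directly. A sketch (client-go v1.15 signatures; deleteJobAndPods is an illustrative name, and Background is one reasonable policy choice here, not necessarily the one the framework uses):

package main

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteJobAndPods removes a Job and leaves its pods to the garbage
// collector, which deletes them once their owner is gone.
func deleteJobAndPods(cs *kubernetes.Clientset, ns, name string) error {
	policy := metav1.DeletePropagationBackground
	return cs.BatchV1().Jobs(ns).Delete(name, &metav1.DeleteOptions{
		PropagationPolicy: &policy,
	})
}
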
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:45:32.982: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  6 14:45:33.110: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2dc289c8-0b5e-40ba-b906-1838aac930c0" in namespace "downward-api-2844" to be "success or failure"
Feb  6 14:45:33.113: INFO: Pod "downwardapi-volume-2dc289c8-0b5e-40ba-b906-1838aac930c0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.217418ms
Feb  6 14:45:35.123: INFO: Pod "downwardapi-volume-2dc289c8-0b5e-40ba-b906-1838aac930c0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013482708s
Feb  6 14:45:37.131: INFO: Pod "downwardapi-volume-2dc289c8-0b5e-40ba-b906-1838aac930c0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021132742s
Feb  6 14:45:39.469: INFO: Pod "downwardapi-volume-2dc289c8-0b5e-40ba-b906-1838aac930c0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.359499714s
Feb  6 14:45:41.479: INFO: Pod "downwardapi-volume-2dc289c8-0b5e-40ba-b906-1838aac930c0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.368947842s
Feb  6 14:45:43.487: INFO: Pod "downwardapi-volume-2dc289c8-0b5e-40ba-b906-1838aac930c0": Phase="Pending", Reason="", readiness=false. Elapsed: 10.377219113s
Feb  6 14:45:46.289: INFO: Pod "downwardapi-volume-2dc289c8-0b5e-40ba-b906-1838aac930c0": Phase="Pending", Reason="", readiness=false. Elapsed: 13.178790046s
Feb  6 14:45:48.297: INFO: Pod "downwardapi-volume-2dc289c8-0b5e-40ba-b906-1838aac930c0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.187040489s
STEP: Saw pod success
Feb  6 14:45:48.297: INFO: Pod "downwardapi-volume-2dc289c8-0b5e-40ba-b906-1838aac930c0" satisfied condition "success or failure"
Feb  6 14:45:48.302: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-2dc289c8-0b5e-40ba-b906-1838aac930c0 container client-container: 
STEP: delete the pod
Feb  6 14:45:48.392: INFO: Waiting for pod downwardapi-volume-2dc289c8-0b5e-40ba-b906-1838aac930c0 to disappear
Feb  6 14:45:48.517: INFO: Pod downwardapi-volume-2dc289c8-0b5e-40ba-b906-1838aac930c0 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:45:48.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2844" for this suite.
Feb  6 14:45:54.562: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:45:54.664: INFO: namespace downward-api-2844 deletion completed in 6.137735923s

• [SLOW TEST:21.681 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
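
Note: this "downward API volume plugin" spec projects the container's own memory limit into a file and has the container print it back. A sketch of the pod shape (k8s.io/api v1.15 types; the names, busybox image, and 64Mi limit are illustrative, not the test's exact values):

package main

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// downwardAPIMemLimitPod mounts a downwardAPI volume whose single file
// carries limits.memory for the client-container.
func downwardAPIMemLimitPod(name string) *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/podinfo/mem_limit"},
				Resources: v1.ResourceRequirements{
					Limits: v1.ResourceList{v1.ResourceMemory: resource.MustParse("64Mi")},
				},
				VolumeMounts: []v1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []v1.Volume{{
				Name: "podinfo",
				VolumeSource: v1.VolumeSource{
					DownwardAPI: &v1.DownwardAPIVolumeSource{
						Items: []v1.DownwardAPIVolumeFile{{
							Path: "mem_limit",
							ResourceFieldRef: &v1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.memory",
							},
						}},
					},
				},
			}},
		},
	}
}
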
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:45:54.666: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-95385036-8331-4376-9cbf-afd75d3bed5c
STEP: Creating a pod to test consume configMaps
Feb  6 14:45:54.848: INFO: Waiting up to 5m0s for pod "pod-configmaps-b2e53770-7c6b-47d6-b28b-04c8b39ebfde" in namespace "configmap-5743" to be "success or failure"
Feb  6 14:45:54.861: INFO: Pod "pod-configmaps-b2e53770-7c6b-47d6-b28b-04c8b39ebfde": Phase="Pending", Reason="", readiness=false. Elapsed: 12.705021ms
Feb  6 14:45:56.876: INFO: Pod "pod-configmaps-b2e53770-7c6b-47d6-b28b-04c8b39ebfde": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027228758s
Feb  6 14:45:58.883: INFO: Pod "pod-configmaps-b2e53770-7c6b-47d6-b28b-04c8b39ebfde": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03417544s
Feb  6 14:46:00.891: INFO: Pod "pod-configmaps-b2e53770-7c6b-47d6-b28b-04c8b39ebfde": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042557481s
Feb  6 14:46:02.898: INFO: Pod "pod-configmaps-b2e53770-7c6b-47d6-b28b-04c8b39ebfde": Phase="Pending", Reason="", readiness=false. Elapsed: 8.049613153s
Feb  6 14:46:04.910: INFO: Pod "pod-configmaps-b2e53770-7c6b-47d6-b28b-04c8b39ebfde": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.061427458s
STEP: Saw pod success
Feb  6 14:46:04.910: INFO: Pod "pod-configmaps-b2e53770-7c6b-47d6-b28b-04c8b39ebfde" satisfied condition "success or failure"
Feb  6 14:46:04.916: INFO: Trying to get logs from node iruya-node pod pod-configmaps-b2e53770-7c6b-47d6-b28b-04c8b39ebfde container configmap-volume-test: 
STEP: delete the pod
Feb  6 14:46:04.971: INFO: Waiting for pod pod-configmaps-b2e53770-7c6b-47d6-b28b-04c8b39ebfde to disappear
Feb  6 14:46:05.024: INFO: Pod pod-configmaps-b2e53770-7c6b-47d6-b28b-04c8b39ebfde no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:46:05.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5743" for this suite.
Feb  6 14:46:11.091: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:46:11.183: INFO: namespace configmap-5743 deletion completed in 6.152707906s

• [SLOW TEST:16.518 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
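
Note: "mappings and Item mode set" means the ConfigMap is not mounted wholesale: a single key is remapped to a chosen relative path and given an explicit file mode, which the pod then reads back. A sketch of that volume definition (k8s.io/api v1.15 types; the key, path, and 0400 mode are illustrative):

package main

import (
	v1 "k8s.io/api/core/v1"
)

// configMapVolume maps one ConfigMap key to a custom relative path and
// forces the file mode on the projected file.
func configMapVolume(cmName string) v1.Volume {
	mode := int32(0400)
	return v1.Volume{
		Name: "configmap-volume",
		VolumeSource: v1.VolumeSource{
			ConfigMap: &v1.ConfigMapVolumeSource{
				LocalObjectReference: v1.LocalObjectReference{Name: cmName},
				Items: []v1.KeyToPath{{Key: "data-1", Path: "path/to/data-2", Mode: &mode}},
			},
		},
	}
}
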
SSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:46:11.183: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-242cd2c5-11ab-41c3-bc24-411d591570b2
STEP: Creating a pod to test consume secrets
Feb  6 14:46:11.358: INFO: Waiting up to 5m0s for pod "pod-secrets-f1d53530-2992-4836-94f2-6658d9b18a39" in namespace "secrets-9804" to be "success or failure"
Feb  6 14:46:11.366: INFO: Pod "pod-secrets-f1d53530-2992-4836-94f2-6658d9b18a39": Phase="Pending", Reason="", readiness=false. Elapsed: 8.223004ms
Feb  6 14:46:13.438: INFO: Pod "pod-secrets-f1d53530-2992-4836-94f2-6658d9b18a39": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080118567s
Feb  6 14:46:15.448: INFO: Pod "pod-secrets-f1d53530-2992-4836-94f2-6658d9b18a39": Phase="Pending", Reason="", readiness=false. Elapsed: 4.090290653s
Feb  6 14:46:17.498: INFO: Pod "pod-secrets-f1d53530-2992-4836-94f2-6658d9b18a39": Phase="Pending", Reason="", readiness=false. Elapsed: 6.140368707s
Feb  6 14:46:19.514: INFO: Pod "pod-secrets-f1d53530-2992-4836-94f2-6658d9b18a39": Phase="Pending", Reason="", readiness=false. Elapsed: 8.155636266s
Feb  6 14:46:21.522: INFO: Pod "pod-secrets-f1d53530-2992-4836-94f2-6658d9b18a39": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.163742162s
STEP: Saw pod success
Feb  6 14:46:21.522: INFO: Pod "pod-secrets-f1d53530-2992-4836-94f2-6658d9b18a39" satisfied condition "success or failure"
Feb  6 14:46:21.526: INFO: Trying to get logs from node iruya-node pod pod-secrets-f1d53530-2992-4836-94f2-6658d9b18a39 container secret-volume-test: 
STEP: delete the pod
Feb  6 14:46:21.648: INFO: Waiting for pod pod-secrets-f1d53530-2992-4836-94f2-6658d9b18a39 to disappear
Feb  6 14:46:21.653: INFO: Pod pod-secrets-f1d53530-2992-4836-94f2-6658d9b18a39 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:46:21.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9804" for this suite.
Feb  6 14:46:27.694: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:46:27.830: INFO: namespace secrets-9804 deletion completed in 6.168801531s

• [SLOW TEST:16.646 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
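
Note: "consumable in multiple volumes" only requires the same Secret to back two volumes mounted at different paths in one pod. A sketch (k8s.io/api v1.15 types; twoSecretMounts and the mount paths are illustrative):

package main

import (
	v1 "k8s.io/api/core/v1"
)

// twoSecretMounts builds two volumes backed by one Secret plus the
// matching read-only mounts for a single container.
func twoSecretMounts(secretName string) ([]v1.Volume, []v1.VolumeMount) {
	mk := func(volName string) v1.Volume {
		return v1.Volume{
			Name: volName,
			VolumeSource: v1.VolumeSource{
				Secret: &v1.SecretVolumeSource{SecretName: secretName},
			},
		}
	}
	vols := []v1.Volume{mk("secret-volume-1"), mk("secret-volume-2")}
	mounts := []v1.VolumeMount{
		{Name: "secret-volume-1", MountPath: "/etc/secret-volume-1", ReadOnly: true},
		{Name: "secret-volume-2", MountPath: "/etc/secret-volume-2", ReadOnly: true},
	}
	return vols, mounts
}
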
SS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:46:27.830: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  6 14:46:27.883: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:46:29.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-207" for this suite.
Feb  6 14:46:35.133: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:46:35.224: INFO: namespace custom-resource-definition-207 deletion completed in 6.165389409s

• [SLOW TEST:7.394 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
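
Note: this spec simply round-trips a CustomResourceDefinition through the apiextensions API, which is why it needs no pods and finishes quickly. A sketch of the same create/delete cycle (apiextensions v1beta1, matching the 1.15 server; the foos.example.com CRD is illustrative):

package main

import (
	apiextv1beta1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"
	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/rest"
)

// createAndDeleteCRD registers a throwaway CRD and removes it again.
func createAndDeleteCRD(cfg *rest.Config) error {
	client, err := apiextclient.NewForConfig(cfg)
	if err != nil {
		return err
	}
	crd := &apiextv1beta1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "foos.example.com"},
		Spec: apiextv1beta1.CustomResourceDefinitionSpec{
			Group:   "example.com",
			Version: "v1",
			Scope:   apiextv1beta1.NamespaceScoped,
			Names: apiextv1beta1.CustomResourceDefinitionNames{
				Plural:   "foos",
				Singular: "foo",
				Kind:     "Foo",
				ListKind: "FooList",
			},
		},
	}
	if _, err := client.ApiextensionsV1beta1().CustomResourceDefinitions().Create(crd); err != nil {
		return err
	}
	return client.ApiextensionsV1beta1().CustomResourceDefinitions().Delete(crd.Name, &metav1.DeleteOptions{})
}
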
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:46:35.224: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0206 14:46:46.221612       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  6 14:46:46.221: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:46:46.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7834" for this suite.
Feb  6 14:47:02.869: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:47:02.992: INFO: namespace gc-7834 deletion completed in 16.766278971s

• [SLOW TEST:27.767 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
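
Note: the "set half of pods ... to have rc simpletest-rc-to-stay as owner as well" step gives those pods two ownerReferences, so deleting simpletest-rc-to-be-deleted must leave them alive while a valid owner remains. A sketch of what the second owner looks like on the object (k8s.io/api v1.15 types; the e2e test applies this via a patch rather than mutating in memory):

package main

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// addSecondOwner appends another ReplicationController to a pod's
// ownerReferences; the garbage collector only deletes a dependent once
// none of its owners remain valid.
func addSecondOwner(pod *v1.Pod, rc *v1.ReplicationController) {
	pod.OwnerReferences = append(pod.OwnerReferences, metav1.OwnerReference{
		APIVersion: "v1",
		Kind:       "ReplicationController",
		Name:       rc.Name,
		UID:        rc.UID,
	})
}
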
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:47:02.992: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Feb  6 14:47:03.199: INFO: Waiting up to 5m0s for pod "pod-15067ddc-6394-4eef-9034-c917412235fc" in namespace "emptydir-522" to be "success or failure"
Feb  6 14:47:03.219: INFO: Pod "pod-15067ddc-6394-4eef-9034-c917412235fc": Phase="Pending", Reason="", readiness=false. Elapsed: 20.311752ms
Feb  6 14:47:05.225: INFO: Pod "pod-15067ddc-6394-4eef-9034-c917412235fc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02652888s
Feb  6 14:47:07.239: INFO: Pod "pod-15067ddc-6394-4eef-9034-c917412235fc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040540256s
Feb  6 14:47:09.250: INFO: Pod "pod-15067ddc-6394-4eef-9034-c917412235fc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051037547s
Feb  6 14:47:11.260: INFO: Pod "pod-15067ddc-6394-4eef-9034-c917412235fc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.060947913s
Feb  6 14:47:13.269: INFO: Pod "pod-15067ddc-6394-4eef-9034-c917412235fc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.069962096s
Feb  6 14:47:15.277: INFO: Pod "pod-15067ddc-6394-4eef-9034-c917412235fc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.078354868s
STEP: Saw pod success
Feb  6 14:47:15.277: INFO: Pod "pod-15067ddc-6394-4eef-9034-c917412235fc" satisfied condition "success or failure"
Feb  6 14:47:15.283: INFO: Trying to get logs from node iruya-node pod pod-15067ddc-6394-4eef-9034-c917412235fc container test-container: 
STEP: delete the pod
Feb  6 14:47:15.566: INFO: Waiting for pod pod-15067ddc-6394-4eef-9034-c917412235fc to disappear
Feb  6 14:47:17.765: INFO: Pod pod-15067ddc-6394-4eef-9034-c917412235fc no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:47:17.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-522" for this suite.
Feb  6 14:47:24.008: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:47:24.117: INFO: namespace emptydir-522 deletion completed in 6.330094038s

• [SLOW TEST:21.125 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
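
Note: "(root,0644,tmpfs)" decodes as: write a file as root, with mode 0644, into a memory-backed emptyDir; the memory backing is selected by the volume's medium. A sketch (k8s.io/api v1.15 types; the volume name is illustrative):

package main

import (
	v1 "k8s.io/api/core/v1"
)

// memoryBackedEmptyDir returns the tmpfs flavor of emptyDir; the test
// writes a 0644 file into the mount as root and reads the mode back.
func memoryBackedEmptyDir() v1.Volume {
	return v1.Volume{
		Name: "test-volume",
		VolumeSource: v1.VolumeSource{
			EmptyDir: &v1.EmptyDirVolumeSource{Medium: v1.StorageMediumMemory},
		},
	}
}
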
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:47:24.117: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  6 14:47:24.178: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d8c24ced-b213-4b06-89ab-0ba63558d5a5" in namespace "projected-576" to be "success or failure"
Feb  6 14:47:24.265: INFO: Pod "downwardapi-volume-d8c24ced-b213-4b06-89ab-0ba63558d5a5": Phase="Pending", Reason="", readiness=false. Elapsed: 87.612182ms
Feb  6 14:47:26.273: INFO: Pod "downwardapi-volume-d8c24ced-b213-4b06-89ab-0ba63558d5a5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094955754s
Feb  6 14:47:28.280: INFO: Pod "downwardapi-volume-d8c24ced-b213-4b06-89ab-0ba63558d5a5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.102092441s
Feb  6 14:47:30.290: INFO: Pod "downwardapi-volume-d8c24ced-b213-4b06-89ab-0ba63558d5a5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.112453799s
Feb  6 14:47:32.298: INFO: Pod "downwardapi-volume-d8c24ced-b213-4b06-89ab-0ba63558d5a5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.120128198s
Feb  6 14:47:34.305: INFO: Pod "downwardapi-volume-d8c24ced-b213-4b06-89ab-0ba63558d5a5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.126819823s
STEP: Saw pod success
Feb  6 14:47:34.305: INFO: Pod "downwardapi-volume-d8c24ced-b213-4b06-89ab-0ba63558d5a5" satisfied condition "success or failure"
Feb  6 14:47:34.313: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-d8c24ced-b213-4b06-89ab-0ba63558d5a5 container client-container: 
STEP: delete the pod
Feb  6 14:47:34.440: INFO: Waiting for pod downwardapi-volume-d8c24ced-b213-4b06-89ab-0ba63558d5a5 to disappear
Feb  6 14:47:34.452: INFO: Pod downwardapi-volume-d8c24ced-b213-4b06-89ab-0ba63558d5a5 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:47:34.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-576" for this suite.
Feb  6 14:47:40.504: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:47:40.659: INFO: namespace projected-576 deletion completed in 6.200505396s

• [SLOW TEST:16.542 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
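
Note: this is the projected-volume variant of the downward API test earlier: the same resourceFieldRef items sit inside a projected volume's sources list instead of a plain downwardAPI volume. A sketch (k8s.io/api v1.15 types; names are illustrative):

package main

import (
	v1 "k8s.io/api/core/v1"
)

// projectedMemRequest exposes requests.memory for a container through a
// projected volume source.
func projectedMemRequest(containerName string) v1.Volume {
	return v1.Volume{
		Name: "podinfo",
		VolumeSource: v1.VolumeSource{
			Projected: &v1.ProjectedVolumeSource{
				Sources: []v1.VolumeProjection{{
					DownwardAPI: &v1.DownwardAPIProjection{
						Items: []v1.DownwardAPIVolumeFile{{
							Path: "mem_request",
							ResourceFieldRef: &v1.ResourceFieldSelector{
								ContainerName: containerName,
								Resource:      "requests.memory",
							},
						}},
					},
				}},
			},
		},
	}
}
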
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:47:40.659: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Feb  6 14:47:51.481: INFO: Successfully updated pod "annotationupdate5a31ef2f-84de-4e3b-a743-bb95e6e89131"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:47:54.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1638" for this suite.
Feb  6 14:48:16.162: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:48:16.303: INFO: namespace projected-1638 deletion completed in 22.22664992s

• [SLOW TEST:35.644 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:48:16.303: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Feb  6 14:48:27.797: INFO: Successfully updated pod "pod-update-9bc4e8fb-9728-415a-bc82-78ef12b462f7"
STEP: verifying the updated pod is in kubernetes
Feb  6 14:48:27.813: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:48:27.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4573" for this suite.
Feb  6 14:48:49.855: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:48:50.000: INFO: namespace pods-4573 deletion completed in 22.177836087s

• [SLOW TEST:33.696 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
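
Note: "updating the pod" in this spec is a read-modify-write with Update rather than a patch. A sketch (client-go v1.15 signatures; relabelPod and the label value are illustrative, and real callers usually wrap this in a retry-on-conflict loop):

package main

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// relabelPod fetches the latest pod, mutates a label, and writes it
// back; a stale resourceVersion would make Update fail with a conflict.
func relabelPod(cs *kubernetes.Clientset, ns, name string) error {
	pod, err := cs.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	if pod.Labels == nil {
		pod.Labels = map[string]string{}
	}
	pod.Labels["time"] = "updated"
	_, err = cs.CoreV1().Pods(ns).Update(pod)
	return err
}
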
SSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:48:50.000: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Feb  6 14:48:50.082: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:49:04.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2097" for this suite.
Feb  6 14:49:11.040: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:49:11.138: INFO: namespace pods-2097 deletion completed in 6.122687027s

• [SLOW TEST:21.138 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
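
Note: the "setting up watch" step subscribes to pod events before the pod is created, so the creation and deletion transitions the spec later asserts on cannot be missed. A sketch of such a watch (client-go v1.15 signatures; watchPodLifecycle is an illustrative name):

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
)

// watchPodLifecycle streams events for one pod and returns once its
// deletion has been observed.
func watchPodLifecycle(cs *kubernetes.Clientset, ns, podName string) error {
	w, err := cs.CoreV1().Pods(ns).Watch(metav1.ListOptions{
		FieldSelector: "metadata.name=" + podName,
	})
	if err != nil {
		return err
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		fmt.Printf("observed %s for pod %s\n", ev.Type, podName)
		if ev.Type == watch.Deleted {
			return nil
		}
	}
	return fmt.Errorf("watch closed before deletion was observed")
}
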
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:49:11.138: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  6 14:49:11.242: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-9422'
Feb  6 14:49:13.023: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb  6 14:49:13.023: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Feb  6 14:49:13.038: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Feb  6 14:49:13.109: INFO: scanned /root for discovery docs: 
Feb  6 14:49:13.109: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-9422'
Feb  6 14:49:37.502: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Feb  6 14:49:37.502: INFO: stdout: "Created e2e-test-nginx-rc-8ce28a6b2bd7422c18bfb45701683881\nScaling up e2e-test-nginx-rc-8ce28a6b2bd7422c18bfb45701683881 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-8ce28a6b2bd7422c18bfb45701683881 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-8ce28a6b2bd7422c18bfb45701683881 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Feb  6 14:49:37.503: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-9422'
Feb  6 14:49:37.633: INFO: stderr: ""
Feb  6 14:49:37.633: INFO: stdout: "e2e-test-nginx-rc-8ce28a6b2bd7422c18bfb45701683881-f9lwk "
Feb  6 14:49:37.634: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-8ce28a6b2bd7422c18bfb45701683881-f9lwk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9422'
Feb  6 14:49:37.731: INFO: stderr: ""
Feb  6 14:49:37.731: INFO: stdout: "true"
Feb  6 14:49:37.732: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-8ce28a6b2bd7422c18bfb45701683881-f9lwk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9422'
Feb  6 14:49:37.893: INFO: stderr: ""
Feb  6 14:49:37.893: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Feb  6 14:49:37.893: INFO: e2e-test-nginx-rc-8ce28a6b2bd7422c18bfb45701683881-f9lwk is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522
Feb  6 14:49:37.893: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-9422'
Feb  6 14:49:38.172: INFO: stderr: ""
Feb  6 14:49:38.172: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:49:38.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9422" for this suite.
Feb  6 14:50:00.303: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:50:00.450: INFO: namespace kubectl-9422 deletion completed in 22.205983839s

• [SLOW TEST:49.312 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:50:00.450: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb  6 14:50:10.705: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:50:10.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-775" for this suite.
Feb  6 14:50:16.899: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:50:17.174: INFO: namespace container-runtime-775 deletion completed in 6.379720845s

• [SLOW TEST:16.724 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
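
Note: the interesting parts of this spec are a non-default terminationMessagePath plus a non-root securityContext: the container writes DONE as an unprivileged user to a custom path, and the kubelet must still surface it in the container status, as the "Expected: &{DONE}" line above confirms. A sketch of the container (k8s.io/api v1.15 types; the image, UID, and path are illustrative):

package main

import (
	v1 "k8s.io/api/core/v1"
)

// nonRootTerminationMessage writes DONE to a custom termination message
// path while running as a non-root user.
func nonRootTerminationMessage() v1.Container {
	uid := int64(1000)
	return v1.Container{
		Name:                   "termination-message-container",
		Image:                  "busybox",
		Command:                []string{"sh", "-c", "echo -n DONE > /dev/termination-custom-log"},
		TerminationMessagePath: "/dev/termination-custom-log",
		SecurityContext:        &v1.SecurityContext{RunAsUser: &uid},
	}
}
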
SSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:50:17.175: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test hostPath mode
Feb  6 14:50:17.327: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-2046" to be "success or failure"
Feb  6 14:50:17.338: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.751916ms
Feb  6 14:50:19.347: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019776158s
Feb  6 14:50:21.354: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026066449s
Feb  6 14:50:23.370: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042756867s
Feb  6 14:50:25.379: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.051101643s
Feb  6 14:50:27.391: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.063440007s
Feb  6 14:50:29.403: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 12.075721176s
Feb  6 14:50:32.034: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.706291596s
STEP: Saw pod success
Feb  6 14:50:32.034: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Feb  6 14:50:32.048: INFO: Trying to get logs from node iruya-node pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Feb  6 14:50:32.120: INFO: Waiting for pod pod-host-path-test to disappear
Feb  6 14:50:32.124: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:50:32.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-2046" for this suite.
Feb  6 14:50:38.268: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:50:38.407: INFO: namespace hostpath-2046 deletion completed in 6.276262291s

• [SLOW TEST:21.232 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
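
Note: pod-host-path-test mounts a directory from the node's filesystem and stats the mount point to verify its mode bits. A sketch of the volume (k8s.io/api v1.15 types; the DirectoryOrCreate type is an illustrative choice, not necessarily the test's):

package main

import (
	v1 "k8s.io/api/core/v1"
)

// hostPathVolume mounts a node directory into the pod; the test's
// containers then check the mode of the mounted directory.
func hostPathVolume(path string) v1.Volume {
	t := v1.HostPathDirectoryOrCreate
	return v1.Volume{
		Name: "test-volume",
		VolumeSource: v1.VolumeSource{
			HostPath: &v1.HostPathVolumeSource{Path: path, Type: &t},
		},
	}
}
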
S
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:50:38.407: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Feb  6 14:50:38.596: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb  6 14:50:38.604: INFO: Waiting for terminating namespaces to be deleted...
Feb  6 14:50:38.607: INFO: 
Logging pods the kubelet thinks are on node iruya-node before test
Feb  6 14:50:38.624: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Feb  6 14:50:38.624: INFO: 	Container weave ready: true, restart count 0
Feb  6 14:50:38.624: INFO: 	Container weave-npc ready: true, restart count 0
Feb  6 14:50:38.624: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container status recorded)
Feb  6 14:50:38.624: INFO: 	Container kube-proxy ready: true, restart count 0
Feb  6 14:50:38.624: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Feb  6 14:50:38.637: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container status recorded)
Feb  6 14:50:38.637: INFO: 	Container kube-controller-manager ready: true, restart count 20
Feb  6 14:50:38.637: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container status recorded)
Feb  6 14:50:38.637: INFO: 	Container kube-proxy ready: true, restart count 0
Feb  6 14:50:38.637: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container status recorded)
Feb  6 14:50:38.637: INFO: 	Container kube-apiserver ready: true, restart count 0
Feb  6 14:50:38.637: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container status recorded)
Feb  6 14:50:38.637: INFO: 	Container kube-scheduler ready: true, restart count 13
Feb  6 14:50:38.637: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Feb  6 14:50:38.637: INFO: 	Container coredns ready: true, restart count 0
Feb  6 14:50:38.637: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container status recorded)
Feb  6 14:50:38.637: INFO: 	Container etcd ready: true, restart count 0
Feb  6 14:50:38.637: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Feb  6 14:50:38.637: INFO: 	Container weave ready: true, restart count 0
Feb  6 14:50:38.637: INFO: 	Container weave-npc ready: true, restart count 0
Feb  6 14:50:38.637: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Feb  6 14:50:38.637: INFO: 	Container coredns ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: verifying the node has the label node iruya-node
STEP: verifying the node has the label node iruya-server-sfge57q7djm7
Feb  6 14:50:38.803: INFO: Pod coredns-5c98db65d4-bm4gs requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Feb  6 14:50:38.803: INFO: Pod coredns-5c98db65d4-xx8w8 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Feb  6 14:50:38.803: INFO: Pod etcd-iruya-server-sfge57q7djm7 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7
Feb  6 14:50:38.803: INFO: Pod kube-apiserver-iruya-server-sfge57q7djm7 requesting resource cpu=250m on Node iruya-server-sfge57q7djm7
Feb  6 14:50:38.803: INFO: Pod kube-controller-manager-iruya-server-sfge57q7djm7 requesting resource cpu=200m on Node iruya-server-sfge57q7djm7
Feb  6 14:50:38.803: INFO: Pod kube-proxy-58v95 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7
Feb  6 14:50:38.803: INFO: Pod kube-proxy-976zl requesting resource cpu=0m on Node iruya-node
Feb  6 14:50:38.803: INFO: Pod kube-scheduler-iruya-server-sfge57q7djm7 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Feb  6 14:50:38.803: INFO: Pod weave-net-bzl4d requesting resource cpu=20m on Node iruya-server-sfge57q7djm7
Feb  6 14:50:38.803: INFO: Pod weave-net-rlp57 requesting resource cpu=20m on Node iruya-node
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-14ffe37f-abb2-457e-abb6-413948f28c4c.15f0d7b45f1f1e64], Reason = [Scheduled], Message = [Successfully assigned sched-pred-611/filler-pod-14ffe37f-abb2-457e-abb6-413948f28c4c to iruya-node]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-14ffe37f-abb2-457e-abb6-413948f28c4c.15f0d7b5bb90a81b], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-14ffe37f-abb2-457e-abb6-413948f28c4c.15f0d7b6b6c35b5e], Reason = [Created], Message = [Created container filler-pod-14ffe37f-abb2-457e-abb6-413948f28c4c]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-14ffe37f-abb2-457e-abb6-413948f28c4c.15f0d7b6df1a4d30], Reason = [Started], Message = [Started container filler-pod-14ffe37f-abb2-457e-abb6-413948f28c4c]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-6ed70f4d-f685-4ad8-aa98-11f0a609606c.15f0d7b45f0551ba], Reason = [Scheduled], Message = [Successfully assigned sched-pred-611/filler-pod-6ed70f4d-f685-4ad8-aa98-11f0a609606c to iruya-server-sfge57q7djm7]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-6ed70f4d-f685-4ad8-aa98-11f0a609606c.15f0d7b58925540b], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-6ed70f4d-f685-4ad8-aa98-11f0a609606c.15f0d7b65bd6736c], Reason = [Created], Message = [Created container filler-pod-6ed70f4d-f685-4ad8-aa98-11f0a609606c]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-6ed70f4d-f685-4ad8-aa98-11f0a609606c.15f0d7b67b184072], Reason = [Started], Message = [Started container filler-pod-6ed70f4d-f685-4ad8-aa98-11f0a609606c]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15f0d7b72e0200c5], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.]
STEP: removing the label node off the node iruya-node
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node iruya-server-sfge57q7djm7
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:50:52.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-611" for this suite.
Feb  6 14:50:59.790: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:51:00.387: INFO: namespace sched-pred-611 deletion completed in 8.329224425s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:21.980 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
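[Editor's note] The test above computes each node's remaining allocatable CPU, launches "filler" pods to consume it, then verifies that one more pod requesting CPU the cluster no longer has is rejected with the FailedScheduling event shown ("0/2 nodes are available: 2 Insufficient cpu."). A minimal sketch of that final pod, assuming the k8s.io/api and k8s.io/apimachinery modules; the 600m figure is illustrative, not taken from this run:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A pod whose CPU request exceeds every node's remaining allocatable CPU
	// stays Pending, and the scheduler records a FailedScheduling event for it.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "additional-pod"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "additional-pod",
				Image: "k8s.gcr.io/pause:3.1",
				Resources: corev1.ResourceRequirements{
					// Illustrative value: anything above what the filler pods
					// left free on both nodes triggers the rejection.
					Requests: corev1.ResourceList{
						corev1.ResourceCPU: resource.MustParse("600m"),
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}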
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:51:00.387: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-3175
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb  6 14:51:00.470: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb  6 14:51:44.739: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:pod-network-test-3175 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  6 14:51:44.739: INFO: >>> kubeConfig: /root/.kube/config
I0206 14:51:44.874265       8 log.go:172] (0xc000826a50) (0xc001a177c0) Create stream
I0206 14:51:44.874368       8 log.go:172] (0xc000826a50) (0xc001a177c0) Stream added, broadcasting: 1
I0206 14:51:44.891995       8 log.go:172] (0xc000826a50) Reply frame received for 1
I0206 14:51:44.892080       8 log.go:172] (0xc000826a50) (0xc002d82aa0) Create stream
I0206 14:51:44.892108       8 log.go:172] (0xc000826a50) (0xc002d82aa0) Stream added, broadcasting: 3
I0206 14:51:44.895827       8 log.go:172] (0xc000826a50) Reply frame received for 3
I0206 14:51:44.895904       8 log.go:172] (0xc000826a50) (0xc001a17860) Create stream
I0206 14:51:44.895940       8 log.go:172] (0xc000826a50) (0xc001a17860) Stream added, broadcasting: 5
I0206 14:51:44.897511       8 log.go:172] (0xc000826a50) Reply frame received for 5
I0206 14:51:45.082454       8 log.go:172] (0xc000826a50) Data frame received for 3
I0206 14:51:45.082541       8 log.go:172] (0xc002d82aa0) (3) Data frame handling
I0206 14:51:45.082587       8 log.go:172] (0xc002d82aa0) (3) Data frame sent
I0206 14:51:45.289014       8 log.go:172] (0xc000826a50) (0xc002d82aa0) Stream removed, broadcasting: 3
I0206 14:51:45.289268       8 log.go:172] (0xc000826a50) (0xc001a17860) Stream removed, broadcasting: 5
I0206 14:51:45.289322       8 log.go:172] (0xc000826a50) Data frame received for 1
I0206 14:51:45.289355       8 log.go:172] (0xc001a177c0) (1) Data frame handling
I0206 14:51:45.289380       8 log.go:172] (0xc001a177c0) (1) Data frame sent
I0206 14:51:45.289403       8 log.go:172] (0xc000826a50) (0xc001a177c0) Stream removed, broadcasting: 1
I0206 14:51:45.289417       8 log.go:172] (0xc000826a50) Go away received
I0206 14:51:45.290096       8 log.go:172] (0xc000826a50) (0xc001a177c0) Stream removed, broadcasting: 1
I0206 14:51:45.290177       8 log.go:172] (0xc000826a50) (0xc002d82aa0) Stream removed, broadcasting: 3
I0206 14:51:45.290185       8 log.go:172] (0xc000826a50) (0xc001a17860) Stream removed, broadcasting: 5
Feb  6 14:51:45.290: INFO: Waiting for endpoints: map[]
Feb  6 14:51:45.319: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.44.0.1&port=8081&tries=1'] Namespace:pod-network-test-3175 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  6 14:51:45.319: INFO: >>> kubeConfig: /root/.kube/config
I0206 14:51:45.382817       8 log.go:172] (0xc001742a50) (0xc002a0e6e0) Create stream
I0206 14:51:45.382852       8 log.go:172] (0xc001742a50) (0xc002a0e6e0) Stream added, broadcasting: 1
I0206 14:51:45.390617       8 log.go:172] (0xc001742a50) Reply frame received for 1
I0206 14:51:45.390644       8 log.go:172] (0xc001742a50) (0xc002d82be0) Create stream
I0206 14:51:45.390654       8 log.go:172] (0xc001742a50) (0xc002d82be0) Stream added, broadcasting: 3
I0206 14:51:45.391847       8 log.go:172] (0xc001742a50) Reply frame received for 3
I0206 14:51:45.391866       8 log.go:172] (0xc001742a50) (0xc002d82dc0) Create stream
I0206 14:51:45.391874       8 log.go:172] (0xc001742a50) (0xc002d82dc0) Stream added, broadcasting: 5
I0206 14:51:45.392940       8 log.go:172] (0xc001742a50) Reply frame received for 5
I0206 14:51:45.624551       8 log.go:172] (0xc001742a50) Data frame received for 3
I0206 14:51:45.624670       8 log.go:172] (0xc002d82be0) (3) Data frame handling
I0206 14:51:45.624688       8 log.go:172] (0xc002d82be0) (3) Data frame sent
I0206 14:51:45.768926       8 log.go:172] (0xc001742a50) Data frame received for 1
I0206 14:51:45.769011       8 log.go:172] (0xc002a0e6e0) (1) Data frame handling
I0206 14:51:45.769031       8 log.go:172] (0xc002a0e6e0) (1) Data frame sent
I0206 14:51:45.770271       8 log.go:172] (0xc001742a50) (0xc002d82dc0) Stream removed, broadcasting: 5
I0206 14:51:45.770409       8 log.go:172] (0xc001742a50) (0xc002a0e6e0) Stream removed, broadcasting: 1
I0206 14:51:45.770508       8 log.go:172] (0xc001742a50) (0xc002d82be0) Stream removed, broadcasting: 3
I0206 14:51:45.770572       8 log.go:172] (0xc001742a50) Go away received
I0206 14:51:45.770657       8 log.go:172] (0xc001742a50) (0xc002a0e6e0) Stream removed, broadcasting: 1
I0206 14:51:45.770708       8 log.go:172] (0xc001742a50) (0xc002d82be0) Stream removed, broadcasting: 3
I0206 14:51:45.770718       8 log.go:172] (0xc001742a50) (0xc002d82dc0) Stream removed, broadcasting: 5
Feb  6 14:51:45.770: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:51:45.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-3175" for this suite.
Feb  6 14:52:11.813: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:52:11.971: INFO: namespace pod-network-test-3175 deletion completed in 26.190808773s

• [SLOW TEST:71.583 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
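[Editor's note] The curl commands in this test hit the test webserver's /dial endpoint, which relays a UDP "hostName" probe to each target pod and reports the answers back as JSON. A sketch of the same probe in Go; the URL and query parameters are copied from the log, while the response struct models the `responses` field the e2e probe parses (an assumption about the webserver's JSON shape):

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
)

// dialResponse models the JSON the test webserver is assumed to return
// from /dial, e.g. {"responses":["<hostname-of-target>"]}.
type dialResponse struct {
	Responses []string `json:"responses"`
}

// probe asks the proxy pod at proxyIP to dial targetIP over UDP and
// return the hostnames it heard back.
func probe(proxyIP, targetIP string) ([]string, error) {
	q := url.Values{
		"request":  {"hostName"},
		"protocol": {"udp"},
		"host":     {targetIP},
		"port":     {"8081"},
		"tries":    {"1"},
	}
	resp, err := http.Get(fmt.Sprintf("http://%s:8080/dial?%s", proxyIP, q.Encode()))
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	var d dialResponse
	if err := json.NewDecoder(resp.Body).Decode(&d); err != nil {
		return nil, err
	}
	return d.Responses, nil
}

func main() {
	// IPs copied from the run above; this only works from inside the cluster.
	if names, err := probe("10.44.0.2", "10.32.0.4"); err == nil {
		fmt.Println(names)
	}
}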
SSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:52:11.971: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Feb  6 14:52:14.115: INFO: Pod name wrapped-volume-race-f6a76a08-7cad-448c-9a13-44d4db8174fc: Found 0 pods out of 5
Feb  6 14:52:19.231: INFO: Pod name wrapped-volume-race-f6a76a08-7cad-448c-9a13-44d4db8174fc: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-f6a76a08-7cad-448c-9a13-44d4db8174fc in namespace emptydir-wrapper-322, will wait for the garbage collector to delete the pods
Feb  6 14:52:53.370: INFO: Deleting ReplicationController wrapped-volume-race-f6a76a08-7cad-448c-9a13-44d4db8174fc took: 17.008311ms
Feb  6 14:52:53.771: INFO: Terminating ReplicationController wrapped-volume-race-f6a76a08-7cad-448c-9a13-44d4db8174fc pods took: 400.603431ms
STEP: Creating RC which spawns configmap-volume pods
Feb  6 14:53:47.208: INFO: Pod name wrapped-volume-race-bf912c87-f1b3-4b29-9eb1-4580925b3c94: Found 0 pods out of 5
Feb  6 14:53:52.223: INFO: Pod name wrapped-volume-race-bf912c87-f1b3-4b29-9eb1-4580925b3c94: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-bf912c87-f1b3-4b29-9eb1-4580925b3c94 in namespace emptydir-wrapper-322, will wait for the garbage collector to delete the pods
Feb  6 14:54:20.367: INFO: Deleting ReplicationController wrapped-volume-race-bf912c87-f1b3-4b29-9eb1-4580925b3c94 took: 21.778764ms
Feb  6 14:54:20.768: INFO: Terminating ReplicationController wrapped-volume-race-bf912c87-f1b3-4b29-9eb1-4580925b3c94 pods took: 400.76645ms
STEP: Creating RC which spawns configmap-volume pods
Feb  6 14:55:07.661: INFO: Pod name wrapped-volume-race-7e75edfa-623c-406a-9a85-1216a496e2c3: Found 0 pods out of 5
Feb  6 14:55:12.683: INFO: Pod name wrapped-volume-race-7e75edfa-623c-406a-9a85-1216a496e2c3: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-7e75edfa-623c-406a-9a85-1216a496e2c3 in namespace emptydir-wrapper-322, will wait for the garbage collector to delete the pods
Feb  6 14:55:40.934: INFO: Deleting ReplicationController wrapped-volume-race-7e75edfa-623c-406a-9a85-1216a496e2c3 took: 11.371046ms
Feb  6 14:55:41.335: INFO: Terminating ReplicationController wrapped-volume-race-7e75edfa-623c-406a-9a85-1216a496e2c3 pods took: 400.69781ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:56:28.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-322" for this suite.
Feb  6 14:56:40.640: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:56:40.770: INFO: namespace emptydir-wrapper-322 deletion completed in 12.150705659s

• [SLOW TEST:268.799 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
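[Editor's note] The race exercised here: each configmap volume is materialized inside an emptyDir "wrapper", so pods that mount many configmaps while a ReplicationController churns them can collide on volume setup and teardown. A rough sketch of the pod template such an RC might stamp out, assuming the k8s.io/api and k8s.io/apimachinery modules; the volume names and mount paths are mine, only the count of 50 configmaps and the label come from the log:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	var volumes []corev1.Volume
	var mounts []corev1.VolumeMount
	for i := 0; i < 50; i++ { // one volume per pre-created configmap
		name := fmt.Sprintf("racey-configmap-%d", i) // illustrative name
		volumes = append(volumes, corev1.Volume{
			Name: name,
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: name},
				},
			},
		})
		mounts = append(mounts, corev1.VolumeMount{
			Name:      name,
			MountPath: "/etc/config-" + name,
		})
	}
	tmpl := corev1.PodTemplateSpec{
		ObjectMeta: metav1.ObjectMeta{
			Labels: map[string]string{"name": "wrapped-volume-race"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:         "test-container",
				Image:        "k8s.gcr.io/pause:3.1",
				VolumeMounts: mounts,
			}},
			Volumes: volumes,
		},
	}
	out, _ := json.Marshal(tmpl)
	fmt.Printf("pod template with %d configmap volumes (%d bytes)\n", len(volumes), len(out))
}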
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:56:40.772: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb  6 14:56:40.985: INFO: Waiting up to 5m0s for pod "downward-api-51c5c131-84f0-4b01-8de7-5d46b5ef94a3" in namespace "downward-api-573" to be "success or failure"
Feb  6 14:56:40.991: INFO: Pod "downward-api-51c5c131-84f0-4b01-8de7-5d46b5ef94a3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.123091ms
Feb  6 14:56:43.002: INFO: Pod "downward-api-51c5c131-84f0-4b01-8de7-5d46b5ef94a3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017040817s
Feb  6 14:56:45.014: INFO: Pod "downward-api-51c5c131-84f0-4b01-8de7-5d46b5ef94a3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028611376s
Feb  6 14:56:47.019: INFO: Pod "downward-api-51c5c131-84f0-4b01-8de7-5d46b5ef94a3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.034064981s
Feb  6 14:56:49.030: INFO: Pod "downward-api-51c5c131-84f0-4b01-8de7-5d46b5ef94a3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.044877254s
Feb  6 14:56:51.044: INFO: Pod "downward-api-51c5c131-84f0-4b01-8de7-5d46b5ef94a3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.05893461s
STEP: Saw pod success
Feb  6 14:56:51.044: INFO: Pod "downward-api-51c5c131-84f0-4b01-8de7-5d46b5ef94a3" satisfied condition "success or failure"
Feb  6 14:56:51.050: INFO: Trying to get logs from node iruya-node pod downward-api-51c5c131-84f0-4b01-8de7-5d46b5ef94a3 container dapi-container: 
STEP: delete the pod
Feb  6 14:56:51.189: INFO: Waiting for pod downward-api-51c5c131-84f0-4b01-8de7-5d46b5ef94a3 to disappear
Feb  6 14:56:51.211: INFO: Pod downward-api-51c5c131-84f0-4b01-8de7-5d46b5ef94a3 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:56:51.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-573" for this suite.
Feb  6 14:56:57.244: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:56:57.402: INFO: namespace downward-api-573 deletion completed in 6.184782004s

• [SLOW TEST:16.630 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
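[Editor's note] The "downward api env vars" pod in this test exposes the node's IP to the container through a fieldRef on status.hostIP; the test image prints its environment and the framework checks the logs. A minimal sketch of such a pod, assuming the k8s.io/api and k8s.io/apimachinery modules; the busybox command stands in for the real test image:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-hostip"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox:1.29",
				Command: []string{"sh", "-c", "env"}, // prints HOST_IP with the rest of the env
				Env: []corev1.EnvVar{{
					Name: "HOST_IP",
					ValueFrom: &corev1.EnvVarSource{
						// The kubelet resolves this to the node's IP when the
						// container starts.
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.hostIP"},
					},
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}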
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:56:57.404: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb  6 14:56:57.522: INFO: Waiting up to 5m0s for pod "downward-api-b5295800-8418-48ad-91fd-60e0797df9b6" in namespace "downward-api-842" to be "success or failure"
Feb  6 14:56:57.531: INFO: Pod "downward-api-b5295800-8418-48ad-91fd-60e0797df9b6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.9272ms
Feb  6 14:56:59.542: INFO: Pod "downward-api-b5295800-8418-48ad-91fd-60e0797df9b6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019746778s
Feb  6 14:57:01.556: INFO: Pod "downward-api-b5295800-8418-48ad-91fd-60e0797df9b6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033628481s
Feb  6 14:57:03.566: INFO: Pod "downward-api-b5295800-8418-48ad-91fd-60e0797df9b6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043686781s
Feb  6 14:57:05.575: INFO: Pod "downward-api-b5295800-8418-48ad-91fd-60e0797df9b6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.052248575s
STEP: Saw pod success
Feb  6 14:57:05.575: INFO: Pod "downward-api-b5295800-8418-48ad-91fd-60e0797df9b6" satisfied condition "success or failure"
Feb  6 14:57:05.579: INFO: Trying to get logs from node iruya-node pod downward-api-b5295800-8418-48ad-91fd-60e0797df9b6 container dapi-container: 
STEP: delete the pod
Feb  6 14:57:05.639: INFO: Waiting for pod downward-api-b5295800-8418-48ad-91fd-60e0797df9b6 to disappear
Feb  6 14:57:05.651: INFO: Pod downward-api-b5295800-8418-48ad-91fd-60e0797df9b6 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:57:05.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-842" for this suite.
Feb  6 14:57:11.777: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:57:11.966: INFO: namespace downward-api-842 deletion completed in 6.248627172s

• [SLOW TEST:14.563 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
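[Editor's note] This variant uses resourceFieldRef instead of fieldRef: the container's own limits and requests are injected as environment variables. A minimal sketch, assuming the k8s.io/api and k8s.io/apimachinery modules; names and quantities are illustrative. When ContainerName is omitted, the selector defaults to the container the env var is declared on:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-resources"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox:1.29",
				Command: []string{"sh", "-c", "env"},
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{
						corev1.ResourceCPU:    resource.MustParse("1250m"),
						corev1.ResourceMemory: resource.MustParse("64Mi"),
					},
					Requests: corev1.ResourceList{
						corev1.ResourceCPU:    resource.MustParse("250m"),
						corev1.ResourceMemory: resource.MustParse("32Mi"),
					},
				},
				Env: []corev1.EnvVar{
					{Name: "CPU_LIMIT", ValueFrom: &corev1.EnvVarSource{
						ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.cpu"},
					}},
					{Name: "MEMORY_REQUEST", ValueFrom: &corev1.EnvVarSource{
						ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "requests.memory"},
					}},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}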
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:57:11.967: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  6 14:57:12.089: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f5794aa0-9f35-4a2c-968c-2837701646d5" in namespace "projected-9261" to be "success or failure"
Feb  6 14:57:12.568: INFO: Pod "downwardapi-volume-f5794aa0-9f35-4a2c-968c-2837701646d5": Phase="Pending", Reason="", readiness=false. Elapsed: 478.573424ms
Feb  6 14:57:14.583: INFO: Pod "downwardapi-volume-f5794aa0-9f35-4a2c-968c-2837701646d5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.493380863s
Feb  6 14:57:16.596: INFO: Pod "downwardapi-volume-f5794aa0-9f35-4a2c-968c-2837701646d5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.506709607s
Feb  6 14:57:18.642: INFO: Pod "downwardapi-volume-f5794aa0-9f35-4a2c-968c-2837701646d5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.552948192s
Feb  6 14:57:20.655: INFO: Pod "downwardapi-volume-f5794aa0-9f35-4a2c-968c-2837701646d5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.565294166s
STEP: Saw pod success
Feb  6 14:57:20.655: INFO: Pod "downwardapi-volume-f5794aa0-9f35-4a2c-968c-2837701646d5" satisfied condition "success or failure"
Feb  6 14:57:20.663: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-f5794aa0-9f35-4a2c-968c-2837701646d5 container client-container: 
STEP: delete the pod
Feb  6 14:57:20.787: INFO: Waiting for pod downwardapi-volume-f5794aa0-9f35-4a2c-968c-2837701646d5 to disappear
Feb  6 14:57:20.801: INFO: Pod downwardapi-volume-f5794aa0-9f35-4a2c-968c-2837701646d5 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:57:20.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9261" for this suite.
Feb  6 14:57:26.835: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:57:26.961: INFO: namespace projected-9261 deletion completed in 6.152133469s

• [SLOW TEST:14.994 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
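[Editor's note] The point of this test is the fallback: the container sets no CPU limit, so the downward API file projected from limits.cpu reports the node's allocatable CPU instead. A minimal sketch of the projected volume, assuming the k8s.io/api and k8s.io/apimachinery modules; the container name matches the log, the rest is illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-default-cpu"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox:1.29",
				Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
				// No Resources set: limits.cpu below falls back to the
				// node's allocatable CPU.
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "cpu_limit",
									// ContainerName is required for volume-based
									// resource selectors.
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "limits.cpu",
									},
								}},
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}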
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:57:26.961: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  6 14:57:27.047: INFO: Waiting up to 5m0s for pod "downwardapi-volume-95101a3b-a23c-43ef-8eb3-b07be84a6500" in namespace "downward-api-9530" to be "success or failure"
Feb  6 14:57:27.057: INFO: Pod "downwardapi-volume-95101a3b-a23c-43ef-8eb3-b07be84a6500": Phase="Pending", Reason="", readiness=false. Elapsed: 10.257159ms
Feb  6 14:57:29.066: INFO: Pod "downwardapi-volume-95101a3b-a23c-43ef-8eb3-b07be84a6500": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019601985s
Feb  6 14:57:31.078: INFO: Pod "downwardapi-volume-95101a3b-a23c-43ef-8eb3-b07be84a6500": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031335923s
Feb  6 14:57:33.088: INFO: Pod "downwardapi-volume-95101a3b-a23c-43ef-8eb3-b07be84a6500": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041538571s
Feb  6 14:57:35.104: INFO: Pod "downwardapi-volume-95101a3b-a23c-43ef-8eb3-b07be84a6500": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.057237912s
STEP: Saw pod success
Feb  6 14:57:35.104: INFO: Pod "downwardapi-volume-95101a3b-a23c-43ef-8eb3-b07be84a6500" satisfied condition "success or failure"
Feb  6 14:57:35.111: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-95101a3b-a23c-43ef-8eb3-b07be84a6500 container client-container: 
STEP: delete the pod
Feb  6 14:57:35.230: INFO: Waiting for pod downwardapi-volume-95101a3b-a23c-43ef-8eb3-b07be84a6500 to disappear
Feb  6 14:57:35.241: INFO: Pod downwardapi-volume-95101a3b-a23c-43ef-8eb3-b07be84a6500 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:57:35.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9530" for this suite.
Feb  6 14:57:41.440: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:57:41.603: INFO: namespace downward-api-9530 deletion completed in 6.355411373s

• [SLOW TEST:14.642 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
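[Editor's note] Same mechanism as above, but through a plain downwardAPI volume and with an explicit CPU limit on the container, so the file reflects the limit itself (scaled by the item's divisor, which defaults to 1 whole core with values rounded up). A minimal sketch, assuming the k8s.io/api and k8s.io/apimachinery modules; quantities are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-cpu-limit"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox:1.29",
				Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{
						corev1.ResourceCPU: resource.MustParse("1250m"),
					},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "cpu_limit",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.cpu",
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}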
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:57:41.604: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Feb  6 14:57:41.712: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-44,SelfLink:/api/v1/namespaces/watch-44/configmaps/e2e-watch-test-configmap-a,UID:db3ce37c-7686-4ab6-8312-fe9a745caa54,ResourceVersion:23334027,Generation:0,CreationTimestamp:2020-02-06 14:57:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb  6 14:57:41.713: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-44,SelfLink:/api/v1/namespaces/watch-44/configmaps/e2e-watch-test-configmap-a,UID:db3ce37c-7686-4ab6-8312-fe9a745caa54,ResourceVersion:23334027,Generation:0,CreationTimestamp:2020-02-06 14:57:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Feb  6 14:57:51.736: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-44,SelfLink:/api/v1/namespaces/watch-44/configmaps/e2e-watch-test-configmap-a,UID:db3ce37c-7686-4ab6-8312-fe9a745caa54,ResourceVersion:23334041,Generation:0,CreationTimestamp:2020-02-06 14:57:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Feb  6 14:57:51.736: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-44,SelfLink:/api/v1/namespaces/watch-44/configmaps/e2e-watch-test-configmap-a,UID:db3ce37c-7686-4ab6-8312-fe9a745caa54,ResourceVersion:23334041,Generation:0,CreationTimestamp:2020-02-06 14:57:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Feb  6 14:58:02.360: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-44,SelfLink:/api/v1/namespaces/watch-44/configmaps/e2e-watch-test-configmap-a,UID:db3ce37c-7686-4ab6-8312-fe9a745caa54,ResourceVersion:23334056,Generation:0,CreationTimestamp:2020-02-06 14:57:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb  6 14:58:02.360: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-44,SelfLink:/api/v1/namespaces/watch-44/configmaps/e2e-watch-test-configmap-a,UID:db3ce37c-7686-4ab6-8312-fe9a745caa54,ResourceVersion:23334056,Generation:0,CreationTimestamp:2020-02-06 14:57:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Feb  6 14:58:12.374: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-44,SelfLink:/api/v1/namespaces/watch-44/configmaps/e2e-watch-test-configmap-a,UID:db3ce37c-7686-4ab6-8312-fe9a745caa54,ResourceVersion:23334071,Generation:0,CreationTimestamp:2020-02-06 14:57:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb  6 14:58:12.374: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-44,SelfLink:/api/v1/namespaces/watch-44/configmaps/e2e-watch-test-configmap-a,UID:db3ce37c-7686-4ab6-8312-fe9a745caa54,ResourceVersion:23334071,Generation:0,CreationTimestamp:2020-02-06 14:57:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Feb  6 14:58:22.413: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-44,SelfLink:/api/v1/namespaces/watch-44/configmaps/e2e-watch-test-configmap-b,UID:2cb244b4-1689-4089-aa6d-9749c1e49d5d,ResourceVersion:23334085,Generation:0,CreationTimestamp:2020-02-06 14:58:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb  6 14:58:22.413: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-44,SelfLink:/api/v1/namespaces/watch-44/configmaps/e2e-watch-test-configmap-b,UID:2cb244b4-1689-4089-aa6d-9749c1e49d5d,ResourceVersion:23334085,Generation:0,CreationTimestamp:2020-02-06 14:58:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Feb  6 14:58:32.429: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-44,SelfLink:/api/v1/namespaces/watch-44/configmaps/e2e-watch-test-configmap-b,UID:2cb244b4-1689-4089-aa6d-9749c1e49d5d,ResourceVersion:23334099,Generation:0,CreationTimestamp:2020-02-06 14:58:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb  6 14:58:32.429: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-44,SelfLink:/api/v1/namespaces/watch-44/configmaps/e2e-watch-test-configmap-b,UID:2cb244b4-1689-4089-aa6d-9749c1e49d5d,ResourceVersion:23334099,Generation:0,CreationTimestamp:2020-02-06 14:58:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 14:58:42.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-44" for this suite.
Feb  6 14:58:48.474: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 14:58:48.583: INFO: namespace watch-44 deletion completed in 6.139253873s

• [SLOW TEST:66.979 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
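[Editor's note] The "Got : ADDED/MODIFIED/DELETED" lines above are watch events delivered to label-selected watches. A minimal sketch of one such watcher using client-go, assuming a client-go version contemporaneous with this run (v1.15-era signatures without context arguments); the namespace, kubeconfig path, and label selector are taken from the log:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)

	// Watch only configmaps carrying label A; creates, updates, and deletes
	// on matching objects arrive as ADDED/MODIFIED/DELETED events.
	w, err := cs.CoreV1().ConfigMaps("watch-44").Watch(metav1.ListOptions{
		LabelSelector: "watch-this-configmap=multiple-watchers-A",
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	for ev := range w.ResultChan() {
		if cm, ok := ev.Object.(*corev1.ConfigMap); ok {
			fmt.Printf("Got : %s %s (rv %s)\n", ev.Type, cm.Name, cm.ResourceVersion)
		}
	}
}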
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 14:58:48.584: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-57370807-1d2f-4adb-91fa-af19a9a85444
STEP: Creating secret with name s-test-opt-upd-2806dafb-59ff-4259-a3b0-de5593281569
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-57370807-1d2f-4adb-91fa-af19a9a85444
STEP: Updating secret s-test-opt-upd-2806dafb-59ff-4259-a3b0-de5593281569
STEP: Creating secret with name s-test-opt-create-43d44745-6b04-491e-a6d4-746c84379e3b
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 15:00:29.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1809" for this suite.
Feb  6 15:00:51.780: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 15:00:51.957: INFO: namespace projected-1809 deletion completed in 22.257599122s

• [SLOW TEST:123.374 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
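[Editor's note] "Optional" is what lets this test delete one secret and create another after the pod is running: an optional projected source that is missing does not block the pod, and the kubelet reconciles the volume contents as sources appear and disappear. A minimal sketch of the volume, assuming the k8s.io/api and k8s.io/apimachinery modules; secret names are shortened from the log (UUID suffixes dropped):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	optional := true
	vol := corev1.Volume{
		Name: "projected-secrets",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					{Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "s-test-opt-del"},
						Optional:             &optional, // pod keeps running after this secret is deleted
					}},
					{Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "s-test-opt-create"},
						Optional:             &optional, // may not exist yet; shows up in the volume once created
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}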
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 15:00:51.958: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb  6 15:01:04.146: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 15:01:04.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4393" for this suite.
Feb  6 15:01:10.390: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 15:01:10.547: INFO: namespace container-runtime-4393 deletion completed in 6.359698018s

• [SLOW TEST:18.589 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
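[Editor's note] FallbackToLogsOnError only substitutes container logs for the termination message when the container fails; here the container succeeds and writes nothing to the message file, so the message is empty (the `Expected: &{}` in the log). A minimal sketch of such a container, assuming the k8s.io/api and k8s.io/apimachinery modules; the shell command stands in for the real test image:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message-empty"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "termination-message-container",
				Image: "busybox:1.29",
				// Succeeds and writes neither to the message file nor anything
				// that would matter: the policy only reads logs on error.
				Command:                  []string{"/bin/sh", "-c", "exit 0"},
				TerminationMessagePath:   "/dev/termination-log",
				TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}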
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 15:01:10.547: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Feb  6 15:01:10.657: INFO: Waiting up to 5m0s for pod "pod-aad35d31-c423-46d4-a7d0-97f4d0c797ca" in namespace "emptydir-5800" to be "success or failure"
Feb  6 15:01:10.661: INFO: Pod "pod-aad35d31-c423-46d4-a7d0-97f4d0c797ca": Phase="Pending", Reason="", readiness=false. Elapsed: 3.750725ms
Feb  6 15:01:12.667: INFO: Pod "pod-aad35d31-c423-46d4-a7d0-97f4d0c797ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010434395s
Feb  6 15:01:14.707: INFO: Pod "pod-aad35d31-c423-46d4-a7d0-97f4d0c797ca": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049755785s
Feb  6 15:01:16.714: INFO: Pod "pod-aad35d31-c423-46d4-a7d0-97f4d0c797ca": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056901994s
Feb  6 15:01:18.729: INFO: Pod "pod-aad35d31-c423-46d4-a7d0-97f4d0c797ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.071493776s
STEP: Saw pod success
Feb  6 15:01:18.729: INFO: Pod "pod-aad35d31-c423-46d4-a7d0-97f4d0c797ca" satisfied condition "success or failure"
Feb  6 15:01:18.735: INFO: Trying to get logs from node iruya-node pod pod-aad35d31-c423-46d4-a7d0-97f4d0c797ca container test-container: 
STEP: delete the pod
Feb  6 15:01:19.073: INFO: Waiting for pod pod-aad35d31-c423-46d4-a7d0-97f4d0c797ca to disappear
Feb  6 15:01:19.094: INFO: Pod pod-aad35d31-c423-46d4-a7d0-97f4d0c797ca no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 15:01:19.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5800" for this suite.
Feb  6 15:01:25.135: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 15:01:25.251: INFO: namespace emptydir-5800 deletion completed in 6.152037515s

• [SLOW TEST:14.704 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
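[Editor's note] "(non-root,0666,default)" decodes as: run as a non-root user, create a file with mode 0666, on an emptyDir backed by the default (disk) medium. A rough sketch of an equivalent pod, assuming the k8s.io/api and k8s.io/apimachinery modules; the real test drives a dedicated mounttest image, approximated here with busybox, and the UID is illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid := int64(1001) // illustrative non-root UID
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0666"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox:1.29",
				// Create a 0666 file and read its permissions back, roughly
				// what the mounttest image verifies.
				Command: []string{"sh", "-c",
					"touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				// Empty EmptyDirVolumeSource selects the default medium.
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}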
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 15:01:25.251: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Feb  6 15:01:25.297: INFO: PodSpec: initContainers in spec.initContainers
Feb  6 15:02:26.323: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-237ae217-8eea-473c-9273-d88ddf8e9357", GenerateName:"", Namespace:"init-container-5487", SelfLink:"/api/v1/namespaces/init-container-5487/pods/pod-init-237ae217-8eea-473c-9273-d88ddf8e9357", UID:"6b02bf44-7062-4acb-a7f7-cf3e308c689a", ResourceVersion:"23334531", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63716598085, loc:(*time.Location)(0x7ea48a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"297527576"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-8bh26", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002976000), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-8bh26", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-8bh26", ReadOnly:true, 
MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-8bh26", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002d5c088), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-node", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001fd4060), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002d5c120)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002d5c140)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002d5c148), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002d5c14c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716598085, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, 
v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716598085, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716598085, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716598085, loc:(*time.Location)(0x7ea48a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.3.65", PodIP:"10.44.0.1", StartTime:(*v1.Time)(0xc002a64080), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00241ce70)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00241cee0)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://5df6ea88c83c8c65381848b500757faf7528303abc69e3ecdbcccc0e11498d86"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002a640c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002a640a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 15:02:26.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-5487" for this suite.
Feb  6 15:02:48.436: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 15:02:48.561: INFO: namespace init-container-5487 deletion completed in 22.213769581s

• [SLOW TEST:83.310 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
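[Editor's note] The pod dump above shows the mechanism: init1 runs /bin/false and keeps failing (RestartCount:3 and climbing, since RestartPolicy is Always), so init2 stays Waiting and the app container run1 is never started ("containers with incomplete status: [init1 init2]"). A minimal sketch of that pod, assuming the k8s.io/api and k8s.io/apimachinery modules, with the images and commands taken from the dump:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "pod-init",
			Labels: map[string]string{"name": "foo"},
		},
		Spec: corev1.PodSpec{
			// With Always, the kubelet retries the failing init container
			// indefinitely instead of marking the pod Failed.
			RestartPolicy: corev1.RestartPolicyAlways,
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "busybox:1.29", Command: []string{"/bin/false"}}, // always fails
				{Name: "init2", Image: "busybox:1.29", Command: []string{"/bin/true"}},  // never reached
			},
			Containers: []corev1.Container{{
				Name:  "run1",
				Image: "k8s.gcr.io/pause:3.1", // stays Waiting: the init phase never completes
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}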
S
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 15:02:48.562: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-projected-all-test-volume-acbc2e8c-6d1b-4685-9e34-05d86adcf7fd
STEP: Creating secret with name secret-projected-all-test-volume-8e74d3d2-0737-47f5-a13b-47a953de0ff0
STEP: Creating a pod to test Check all projections for projected volume plugin
Feb  6 15:02:48.718: INFO: Waiting up to 5m0s for pod "projected-volume-bf55d102-eea1-402c-b3a9-b25789f00204" in namespace "projected-3095" to be "success or failure"
Feb  6 15:02:48.722: INFO: Pod "projected-volume-bf55d102-eea1-402c-b3a9-b25789f00204": Phase="Pending", Reason="", readiness=false. Elapsed: 4.439395ms
Feb  6 15:02:50.733: INFO: Pod "projected-volume-bf55d102-eea1-402c-b3a9-b25789f00204": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015264256s
Feb  6 15:02:52.750: INFO: Pod "projected-volume-bf55d102-eea1-402c-b3a9-b25789f00204": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031862173s
Feb  6 15:02:54.763: INFO: Pod "projected-volume-bf55d102-eea1-402c-b3a9-b25789f00204": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045227593s
Feb  6 15:02:56.770: INFO: Pod "projected-volume-bf55d102-eea1-402c-b3a9-b25789f00204": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.052415193s
STEP: Saw pod success
Feb  6 15:02:56.770: INFO: Pod "projected-volume-bf55d102-eea1-402c-b3a9-b25789f00204" satisfied condition "success or failure"
Feb  6 15:02:56.775: INFO: Trying to get logs from node iruya-node pod projected-volume-bf55d102-eea1-402c-b3a9-b25789f00204 container projected-all-volume-test: 
STEP: delete the pod
Feb  6 15:02:56.917: INFO: Waiting for pod projected-volume-bf55d102-eea1-402c-b3a9-b25789f00204 to disappear
Feb  6 15:02:56.925: INFO: Pod projected-volume-bf55d102-eea1-402c-b3a9-b25789f00204 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 15:02:56.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3095" for this suite.
Feb  6 15:03:02.957: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 15:03:03.101: INFO: namespace projected-3095 deletion completed in 6.168695835s

• [SLOW TEST:14.540 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
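The projected volume this test builds combines all three projection sources behind a single mount. A rough sketch of that volume in Go; the volume name, object names, and the downward-API item are hypothetical placeholders, not the generated names from the log above.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// One volume, three projection sources: ConfigMap, Secret, downward API.
	vol := corev1.Volume{
		Name: "all-in-one", // hypothetical
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "my-cm"},
					}},
					{Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "my-secret"},
					}},
					{DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					}},
				},
			},
		},
	}
	fmt.Printf("%+v\n", vol)
}
```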
SSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 15:03:03.102: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  6 15:03:03.222: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Feb  6 15:03:06.695: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 15:03:06.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-6496" for this suite.
Feb  6 15:03:18.962: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 15:03:19.155: INFO: namespace replication-controller-6496 deletion completed in 12.22472547s

• [SLOW TEST:16.054 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
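What happened above, step by step: a ResourceQuota caps the namespace at two pods, an RC requests three replicas, a ReplicaFailure condition surfaces on the RC once the third pod is rejected, and the condition clears after the scale-down. A sketch of the same flow with client-go; the namespace, labels, and image are placeholders, and it assumes a pre-1.18 client-go whose Create/Get calls take no context argument.

```go
package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ns := "default" // placeholder; the e2e run used a generated namespace

	// A quota that allows only two pods in the namespace.
	quota := &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "condition-test"},
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{corev1.ResourcePods: resource.MustParse("2")},
		},
	}
	if _, err := client.CoreV1().ResourceQuotas(ns).Create(quota); err != nil {
		panic(err)
	}

	// An RC asking for one more replica than the quota allows.
	replicas := int32(3)
	labels := map[string]string{"app": "quota-demo"}
	rc := &corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "condition-test"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &replicas,
			Selector: labels,
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{Containers: []corev1.Container{
					{Name: "pause", Image: "k8s.gcr.io/pause:3.1"},
				}},
			},
		},
	}
	if _, err := client.CoreV1().ReplicationControllers(ns).Create(rc); err != nil {
		panic(err)
	}

	// Crude wait for the controller to hit the quota; the e2e framework polls.
	time.Sleep(5 * time.Second)
	got, err := client.CoreV1().ReplicationControllers(ns).Get("condition-test", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range got.Status.Conditions {
		if c.Type == corev1.ReplicationControllerReplicaFailure {
			fmt.Println("failure condition:", c.Reason, c.Message)
		}
	}
	// Scaling .spec.replicas down to 2 lets the controller clear the condition.
}
```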
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 15:03:19.155: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Feb  6 15:03:35.427: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  6 15:03:35.456: INFO: Pod pod-with-prestop-http-hook still exists
Feb  6 15:03:37.457: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  6 15:03:37.465: INFO: Pod pod-with-prestop-http-hook still exists
Feb  6 15:03:39.457: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  6 15:03:39.464: INFO: Pod pod-with-prestop-http-hook still exists
Feb  6 15:03:41.457: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb  6 15:03:41.464: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 15:03:41.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-3456" for this suite.
Feb  6 15:04:03.526: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 15:04:03.792: INFO: namespace container-lifecycle-hook-3456 deletion completed in 22.298165729s

• [SLOW TEST:44.637 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
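The pod under test carries a preStop HTTP hook: on deletion, the kubelet GETs the handler pod before stopping the container, which is what the "check prestop hook" step verifies. A sketch of the container definition, assuming the v1.15-era corev1 types (where the hook handler type is still corev1.Handler); the path, host, and port are hypothetical.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	c := corev1.Container{
		Name:  "pod-with-prestop-http-hook",
		Image: "k8s.gcr.io/pause:3.1", // placeholder image
		Lifecycle: &corev1.Lifecycle{
			// On pod deletion the kubelet performs this GET (against the
			// handler pod created in BeforeEach) before stopping the container.
			PreStop: &corev1.Handler{
				HTTPGet: &corev1.HTTPGetAction{
					Path: "/echo?msg=prestop", // hypothetical path
					Host: "10.44.0.1",         // hypothetical handler pod IP
					Port: intstr.FromInt(8080),
				},
			},
		},
	}
	fmt.Printf("%+v\n", c.Lifecycle.PreStop.HTTPGet)
}
```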
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 15:04:03.793: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod test-webserver-715e38b3-cbae-4c85-af98-6f4a61a64ed6 in namespace container-probe-1402
Feb  6 15:04:11.983: INFO: Started pod test-webserver-715e38b3-cbae-4c85-af98-6f4a61a64ed6 in namespace container-probe-1402
STEP: checking the pod's current state and verifying that restartCount is present
Feb  6 15:04:11.988: INFO: Initial restart count of pod test-webserver-715e38b3-cbae-4c85-af98-6f4a61a64ed6 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 15:08:12.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1402" for this suite.
Feb  6 15:08:18.395: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 15:08:18.541: INFO: namespace container-probe-1402 deletion completed in 6.182196032s

• [SLOW TEST:254.749 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
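The probed pod here serves HTTP and its liveness probe keeps succeeding, so after roughly four minutes of observation the restart count is still 0 and the test passes. A sketch of such a container, again with the v1.15-era Probe type (embedded Handler); the probe path and thresholds are illustrative.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	c := corev1.Container{
		Name:  "test-webserver",
		Image: "gcr.io/kubernetes-e2e-test-images/test-webserver:1.0",
		Ports: []corev1.ContainerPort{{ContainerPort: 80}},
		LivenessProbe: &corev1.Probe{
			// v1.15-era API: the probe action is an embedded Handler.
			Handler: corev1.Handler{
				HTTPGet: &corev1.HTTPGetAction{
					Path: "/healthz", // per the test name; illustrative
					Port: intstr.FromInt(80),
				},
			},
			InitialDelaySeconds: 15, // give the server time to come up
			TimeoutSeconds:      1,
			FailureThreshold:    3, // three consecutive failures trigger a restart
		},
	}
	fmt.Printf("%+v\n", c.LivenessProbe)
}
```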
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 15:08:18.542: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb  6 15:08:18.639: INFO: Waiting up to 5m0s for pod "downward-api-385096ff-b48d-4691-b7fb-ef503af6a6c3" in namespace "downward-api-1923" to be "success or failure"
Feb  6 15:08:18.661: INFO: Pod "downward-api-385096ff-b48d-4691-b7fb-ef503af6a6c3": Phase="Pending", Reason="", readiness=false. Elapsed: 22.003628ms
Feb  6 15:08:20.676: INFO: Pod "downward-api-385096ff-b48d-4691-b7fb-ef503af6a6c3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036927788s
Feb  6 15:08:22.682: INFO: Pod "downward-api-385096ff-b48d-4691-b7fb-ef503af6a6c3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043767056s
Feb  6 15:08:24.700: INFO: Pod "downward-api-385096ff-b48d-4691-b7fb-ef503af6a6c3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.061132825s
Feb  6 15:08:26.713: INFO: Pod "downward-api-385096ff-b48d-4691-b7fb-ef503af6a6c3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.074622245s
STEP: Saw pod success
Feb  6 15:08:26.714: INFO: Pod "downward-api-385096ff-b48d-4691-b7fb-ef503af6a6c3" satisfied condition "success or failure"
Feb  6 15:08:26.731: INFO: Trying to get logs from node iruya-node pod downward-api-385096ff-b48d-4691-b7fb-ef503af6a6c3 container dapi-container: 
STEP: delete the pod
Feb  6 15:08:26.821: INFO: Waiting for pod downward-api-385096ff-b48d-4691-b7fb-ef503af6a6c3 to disappear
Feb  6 15:08:26.862: INFO: Pod downward-api-385096ff-b48d-4691-b7fb-ef503af6a6c3 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 15:08:26.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1923" for this suite.
Feb  6 15:08:32.894: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 15:08:32.999: INFO: namespace downward-api-1923 deletion completed in 6.129784775s

• [SLOW TEST:14.457 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
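The downward API wiring being tested: environment variables whose values are resolved by the kubelet from pod fields rather than from literals. A sketch of the container's env block; the variable names are placeholders.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:    "dapi-container",
		Image:   "busybox:1.29",
		Command: []string{"sh", "-c", "env"},
		Env: []corev1.EnvVar{
			// Each value is filled in by the kubelet from pod metadata/status.
			{Name: "POD_NAME", ValueFrom: &corev1.EnvVarSource{
				FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"}}},
			{Name: "POD_NAMESPACE", ValueFrom: &corev1.EnvVarSource{
				FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.namespace"}}},
			{Name: "POD_IP", ValueFrom: &corev1.EnvVarSource{
				FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.podIP"}}},
		},
	}
	fmt.Printf("%+v\n", c.Env)
}
```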
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 15:08:33.000: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Feb  6 15:08:33.114: INFO: namespace kubectl-8264
Feb  6 15:08:33.114: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8264'
Feb  6 15:08:35.231: INFO: stderr: ""
Feb  6 15:08:35.232: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Feb  6 15:08:36.244: INFO: Selector matched 1 pods for map[app:redis]
Feb  6 15:08:36.244: INFO: Found 0 / 1
Feb  6 15:08:37.243: INFO: Selector matched 1 pods for map[app:redis]
Feb  6 15:08:37.244: INFO: Found 0 / 1
Feb  6 15:08:38.245: INFO: Selector matched 1 pods for map[app:redis]
Feb  6 15:08:38.245: INFO: Found 0 / 1
Feb  6 15:08:39.255: INFO: Selector matched 1 pods for map[app:redis]
Feb  6 15:08:39.255: INFO: Found 0 / 1
Feb  6 15:08:40.245: INFO: Selector matched 1 pods for map[app:redis]
Feb  6 15:08:40.245: INFO: Found 0 / 1
Feb  6 15:08:41.257: INFO: Selector matched 1 pods for map[app:redis]
Feb  6 15:08:41.257: INFO: Found 0 / 1
Feb  6 15:08:42.243: INFO: Selector matched 1 pods for map[app:redis]
Feb  6 15:08:42.243: INFO: Found 0 / 1
Feb  6 15:08:43.241: INFO: Selector matched 1 pods for map[app:redis]
Feb  6 15:08:43.241: INFO: Found 0 / 1
Feb  6 15:08:44.251: INFO: Selector matched 1 pods for map[app:redis]
Feb  6 15:08:44.251: INFO: Found 1 / 1
Feb  6 15:08:44.251: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Feb  6 15:08:44.259: INFO: Selector matched 1 pods for map[app:redis]
Feb  6 15:08:44.259: INFO: ForEach: Found 1 pod from the filter.  Now looping through them.
Feb  6 15:08:44.259: INFO: wait on redis-master startup in kubectl-8264 
Feb  6 15:08:44.260: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-drcnj redis-master --namespace=kubectl-8264'
Feb  6 15:08:44.414: INFO: stderr: ""
Feb  6 15:08:44.414: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 06 Feb 15:08:42.525 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 06 Feb 15:08:42.525 # Server started, Redis version 3.2.12\n1:M 06 Feb 15:08:42.526 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 06 Feb 15:08:42.526 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Feb  6 15:08:44.414: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-8264'
Feb  6 15:08:44.867: INFO: stderr: ""
Feb  6 15:08:44.867: INFO: stdout: "service/rm2 exposed\n"
Feb  6 15:08:44.893: INFO: Service rm2 in namespace kubectl-8264 found.
STEP: exposing service
Feb  6 15:08:46.919: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-8264'
Feb  6 15:08:47.176: INFO: stderr: ""
Feb  6 15:08:47.176: INFO: stdout: "service/rm3 exposed\n"
Feb  6 15:08:47.188: INFO: Service rm3 in namespace kubectl-8264 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 15:08:49.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8264" for this suite.
Feb  6 15:09:11.233: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 15:09:11.333: INFO: namespace kubectl-8264 deletion completed in 22.126897968s

• [SLOW TEST:38.333 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
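The command "kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379" amounts to creating a Service that copies the RC's selector and maps a service port onto the container port. A rough client-go equivalent, assuming a pre-1.18 client-go whose Create call takes no context argument; the selector app=redis matches the one the test polled on above.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "rm2"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"app": "redis"}, // copied from the RC's selector
			Ports: []corev1.ServicePort{{
				Port:       1234,                 // the Service's own port
				TargetPort: intstr.FromInt(6379), // the Redis container port
			}},
		},
	}
	if _, err := client.CoreV1().Services("kubectl-8264").Create(svc); err != nil {
		panic(err)
	}
	fmt.Println("service/rm2 exposed")
}
```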
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check if all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 15:09:11.333: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  6 15:09:11.427: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Feb  6 15:09:11.591: INFO: stderr: ""
Feb  6 15:09:11.591: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T16:55:20Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.1\", GitCommit:\"4485c6f18cee9a5d3c3b4e523bd27972b1b53892\", GitTreeState:\"clean\", BuildDate:\"2019-07-18T09:09:21Z\", GoVersion:\"go1.12.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 15:09:11.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9671" for this suite.
Feb  6 15:09:17.622: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 15:09:17.764: INFO: namespace kubectl-9671 deletion completed in 6.165190094s

• [SLOW TEST:6.430 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if all data is printed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 15:09:17.764: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
Feb  6 15:09:18.499: INFO: created pod pod-service-account-defaultsa
Feb  6 15:09:18.499: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Feb  6 15:09:18.583: INFO: created pod pod-service-account-mountsa
Feb  6 15:09:18.583: INFO: pod pod-service-account-mountsa service account token volume mount: true
Feb  6 15:09:18.597: INFO: created pod pod-service-account-nomountsa
Feb  6 15:09:18.597: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Feb  6 15:09:18.650: INFO: created pod pod-service-account-defaultsa-mountspec
Feb  6 15:09:18.650: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Feb  6 15:09:18.832: INFO: created pod pod-service-account-mountsa-mountspec
Feb  6 15:09:18.832: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Feb  6 15:09:18.869: INFO: created pod pod-service-account-nomountsa-mountspec
Feb  6 15:09:18.869: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Feb  6 15:09:19.075: INFO: created pod pod-service-account-defaultsa-nomountspec
Feb  6 15:09:19.076: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Feb  6 15:09:19.088: INFO: created pod pod-service-account-mountsa-nomountspec
Feb  6 15:09:19.088: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Feb  6 15:09:19.178: INFO: created pod pod-service-account-nomountsa-nomountspec
Feb  6 15:09:19.178: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 15:09:19.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-2801" for this suite.
Feb  6 15:09:59.248: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 15:09:59.334: INFO: namespace svcaccounts-2801 deletion completed in 39.994907001s

• [SLOW TEST:41.571 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
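The matrix above comes down to one precedence rule: automountServiceAccountToken may be set on the ServiceAccount and/or on the pod spec, and the pod-level field wins; if the effective value is false, no token volume is mounted. A sketch of the two objects involved; names and image are placeholders.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	no := false

	// Opt-out on the ServiceAccount: pods that use it and say nothing
	// themselves get no token volume.
	sa := corev1.ServiceAccount{
		ObjectMeta:                   metav1.ObjectMeta{Name: "nomount-sa"}, // placeholder
		AutomountServiceAccountToken: &no,
	}

	// Opt-out on the pod spec: this field, when set, overrides whatever the
	// ServiceAccount says, which is why ...-nomountsa-mountspec above still
	// mounted a token while ...-defaultsa-nomountspec did not.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-service-account-defaultsa-nomountspec"},
		Spec: corev1.PodSpec{
			ServiceAccountName:           "default",
			AutomountServiceAccountToken: &no,
			Containers: []corev1.Container{
				{Name: "token-test", Image: "k8s.gcr.io/pause:3.1"}, // placeholder
			},
		},
	}
	fmt.Println(sa.Name, pod.Name)
}
```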
SSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 15:09:59.335: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-5693
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-5693
STEP: Creating statefulset with conflicting port in namespace statefulset-5693
STEP: Waiting until pod test-pod will start running in namespace statefulset-5693
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-5693
Feb  6 15:10:07.510: INFO: Observed stateful pod in namespace: statefulset-5693, name: ss-0, uid: ebd75bbb-a11d-495c-a599-1bb721ccb2ff, status phase: Pending. Waiting for statefulset controller to delete.
Feb  6 15:15:07.510: INFO: Pod ss-0 expected to be re-created at least once
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Feb  6 15:15:07.525: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe po ss-0 --namespace=statefulset-5693'
Feb  6 15:15:07.772: INFO: stderr: ""
Feb  6 15:15:07.773: INFO: stdout: "Name:           ss-0\nNamespace:      statefulset-5693\nPriority:       0\nNode:           iruya-node/\nLabels:         baz=blah\n                controller-revision-hash=ss-6f98bdb9c4\n                foo=bar\n                statefulset.kubernetes.io/pod-name=ss-0\nAnnotations:    <none>\nStatus:         Pending\nIP:             \nControlled By:  StatefulSet/ss\nContainers:\n  nginx:\n    Image:        docker.io/library/nginx:1.14-alpine\n    Port:         21017/TCP\n    Host Port:    21017/TCP\n    Environment:  <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-wlgtw (ro)\nVolumes:\n  default-token-wlgtw:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-wlgtw\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  <none>\nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type     Reason            Age   From                 Message\n  ----     ------            ----  ----                 -------\n  Warning  PodFitsHostPorts  5m8s  kubelet, iruya-node  Predicate PodFitsHostPorts failed\n"
Feb  6 15:15:07.773: INFO: 
Output of kubectl describe ss-0:
Name:           ss-0
Namespace:      statefulset-5693
Priority:       0
Node:           iruya-node/
Labels:         baz=blah
                controller-revision-hash=ss-6f98bdb9c4
                foo=bar
                statefulset.kubernetes.io/pod-name=ss-0
Annotations:    <none>
Status:         Pending
IP:             
Controlled By:  StatefulSet/ss
Containers:
  nginx:
    Image:        docker.io/library/nginx:1.14-alpine
    Port:         21017/TCP
    Host Port:    21017/TCP
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-wlgtw (ro)
Volumes:
  default-token-wlgtw:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-wlgtw
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age   From                 Message
  ----     ------            ----  ----                 -------
  Warning  PodFitsHostPorts  5m8s  kubelet, iruya-node  Predicate PodFitsHostPorts failed

Feb  6 15:15:07.773: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs ss-0 --namespace=statefulset-5693 --tail=100'
Feb  6 15:15:07.998: INFO: rc: 1
Feb  6 15:15:07.998: INFO: 
Last 100 log lines of ss-0:

Feb  6 15:15:07.998: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe po test-pod --namespace=statefulset-5693'
Feb  6 15:15:08.176: INFO: stderr: ""
Feb  6 15:15:08.176: INFO: stdout: "Name:         test-pod\nNamespace:    statefulset-5693\nPriority:     0\nNode:         iruya-node/10.96.3.65\nStart Time:   Thu, 06 Feb 2020 15:09:59 +0000\nLabels:       <none>\nAnnotations:  <none>\nStatus:       Running\nIP:           10.44.0.1\nContainers:\n  nginx:\n    Container ID:   docker://5d7667f4e2622356e2774098a8d3e2fd5ca7a698676e58cdae5dbd7a04c79c07\n    Image:          docker.io/library/nginx:1.14-alpine\n    Image ID:       docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\n    Port:           21017/TCP\n    Host Port:      21017/TCP\n    State:          Running\n      Started:      Thu, 06 Feb 2020 15:10:06 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-wlgtw (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-wlgtw:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-wlgtw\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  <none>\nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason   Age   From                 Message\n  ----    ------   ----  ----                 -------\n  Normal  Pulled   5m5s  kubelet, iruya-node  Container image \"docker.io/library/nginx:1.14-alpine\" already present on machine\n  Normal  Created  5m3s  kubelet, iruya-node  Created container nginx\n  Normal  Started  5m2s  kubelet, iruya-node  Started container nginx\n"
Feb  6 15:15:08.176: INFO: 
Output of kubectl describe test-pod:
Name:         test-pod
Namespace:    statefulset-5693
Priority:     0
Node:         iruya-node/10.96.3.65
Start Time:   Thu, 06 Feb 2020 15:09:59 +0000
Labels:       <none>
Annotations:  <none>
Status:       Running
IP:           10.44.0.1
Containers:
  nginx:
    Container ID:   docker://5d7667f4e2622356e2774098a8d3e2fd5ca7a698676e58cdae5dbd7a04c79c07
    Image:          docker.io/library/nginx:1.14-alpine
    Image ID:       docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7
    Port:           21017/TCP
    Host Port:      21017/TCP
    State:          Running
      Started:      Thu, 06 Feb 2020 15:10:06 +0000
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-wlgtw (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  default-token-wlgtw:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-wlgtw
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason   Age   From                 Message
  ----    ------   ----  ----                 -------
  Normal  Pulled   5m5s  kubelet, iruya-node  Container image "docker.io/library/nginx:1.14-alpine" already present on machine
  Normal  Created  5m3s  kubelet, iruya-node  Created container nginx
  Normal  Started  5m2s  kubelet, iruya-node  Started container nginx

Feb  6 15:15:08.176: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs test-pod --namespace=statefulset-5693 --tail=100'
Feb  6 15:15:08.339: INFO: stderr: ""
Feb  6 15:15:08.339: INFO: stdout: ""
Feb  6 15:15:08.339: INFO: 
Last 100 log lines of test-pod:

Feb  6 15:15:08.339: INFO: Deleting all statefulset in ns statefulset-5693
Feb  6 15:15:08.344: INFO: Scaling statefulset ss to 0
Feb  6 15:15:18.402: INFO: Waiting for statefulset status.replicas updated to 0
Feb  6 15:15:18.411: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Collecting events from namespace "statefulset-5693".
STEP: Found 8 events.
Feb  6 15:15:18.482: INFO: At 2020-02-06 15:09:59 +0000 UTC - event for ss: {statefulset-controller } SuccessfulCreate: create Pod ss-0 in StatefulSet ss successful
Feb  6 15:15:18.482: INFO: At 2020-02-06 15:09:59 +0000 UTC - event for ss: {statefulset-controller } RecreatingFailedPod: StatefulSet statefulset-5693/ss is recreating failed Pod ss-0
Feb  6 15:15:18.482: INFO: At 2020-02-06 15:09:59 +0000 UTC - event for ss: {statefulset-controller } SuccessfulDelete: delete Pod ss-0 in StatefulSet ss successful
Feb  6 15:15:18.482: INFO: At 2020-02-06 15:09:59 +0000 UTC - event for ss-0: {kubelet iruya-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Feb  6 15:15:18.482: INFO: At 2020-02-06 15:09:59 +0000 UTC - event for ss-0: {kubelet iruya-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Feb  6 15:15:18.482: INFO: At 2020-02-06 15:10:03 +0000 UTC - event for test-pod: {kubelet iruya-node} Pulled: Container image "docker.io/library/nginx:1.14-alpine" already present on machine
Feb  6 15:15:18.482: INFO: At 2020-02-06 15:10:05 +0000 UTC - event for test-pod: {kubelet iruya-node} Created: Created container nginx
Feb  6 15:15:18.482: INFO: At 2020-02-06 15:10:06 +0000 UTC - event for test-pod: {kubelet iruya-node} Started: Started container nginx
Feb  6 15:15:18.516: INFO: POD       NODE        PHASE    GRACE  CONDITIONS
Feb  6 15:15:18.516: INFO: test-pod  iruya-node  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:09:59 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:10:06 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:10:06 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:09:59 +0000 UTC  }]
Feb  6 15:15:18.516: INFO: 
Feb  6 15:15:18.528: INFO: 
Logging node info for node iruya-node
Feb  6 15:15:18.533: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:iruya-node,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/iruya-node,UID:b2aa273d-23ea-4c86-9e2f-72569e3392bd,ResourceVersion:23335896,Generation:0,CreationTimestamp:2019-08-04 09:01:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/arch: amd64,kubernetes.io/hostname: iruya-node,kubernetes.io/os: linux,},Annotations:map[string]string{kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock,node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:10.96.1.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{20629221376 0} {} 20145724Ki BinarySI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4136013824 0} {} 4039076Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{18566299208 0} {} 18566299208 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4031156224 0} {} 3936676Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{NetworkUnavailable False 2019-10-12 11:56:49 +0000 UTC 2019-10-12 11:56:49 +0000 UTC WeaveIsUp Weave pod has set this} {MemoryPressure False 2020-02-06 15:15:05 +0000 UTC 2019-08-04 09:01:39 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2020-02-06 15:15:05 +0000 UTC 2019-08-04 09:01:39 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2020-02-06 15:15:05 +0000 UTC 2019-08-04 09:01:39 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2020-02-06 15:15:05 +0000 UTC 2019-08-04 09:02:19 +0000 UTC KubeletReady kubelet is posting ready status. 
AppArmor enabled}],Addresses:[{InternalIP 10.96.3.65} {Hostname iruya-node}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f573dcf04d6f4a87856a35d266a2fa7a,SystemUUID:F573DCF0-4D6F-4A87-856A-35D266A2FA7A,BootID:8baf4beb-8391-43e6-b17b-b1e184b5370a,KernelVersion:4.15.0-52-generic,OSImage:Ubuntu 18.04.2 LTS,ContainerRuntimeVersion:docker://18.9.7,KubeletVersion:v1.15.1,KubeProxyVersion:v1.15.1,OperatingSystem:linux,Architecture:amd64,},Images:[{[gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6 gcr.io/google-samples/gb-frontend:v6] 373099368} {[k8s.gcr.io/etcd@sha256:17da501f5d2a675be46040422a27b7cc21b8a43895ac998b171db1c346f361f7 k8s.gcr.io/etcd:3.3.10] 258116302} {[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15] 246640776} {[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0] 195659796} {[weaveworks/weave-kube@sha256:8fea236b8e64192c454e459b40381bd48795bd54d791fa684d818afdc12bd100 weaveworks/weave-kube:2.5.2] 148150868} {[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine] 126894770} {[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine] 123781643} {[gcr.io/google-samples/gb-redisslave@sha256:57730a481f97b3321138161ba2c8c9ca3b32df32ce9180e4029e6940446800ec gcr.io/google-samples/gb-redisslave:v3] 98945667} {[k8s.gcr.io/kube-proxy@sha256:08186f4897488e96cb098dd8d1d931af9a6ea718bb8737bf44bb76e42075f0ce k8s.gcr.io/kube-proxy:v1.15.1] 82408284} {[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:1bafcc6fb1aa990b487850adba9cadc020e42d7905aa8a30481182a477ba24b0 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.10] 61365829} {[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6] 57345321} {[weaveworks/weave-npc@sha256:56c93a359d54107558720a2859b83cb28a31c70c82a1aaa3dc4704e6c62e3b15 weaveworks/weave-npc:2.5.2] 49569458} {[redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858 redis:5.0.5-alpine] 29331594} {[gcr.io/kubernetes-e2e-test-images/nettest@sha256:6aa91bc71993260a87513e31b672ec14ce84bc253cd5233406c6946d3a8f55a1 gcr.io/kubernetes-e2e-test-images/nettest:1.0] 27413498} {[nginx@sha256:57a226fb6ab6823027c0704a9346a890ffb0cacde06bc19bbc234c8720673555 nginx:1.15-alpine] 16087791} {[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine] 16032814} {[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0] 11443478} {[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1] 9349974} {[gcr.io/kubernetes-e2e-test-images/hostexec@sha256:90dfe59da029f9e536385037bc64e86cd3d6e55bae613ddbe69e554d79b0639d gcr.io/kubernetes-e2e-test-images/hostexec:1.1] 8490662} {[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0] 6757579} 
{[gcr.io/kubernetes-e2e-test-images/netexec@sha256:203f0e11dde4baf4b08e27de094890eb3447d807c8b3e990b764b799d3a9e8b7 gcr.io/kubernetes-e2e-test-images/netexec:1.1] 6705349} {[gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 gcr.io/kubernetes-e2e-test-images/redis:1.0] 5905732} {[gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1] 5851985} {[gcr.io/kubernetes-e2e-test-images/liveness@sha256:71c3fc838e0637df570497febafa0ee73bf47176dfd43612de5c55a71230674e gcr.io/kubernetes-e2e-test-images/liveness:1.1] 5829944} {[appropriate/curl@sha256:c8bf5bbec6397465a247c2bb3e589bb77e4f62ff88a027175ecb2d9e4f12c9d7 appropriate/curl:latest] 5496756} {[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0] 4753501} {[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0] 4747037} {[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0] 4732240} {[gcr.io/kubernetes-e2e-test-images/porter@sha256:d6389405e453950618ae7749d9eee388f0eb32e0328a7e6583c41433aa5f2a77 gcr.io/kubernetes-e2e-test-images/porter:1.0] 4681408} {[gcr.io/kubernetes-e2e-test-images/entrypoint-tester@sha256:ba4681b5299884a3adca70fbde40638373b437a881055ffcd0935b5f43eb15c9 gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0] 2729534} {[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0] 1563521} {[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0] 1450451} {[busybox@sha256:fe301db49df08c384001ed752dff6d52b4305a73a7f608f21528048e8a08b51e busybox:latest] 1219782} {[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29] 1154361} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472} {[kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 kubernetes/pause:latest] 239840}],VolumesInUse:[],VolumesAttached:[],Config:nil,},}
Feb  6 15:15:18.534: INFO: 
Logging kubelet events for node iruya-node
Feb  6 15:15:18.539: INFO: 
Logging pods the kubelet thinks are on node iruya-node
Feb  6 15:15:18.574: INFO: weave-net-rlp57 started at 2019-10-12 11:56:39 +0000 UTC (0+2 container statuses recorded)
Feb  6 15:15:18.574: INFO: 	Container weave ready: true, restart count 0
Feb  6 15:15:18.574: INFO: 	Container weave-npc ready: true, restart count 0
Feb  6 15:15:18.574: INFO: test-pod started at 2020-02-06 15:09:59 +0000 UTC (0+1 container statuses recorded)
Feb  6 15:15:18.574: INFO: 	Container nginx ready: true, restart count 0
Feb  6 15:15:18.574: INFO: kube-proxy-976zl started at 2019-08-04 09:01:39 +0000 UTC (0+1 container statuses recorded)
Feb  6 15:15:18.574: INFO: 	Container kube-proxy ready: true, restart count 0
W0206 15:15:18.582061       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  6 15:15:18.737: INFO: 
Latency metrics for node iruya-node
Feb  6 15:15:18.737: INFO: 
Logging node info for node iruya-server-sfge57q7djm7
Feb  6 15:15:18.744: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:iruya-server-sfge57q7djm7,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/iruya-server-sfge57q7djm7,UID:67f2a658-4743-4118-95e7-463a23bcd212,ResourceVersion:23335866,Generation:0,CreationTimestamp:2019-08-04 08:52:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/arch: amd64,kubernetes.io/hostname: iruya-server-sfge57q7djm7,kubernetes.io/os: linux,node-role.kubernetes.io/master: ,},Annotations:map[string]string{kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock,node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:10.96.0.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{20629221376 0} {} 20145724Ki BinarySI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4136013824 0} {} 4039076Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{18566299208 0} {} 18566299208 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4031156224 0} {} 3936676Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{NetworkUnavailable False 2019-08-04 08:53:00 +0000 UTC 2019-08-04 08:53:00 +0000 UTC WeaveIsUp Weave pod has set this} {MemoryPressure False 2020-02-06 15:14:42 +0000 UTC 2019-08-04 08:52:04 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2020-02-06 15:14:42 +0000 UTC 2019-08-04 08:52:04 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2020-02-06 15:14:42 +0000 UTC 2019-08-04 08:52:04 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2020-02-06 15:14:42 +0000 UTC 2019-08-04 08:53:09 +0000 UTC KubeletReady kubelet is posting ready status. 
AppArmor enabled}],Addresses:[{InternalIP 10.96.2.216} {Hostname iruya-server-sfge57q7djm7}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:78bacef342604a51913cae58dd95802b,SystemUUID:78BACEF3-4260-4A51-913C-AE58DD95802B,BootID:db143d3a-01b3-4483-b23e-e72adff2b28d,KernelVersion:4.15.0-52-generic,OSImage:Ubuntu 18.04.2 LTS,ContainerRuntimeVersion:docker://18.9.7,KubeletVersion:v1.15.1,KubeProxyVersion:v1.15.1,OperatingSystem:linux,Architecture:amd64,},Images:[{[gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6 gcr.io/google-samples/gb-frontend:v6] 373099368} {[k8s.gcr.io/etcd@sha256:17da501f5d2a675be46040422a27b7cc21b8a43895ac998b171db1c346f361f7 k8s.gcr.io/etcd:3.3.10] 258116302} {[k8s.gcr.io/kube-apiserver@sha256:304a1c38707834062ee87df62ef329d52a8b9a3e70459565d0a396479073f54c k8s.gcr.io/kube-apiserver:v1.15.1] 206827454} {[k8s.gcr.io/kube-controller-manager@sha256:9abae95e428e228fe8f6d1630d55e79e018037460f3731312805c0f37471e4bf k8s.gcr.io/kube-controller-manager:v1.15.1] 158722622} {[weaveworks/weave-kube@sha256:8fea236b8e64192c454e459b40381bd48795bd54d791fa684d818afdc12bd100 weaveworks/weave-kube:2.5.2] 148150868} {[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine] 126894770} {[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine] 123781643} {[gcr.io/google-samples/gb-redisslave@sha256:57730a481f97b3321138161ba2c8c9ca3b32df32ce9180e4029e6940446800ec gcr.io/google-samples/gb-redisslave:v3] 98945667} {[k8s.gcr.io/kube-proxy@sha256:08186f4897488e96cb098dd8d1d931af9a6ea718bb8737bf44bb76e42075f0ce k8s.gcr.io/kube-proxy:v1.15.1] 82408284} {[k8s.gcr.io/kube-scheduler@sha256:d0ee18a9593013fbc44b1920e4930f29b664b59a3958749763cb33b57e0e8956 k8s.gcr.io/kube-scheduler:v1.15.1] 81107582} {[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6] 57345321} {[weaveworks/weave-npc@sha256:56c93a359d54107558720a2859b83cb28a31c70c82a1aaa3dc4704e6c62e3b15 weaveworks/weave-npc:2.5.2] 49569458} {[k8s.gcr.io/coredns@sha256:02382353821b12c21b062c59184e227e001079bb13ebd01f9d3270ba0fcbf1e4 k8s.gcr.io/coredns:1.3.1] 40303560} {[redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858 redis:5.0.5-alpine] 29331594} {[nginx@sha256:57a226fb6ab6823027c0704a9346a890ffb0cacde06bc19bbc234c8720673555 nginx:1.15-alpine] 16087791} {[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine] 16032814} {[gcr.io/kubernetes-e2e-test-images/netexec@sha256:203f0e11dde4baf4b08e27de094890eb3447d807c8b3e990b764b799d3a9e8b7 gcr.io/kubernetes-e2e-test-images/netexec:1.1] 6705349} {[gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 gcr.io/kubernetes-e2e-test-images/redis:1.0] 5905732} {[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0] 4753501} {[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0] 4747037} {[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0] 1563521} 
{[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472} {[kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 kubernetes/pause:latest] 239840}],VolumesInUse:[],VolumesAttached:[],Config:nil,},}
Feb  6 15:15:18.744: INFO: 
Logging kubelet events for node iruya-server-sfge57q7djm7
Feb  6 15:15:18.748: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7
Feb  6 15:15:18.761: INFO: coredns-5c98db65d4-bm4gs started at 2019-08-04 08:53:12 +0000 UTC (0+1 container statuses recorded)
Feb  6 15:15:18.761: INFO: 	Container coredns ready: true, restart count 0
Feb  6 15:15:18.761: INFO: etcd-iruya-server-sfge57q7djm7 started at 2019-08-04 08:51:38 +0000 UTC (0+1 container statuses recorded)
Feb  6 15:15:18.761: INFO: 	Container etcd ready: true, restart count 0
Feb  6 15:15:18.761: INFO: weave-net-bzl4d started at 2019-08-04 08:52:37 +0000 UTC (0+2 container statuses recorded)
Feb  6 15:15:18.761: INFO: 	Container weave ready: true, restart count 0
Feb  6 15:15:18.761: INFO: 	Container weave-npc ready: true, restart count 0
Feb  6 15:15:18.761: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 started at 2019-08-04 08:51:42 +0000 UTC (0+1 container statuses recorded)
Feb  6 15:15:18.761: INFO: 	Container kube-controller-manager ready: true, restart count 20
Feb  6 15:15:18.761: INFO: kube-proxy-58v95 started at 2019-08-04 08:52:37 +0000 UTC (0+1 container statuses recorded)
Feb  6 15:15:18.761: INFO: 	Container kube-proxy ready: true, restart count 0
Feb  6 15:15:18.761: INFO: kube-apiserver-iruya-server-sfge57q7djm7 started at 2019-08-04 08:51:39 +0000 UTC (0+1 container statuses recorded)
Feb  6 15:15:18.761: INFO: 	Container kube-apiserver ready: true, restart count 0
Feb  6 15:15:18.761: INFO: kube-scheduler-iruya-server-sfge57q7djm7 started at 2019-08-04 08:51:43 +0000 UTC (0+1 container statuses recorded)
Feb  6 15:15:18.761: INFO: 	Container kube-scheduler ready: true, restart count 13
Feb  6 15:15:18.761: INFO: coredns-5c98db65d4-xx8w8 started at 2019-08-04 08:53:12 +0000 UTC (0+1 container statuses recorded)
Feb  6 15:15:18.761: INFO: 	Container coredns ready: true, restart count 0
W0206 15:15:18.766103       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  6 15:15:18.843: INFO: 
Latency metrics for node iruya-server-sfge57q7djm7
Feb  6 15:15:18.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-5693" for this suite.
Feb  6 15:15:42.892: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 15:15:43.148: INFO: namespace statefulset-5693 deletion completed in 24.297035657s

• Failure [343.813 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Should recreate evicted statefulset [Conformance] [It]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697

    Feb  6 15:15:07.510: Pod ss-0 expected to be re-created at least once

    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:769
------------------------------
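Why this case failed: test-pod runs on iruya-node holding host port 21017, and the StatefulSet's pod template (pinned to the same node) requests the identical hostPort, so every ss-0 the controller creates is rejected by the kubelet's PodFitsHostPorts predicate. The test tolerates the rejection itself, but expects to observe ss-0 deleted and recreated at least once; within the five-minute window ss-0 was only ever seen Pending, hence the failure. The conflicting declaration, sketched in Go:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Declared by both test-pod and the StatefulSet's pod template. A given
	// hostPort can be bound by only one pod per node, so with test-pod already
	// running on iruya-node, every ss-0 scheduled there fails PodFitsHostPorts.
	port := corev1.ContainerPort{
		ContainerPort: 21017,
		HostPort:      21017,
	}
	fmt.Printf("%+v\n", port)
}
```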
SSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 15:15:43.148: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-6853.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-6853.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6853.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-6853.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-6853.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6853.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb  6 15:15:55.461: INFO: Unable to read wheezy_udp@PodARecord from pod dns-6853/dns-test-32f86334-1d2e-4c42-83ed-72ec7fa7f67f: the server could not find the requested resource (get pods dns-test-32f86334-1d2e-4c42-83ed-72ec7fa7f67f)
Feb  6 15:15:55.470: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-6853/dns-test-32f86334-1d2e-4c42-83ed-72ec7fa7f67f: the server could not find the requested resource (get pods dns-test-32f86334-1d2e-4c42-83ed-72ec7fa7f67f)
Feb  6 15:15:55.487: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-6853.svc.cluster.local from pod dns-6853/dns-test-32f86334-1d2e-4c42-83ed-72ec7fa7f67f: the server could not find the requested resource (get pods dns-test-32f86334-1d2e-4c42-83ed-72ec7fa7f67f)
Feb  6 15:15:55.491: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-6853/dns-test-32f86334-1d2e-4c42-83ed-72ec7fa7f67f: the server could not find the requested resource (get pods dns-test-32f86334-1d2e-4c42-83ed-72ec7fa7f67f)
Feb  6 15:15:55.495: INFO: Unable to read jessie_udp@PodARecord from pod dns-6853/dns-test-32f86334-1d2e-4c42-83ed-72ec7fa7f67f: the server could not find the requested resource (get pods dns-test-32f86334-1d2e-4c42-83ed-72ec7fa7f67f)
Feb  6 15:15:55.500: INFO: Unable to read jessie_tcp@PodARecord from pod dns-6853/dns-test-32f86334-1d2e-4c42-83ed-72ec7fa7f67f: the server could not find the requested resource (get pods dns-test-32f86334-1d2e-4c42-83ed-72ec7fa7f67f)
Feb  6 15:15:55.500: INFO: Lookups using dns-6853/dns-test-32f86334-1d2e-4c42-83ed-72ec7fa7f67f failed for: [wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-1.dns-test-service.dns-6853.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Feb  6 15:16:00.595: INFO: DNS probes using dns-6853/dns-test-32f86334-1d2e-4c42-83ed-72ec7fa7f67f succeeded

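The "Unable to read" lines at 15:15:55 are expected transient noise rather than failures: the framework polls each result file through the pod proxy subresource, and a 404 ("the server could not find the requested resource") only means the prober has not written that file yet. The lookups are retried a few seconds later, and by 15:16:00 every probe reports success. A file can be spot-checked the same way the framework reads it; a hypothetical manual check against this run's pod:

    # read one result file through the pod proxy, as the framework does
    kubectl get --raw \
      "/api/v1/namespaces/dns-6853/pods/dns-test-32f86334-1d2e-4c42-83ed-72ec7fa7f67f/proxy/results/jessie_udp@PodARecord"
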
STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 15:16:01.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6853" for this suite.
Feb  6 15:16:07.462: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 15:16:07.698: INFO: namespace dns-6853 deletion completed in 6.377639493s

• [SLOW TEST:24.550 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 15:16:07.698: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  6 15:16:07.736: INFO: Creating deployment "nginx-deployment"
Feb  6 15:16:07.752: INFO: Waiting for observed generation 1
Feb  6 15:16:09.900: INFO: Waiting for all required pods to come up
Feb  6 15:16:11.141: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Feb  6 15:16:39.192: INFO: Waiting for deployment "nginx-deployment" to complete
Feb  6 15:16:39.204: INFO: Updating deployment "nginx-deployment" with a non-existent image
Feb  6 15:16:39.218: INFO: Updating deployment nginx-deployment
Feb  6 15:16:39.218: INFO: Waiting for observed generation 2
Feb  6 15:16:41.378: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Feb  6 15:16:42.582: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Feb  6 15:16:43.844: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Feb  6 15:16:44.656: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Feb  6 15:16:44.656: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Feb  6 15:16:44.731: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Feb  6 15:16:46.180: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Feb  6 15:16:46.180: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Feb  6 15:16:46.811: INFO: Updating deployment nginx-deployment
Feb  6 15:16:46.812: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Feb  6 15:16:47.004: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Feb  6 15:16:50.125: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
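The 20/13 split is the proportional-scaling arithmetic at work. With maxSurge=3 the controller may run up to 30+3=33 pods during the rollout (the deployment.kubernetes.io/max-replicas annotation in the dumps below), and it distributes that allowance across the two ReplicaSets in proportion to their current sizes, 8 and 5 of 13: 8*33/13 ≈ 20.3 and 5*33/13 ≈ 12.7, with rounding leftovers assigned so the totals sum to exactly 33. A back-of-the-envelope check (integer math only; a sketch, not the controller's exact leftover-distribution logic):

    replicas=30; surge=3; old=8; new=5
    allowed=$((replicas + surge))                            # 33, the max-replicas annotation
    total=$((old + new))                                     # 13 pods currently desired across both ReplicaSets
    echo "old RS -> $((old * allowed / total))"              # 8*33/13 = 20
    echo "new RS -> $((allowed - old * allowed / total))"    # leftover: 33 - 20 = 13
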
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Feb  6 15:16:53.599: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-6853,SelfLink:/apis/apps/v1/namespaces/deployment-6853/deployments/nginx-deployment,UID:a72570ff-a52a-4135-b756-740d12c1b990,ResourceVersion:23336330,Generation:3,CreationTimestamp:2020-02-06 15:16:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:25,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-02-06 15:16:44 +0000 UTC 2020-02-06 15:16:07 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.} {Available False 2020-02-06 15:16:47 +0000 UTC 2020-02-06 15:16:47 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},}

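A few fields in the dump above summarize the whole situation: Spec.Replicas is *30, the RollingUpdate strategy allows MaxUnavailable:2 and MaxSurge:3, and Status shows 25 created pods of which only 8 are available (the originals still running the good image; the nginx:404 pods never become ready). Minimum availability requires 30-2=28 available replicas, so the Available condition is False with reason MinimumReplicasUnavailable. The same state in condensed form (hypothetical invocation against this cluster):

    kubectl -n deployment-6853 get deployment nginx-deployment \
      -o custom-columns=DESIRED:.spec.replicas,CURRENT:.status.replicas,UP-TO-DATE:.status.updatedReplicas,AVAILABLE:.status.availableReplicas
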
Feb  6 15:16:55.071: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-6853,SelfLink:/apis/apps/v1/namespaces/deployment-6853/replicasets/nginx-deployment-55fb7cb77f,UID:45bf6c96-de5f-4a22-8ee6-a5062d93e1ff,ResourceVersion:23336342,Generation:3,CreationTimestamp:2020-02-06 15:16:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment a72570ff-a52a-4135-b756-740d12c1b990 0xc00124ee47 0xc00124ee48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb  6 15:16:55.072: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Feb  6 15:16:55.072: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-6853,SelfLink:/apis/apps/v1/namespaces/deployment-6853/replicasets/nginx-deployment-7b8c6f4498,UID:d58a08aa-02db-4380-a9a3-82c7c9c3ea73,ResourceVersion:23336326,Generation:3,CreationTimestamp:2020-02-06 15:16:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment a72570ff-a52a-4135-b756-740d12c1b990 0xc00124f0a7 0xc00124f0a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
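Both ReplicaSet dumps carry the annotations the controller uses for the proportional split, deployment.kubernetes.io/desired-replicas: 30 and deployment.kubernetes.io/max-replicas: 33, alongside their post-split Spec.Replicas of *13 (new, nginx:404) and *20 (old, nginx:1.14-alpine). To pull just those annotations from a live cluster (hypothetical invocation):

    kubectl -n deployment-6853 get rs nginx-deployment-55fb7cb77f nginx-deployment-7b8c6f4498 \
      -o yaml | grep -E 'deployment.kubernetes.io/(desired|max)-replicas'
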
Feb  6 15:16:55.878: INFO: Pod "nginx-deployment-55fb7cb77f-2bc4x" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-2bc4x,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6853,SelfLink:/api/v1/namespaces/deployment-6853/pods/nginx-deployment-55fb7cb77f-2bc4x,UID:ac3eacf0-5fa4-4b2a-8adb-ac889803a0df,ResourceVersion:23336315,Generation:0,CreationTimestamp:2020-02-06 15:16:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 45bf6c96-de5f-4a22-8ee6-a5062d93e1ff 0xc0026a9a67 0xc0026a9a68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rj29s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rj29s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-rj29s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026a9ae0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026a9b00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:48 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  6 15:16:55.878: INFO: Pod "nginx-deployment-55fb7cb77f-2fhkn" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-2fhkn,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6853,SelfLink:/api/v1/namespaces/deployment-6853/pods/nginx-deployment-55fb7cb77f-2fhkn,UID:f0866a87-71cb-4388-bc24-377382a52336,ResourceVersion:23336328,Generation:0,CreationTimestamp:2020-02-06 15:16:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 45bf6c96-de5f-4a22-8ee6-a5062d93e1ff 0xc0026a9b87 0xc0026a9b88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rj29s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rj29s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-rj29s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026a9c00} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026a9c20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:49 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  6 15:16:55.878: INFO: Pod "nginx-deployment-55fb7cb77f-2gh8z" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-2gh8z,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6853,SelfLink:/api/v1/namespaces/deployment-6853/pods/nginx-deployment-55fb7cb77f-2gh8z,UID:6b3bbde8-c019-4e70-98aa-60646beaa771,ResourceVersion:23336256,Generation:0,CreationTimestamp:2020-02-06 15:16:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 45bf6c96-de5f-4a22-8ee6-a5062d93e1ff 0xc0026a9ca7 0xc0026a9ca8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rj29s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rj29s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-rj29s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026a9d10} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026a9d30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:39 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:39 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-02-06 15:16:39 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  6 15:16:55.879: INFO: Pod "nginx-deployment-55fb7cb77f-2jjtm" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-2jjtm,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6853,SelfLink:/api/v1/namespaces/deployment-6853/pods/nginx-deployment-55fb7cb77f-2jjtm,UID:9d589427-b987-47a9-b233-289769b192e4,ResourceVersion:23336237,Generation:0,CreationTimestamp:2020-02-06 15:16:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 45bf6c96-de5f-4a22-8ee6-a5062d93e1ff 0xc0026a9e07 0xc0026a9e08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rj29s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rj29s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-rj29s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026a9e80} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026a9ea0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:39 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:39 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-06 15:16:39 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  6 15:16:55.879: INFO: Pod "nginx-deployment-55fb7cb77f-4dmg7" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-4dmg7,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6853,SelfLink:/api/v1/namespaces/deployment-6853/pods/nginx-deployment-55fb7cb77f-4dmg7,UID:a16d980d-5b07-4446-b1dd-5b06bd3f1065,ResourceVersion:23336296,Generation:0,CreationTimestamp:2020-02-06 15:16:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 45bf6c96-de5f-4a22-8ee6-a5062d93e1ff 0xc0026a9f77 0xc0026a9f78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rj29s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rj29s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-rj29s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026a9fe0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001e3a000}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:47 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  6 15:16:55.879: INFO: Pod "nginx-deployment-55fb7cb77f-4nzcq" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-4nzcq,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6853,SelfLink:/api/v1/namespaces/deployment-6853/pods/nginx-deployment-55fb7cb77f-4nzcq,UID:3728ba01-552a-4bc5-8e2a-3675db8b9188,ResourceVersion:23336263,Generation:0,CreationTimestamp:2020-02-06 15:16:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 45bf6c96-de5f-4a22-8ee6-a5062d93e1ff 0xc001e3a087 0xc001e3a088}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rj29s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rj29s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-rj29s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001e3a110} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001e3a130}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:41 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:41 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-02-06 15:16:41 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  6 15:16:55.880: INFO: Pod "nginx-deployment-55fb7cb77f-5mxz7" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-5mxz7,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6853,SelfLink:/api/v1/namespaces/deployment-6853/pods/nginx-deployment-55fb7cb77f-5mxz7,UID:0ca8c420-d883-4873-8631-61e60fe9bdf7,ResourceVersion:23336262,Generation:0,CreationTimestamp:2020-02-06 15:16:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 45bf6c96-de5f-4a22-8ee6-a5062d93e1ff 0xc001e3a207 0xc001e3a208}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rj29s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rj29s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-rj29s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001e3a280} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001e3a2a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:42 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:41 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-06 15:16:42 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  6 15:16:55.880: INFO: Pod "nginx-deployment-55fb7cb77f-7jfns" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-7jfns,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6853,SelfLink:/api/v1/namespaces/deployment-6853/pods/nginx-deployment-55fb7cb77f-7jfns,UID:63b3870b-f4b0-4de1-95e4-39efb8527810,ResourceVersion:23336321,Generation:0,CreationTimestamp:2020-02-06 15:16:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 45bf6c96-de5f-4a22-8ee6-a5062d93e1ff 0xc001e3a377 0xc001e3a378}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rj29s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rj29s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-rj29s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001e3a3e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001e3a400}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:48 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  6 15:16:55.880: INFO: Pod "nginx-deployment-55fb7cb77f-dxljb" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-dxljb,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6853,SelfLink:/api/v1/namespaces/deployment-6853/pods/nginx-deployment-55fb7cb77f-dxljb,UID:14d93368-03b5-4e82-8e19-4da6785c3047,ResourceVersion:23336316,Generation:0,CreationTimestamp:2020-02-06 15:16:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 45bf6c96-de5f-4a22-8ee6-a5062d93e1ff 0xc001e3a537 0xc001e3a538}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rj29s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rj29s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-rj29s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001e3a600} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001e3a620}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:48 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  6 15:16:55.881: INFO: Pod "nginx-deployment-55fb7cb77f-t4bwk" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-t4bwk,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6853,SelfLink:/api/v1/namespaces/deployment-6853/pods/nginx-deployment-55fb7cb77f-t4bwk,UID:47691f86-91f5-4257-97f3-4c992556b818,ResourceVersion:23336339,Generation:0,CreationTimestamp:2020-02-06 15:16:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 45bf6c96-de5f-4a22-8ee6-a5062d93e1ff 0xc001e3a6d7 0xc001e3a6d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rj29s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rj29s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-rj29s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001e3a7d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001e3a7f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:49 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:47 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-02-06 15:16:49 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  6 15:16:55.881: INFO: Pod "nginx-deployment-55fb7cb77f-v59zz" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-v59zz,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6853,SelfLink:/api/v1/namespaces/deployment-6853/pods/nginx-deployment-55fb7cb77f-v59zz,UID:efb96e08-6550-4f71-8b6f-1abe82543a04,ResourceVersion:23336295,Generation:0,CreationTimestamp:2020-02-06 15:16:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 45bf6c96-de5f-4a22-8ee6-a5062d93e1ff 0xc001e3a907 0xc001e3a908}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rj29s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rj29s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-rj29s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001e3a9b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001e3aa40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:47 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  6 15:16:55.881: INFO: Pod "nginx-deployment-55fb7cb77f-x6php" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-x6php,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6853,SelfLink:/api/v1/namespaces/deployment-6853/pods/nginx-deployment-55fb7cb77f-x6php,UID:883e226f-1424-40ac-9b4a-e1285978040c,ResourceVersion:23336314,Generation:0,CreationTimestamp:2020-02-06 15:16:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 45bf6c96-de5f-4a22-8ee6-a5062d93e1ff 0xc001e3ab27 0xc001e3ab28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rj29s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rj29s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-rj29s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001e3ac10} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001e3ac60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:48 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  6 15:16:55.882: INFO: Pod "nginx-deployment-55fb7cb77f-zrm76" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-zrm76,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6853,SelfLink:/api/v1/namespaces/deployment-6853/pods/nginx-deployment-55fb7cb77f-zrm76,UID:7f9961fc-9794-4486-b843-f7e109f41660,ResourceVersion:23336245,Generation:0,CreationTimestamp:2020-02-06 15:16:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 45bf6c96-de5f-4a22-8ee6-a5062d93e1ff 0xc001e3ad37 0xc001e3ad38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rj29s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rj29s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-rj29s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001e3adb0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001e3add0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:39 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:39 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-06 15:16:39 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  6 15:16:55.882: INFO: Pod "nginx-deployment-7b8c6f4498-2dw64" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-2dw64,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6853,SelfLink:/api/v1/namespaces/deployment-6853/pods/nginx-deployment-7b8c6f4498-2dw64,UID:c90393f2-7de7-426e-b1fd-212b55d2d889,ResourceVersion:23336188,Generation:0,CreationTimestamp:2020-02-06 15:16:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d58a08aa-02db-4380-a9a3-82c7c9c3ea73 0xc001e3afb7 0xc001e3afb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rj29s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rj29s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rj29s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001e3b040} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001e3b0c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:08 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:36 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:36 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:07 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.4,StartTime:2020-02-06 15:16:08 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-06 15:16:33 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://a05e16c4d4ff6f27126e1cc93b32171b5c115078c81a8dda8f1cf5bbb136b9ed}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  6 15:16:55.882: INFO: Pod "nginx-deployment-7b8c6f4498-58gf4" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-58gf4,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6853,SelfLink:/api/v1/namespaces/deployment-6853/pods/nginx-deployment-7b8c6f4498-58gf4,UID:ca9d1ed6-34fc-48b7-abbb-d7c1a5c99f32,ResourceVersion:23336320,Generation:0,CreationTimestamp:2020-02-06 15:16:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d58a08aa-02db-4380-a9a3-82c7c9c3ea73 0xc001e3b1c7 0xc001e3b1c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rj29s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rj29s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rj29s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001e3b230} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001e3b250}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:48 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  6 15:16:55.883: INFO: Pod "nginx-deployment-7b8c6f4498-6q447" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-6q447,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6853,SelfLink:/api/v1/namespaces/deployment-6853/pods/nginx-deployment-7b8c6f4498-6q447,UID:d616fa64-dde7-4e7c-b487-86b35876c425,ResourceVersion:23336350,Generation:0,CreationTimestamp:2020-02-06 15:16:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d58a08aa-02db-4380-a9a3-82c7c9c3ea73 0xc001e3b2d7 0xc001e3b2d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rj29s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rj29s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rj29s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001e3b4c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001e3b4e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:50 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:47 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-02-06 15:16:50 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  6 15:16:55.883: INFO: Pod "nginx-deployment-7b8c6f4498-c4xpk" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-c4xpk,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6853,SelfLink:/api/v1/namespaces/deployment-6853/pods/nginx-deployment-7b8c6f4498-c4xpk,UID:c16378fb-ff64-4b0a-9d04-58519eb28d1f,ResourceVersion:23336318,Generation:0,CreationTimestamp:2020-02-06 15:16:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d58a08aa-02db-4380-a9a3-82c7c9c3ea73 0xc001e3b627 0xc001e3b628}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rj29s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rj29s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rj29s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001e3b6c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001e3b6e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:48 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  6 15:16:55.884: INFO: Pod "nginx-deployment-7b8c6f4498-fd7v2" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-fd7v2,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6853,SelfLink:/api/v1/namespaces/deployment-6853/pods/nginx-deployment-7b8c6f4498-fd7v2,UID:0ab0d92c-d4b4-4087-8f94-9c1ff49809bb,ResourceVersion:23336197,Generation:0,CreationTimestamp:2020-02-06 15:16:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d58a08aa-02db-4380-a9a3-82c7c9c3ea73 0xc001e3b7c7 0xc001e3b7c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rj29s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rj29s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rj29s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001e3b860} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001e3b8e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:08 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:37 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:37 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:08 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.6,StartTime:2020-02-06 15:16:08 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-06 15:16:35 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://0c3aa141abe1fdee9e316e37b3810de76940af2644d2eaef29459e29f28d434e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  6 15:16:55.884: INFO: Pod "nginx-deployment-7b8c6f4498-fkq86" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-fkq86,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6853,SelfLink:/api/v1/namespaces/deployment-6853/pods/nginx-deployment-7b8c6f4498-fkq86,UID:32f8ed90-b229-4886-8b3e-f7bdd0117cbb,ResourceVersion:23336165,Generation:0,CreationTimestamp:2020-02-06 15:16:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d58a08aa-02db-4380-a9a3-82c7c9c3ea73 0xc001e3ba47 0xc001e3ba48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rj29s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rj29s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rj29s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001e3bb00} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001e3bb20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:08 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:31 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:31 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:07 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.4,StartTime:2020-02-06 15:16:08 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-06 15:16:31 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://bbd34575b5688d24570da207e0421063c0a580a5cf592eb42251aeda1125b377}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  6 15:16:55.885: INFO: Pod "nginx-deployment-7b8c6f4498-g64pj" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-g64pj,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6853,SelfLink:/api/v1/namespaces/deployment-6853/pods/nginx-deployment-7b8c6f4498-g64pj,UID:e76b4a54-d3a5-4ba6-b107-22a64801a12f,ResourceVersion:23336327,Generation:0,CreationTimestamp:2020-02-06 15:16:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d58a08aa-02db-4380-a9a3-82c7c9c3ea73 0xc001e3bbf7 0xc001e3bbf8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rj29s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rj29s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rj29s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001e3bc70} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001e3bc90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:47 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:47 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-06 15:16:47 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  6 15:16:55.885: INFO: Pod "nginx-deployment-7b8c6f4498-g7kr6" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-g7kr6,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6853,SelfLink:/api/v1/namespaces/deployment-6853/pods/nginx-deployment-7b8c6f4498-g7kr6,UID:56d32f3e-231e-4872-b377-8037e6a38d3b,ResourceVersion:23336159,Generation:0,CreationTimestamp:2020-02-06 15:16:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d58a08aa-02db-4380-a9a3-82c7c9c3ea73 0xc001e3bdf7 0xc001e3bdf8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rj29s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rj29s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rj29s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001e3bfa0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001e3bfc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:08 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:31 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:31 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:07 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-02-06 15:16:08 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-06 15:16:30 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://6c2dcf904b8d8cafc143e40f293bc2a7e9be288c8c114499a5a0226147d1b07f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  6 15:16:55.885: INFO: Pod "nginx-deployment-7b8c6f4498-hfvwb" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-hfvwb,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6853,SelfLink:/api/v1/namespaces/deployment-6853/pods/nginx-deployment-7b8c6f4498-hfvwb,UID:2c60ba46-ddcf-405a-90ab-ad07fa024f28,ResourceVersion:23336338,Generation:0,CreationTimestamp:2020-02-06 15:16:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d58a08aa-02db-4380-a9a3-82c7c9c3ea73 0xc002ebe097 0xc002ebe098}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rj29s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rj29s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rj29s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002ebe110} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002ebe130}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:49 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:47 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-06 15:16:49 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  6 15:16:55.886: INFO: Pod "nginx-deployment-7b8c6f4498-hkn2g" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-hkn2g,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6853,SelfLink:/api/v1/namespaces/deployment-6853/pods/nginx-deployment-7b8c6f4498-hkn2g,UID:b15e6c57-8482-4f92-94de-163f84aba6f6,ResourceVersion:23336175,Generation:0,CreationTimestamp:2020-02-06 15:16:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d58a08aa-02db-4380-a9a3-82c7c9c3ea73 0xc002ebe1f7 0xc002ebe1f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rj29s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rj29s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rj29s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002ebe270} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002ebe290}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:08 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:31 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:31 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:07 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.3,StartTime:2020-02-06 15:16:08 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-06 15:16:31 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://cd95fa47718e4eaae2477e0756cb764c7b879ab5645855da1e85d6a3cde19b9f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  6 15:16:55.886: INFO: Pod "nginx-deployment-7b8c6f4498-hpz74" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-hpz74,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6853,SelfLink:/api/v1/namespaces/deployment-6853/pods/nginx-deployment-7b8c6f4498-hpz74,UID:717734b1-0981-4f29-85b2-0d7f77cae8c2,ResourceVersion:23336329,Generation:0,CreationTimestamp:2020-02-06 15:16:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d58a08aa-02db-4380-a9a3-82c7c9c3ea73 0xc002ebe367 0xc002ebe368}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rj29s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rj29s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rj29s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002ebe3d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002ebe3f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:47 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:47 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-02-06 15:16:47 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  6 15:16:55.886: INFO: Pod "nginx-deployment-7b8c6f4498-j6n9q" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-j6n9q,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6853,SelfLink:/api/v1/namespaces/deployment-6853/pods/nginx-deployment-7b8c6f4498-j6n9q,UID:30fc6eff-e908-4e36-87a0-5e72f57893b4,ResourceVersion:23336203,Generation:0,CreationTimestamp:2020-02-06 15:16:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d58a08aa-02db-4380-a9a3-82c7c9c3ea73 0xc002ebe4b7 0xc002ebe4b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rj29s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rj29s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rj29s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002ebe520} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002ebe540}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:09 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:37 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:37 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:08 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.5,StartTime:2020-02-06 15:16:09 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-06 15:16:35 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://ce1f05c479634b17bd8b51d2a228ddbdd010717bb08a47e55283076c3fe16065}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  6 15:16:55.887: INFO: Pod "nginx-deployment-7b8c6f4498-js72z" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-js72z,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6853,SelfLink:/api/v1/namespaces/deployment-6853/pods/nginx-deployment-7b8c6f4498-js72z,UID:2b0bda6c-741d-4f97-bd1d-ef08c17a50d6,ResourceVersion:23336179,Generation:0,CreationTimestamp:2020-02-06 15:16:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d58a08aa-02db-4380-a9a3-82c7c9c3ea73 0xc002ebe617 0xc002ebe618}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rj29s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rj29s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rj29s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002ebe690} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002ebe6b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:07 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:32 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:32 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:07 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-02-06 15:16:07 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-06 15:16:29 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://0b7934908554961100eda3a3d71494a794ca0da209ae6f079ce52bfc21cfe2e2}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  6 15:16:55.887: INFO: Pod "nginx-deployment-7b8c6f4498-ntrjf" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-ntrjf,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6853,SelfLink:/api/v1/namespaces/deployment-6853/pods/nginx-deployment-7b8c6f4498-ntrjf,UID:a2fd4c0c-74b9-4e4f-aeed-1f97d06c5cb6,ResourceVersion:23336297,Generation:0,CreationTimestamp:2020-02-06 15:16:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d58a08aa-02db-4380-a9a3-82c7c9c3ea73 0xc002ebe787 0xc002ebe788}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rj29s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rj29s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rj29s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002ebe7f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002ebe810}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:47 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  6 15:16:55.887: INFO: Pod "nginx-deployment-7b8c6f4498-pp92x" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-pp92x,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6853,SelfLink:/api/v1/namespaces/deployment-6853/pods/nginx-deployment-7b8c6f4498-pp92x,UID:9c37c6b2-dd5f-4082-8e75-cca110dbb767,ResourceVersion:23336319,Generation:0,CreationTimestamp:2020-02-06 15:16:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d58a08aa-02db-4380-a9a3-82c7c9c3ea73 0xc002ebe897 0xc002ebe898}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rj29s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rj29s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rj29s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002ebe900} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002ebe920}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:48 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  6 15:16:55.887: INFO: Pod "nginx-deployment-7b8c6f4498-pppfg" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-pppfg,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6853,SelfLink:/api/v1/namespaces/deployment-6853/pods/nginx-deployment-7b8c6f4498-pppfg,UID:d194b9b6-5d09-4c5c-82eb-5bb0c41f6140,ResourceVersion:23336317,Generation:0,CreationTimestamp:2020-02-06 15:16:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d58a08aa-02db-4380-a9a3-82c7c9c3ea73 0xc002ebe9a7 0xc002ebe9a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rj29s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rj29s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rj29s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002ebea20} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002ebea40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:48 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  6 15:16:55.888: INFO: Pod "nginx-deployment-7b8c6f4498-pwwq2" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-pwwq2,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6853,SelfLink:/api/v1/namespaces/deployment-6853/pods/nginx-deployment-7b8c6f4498-pwwq2,UID:cdacd6b9-ea07-4935-8578-21b94d8ec431,ResourceVersion:23336322,Generation:0,CreationTimestamp:2020-02-06 15:16:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d58a08aa-02db-4380-a9a3-82c7c9c3ea73 0xc002ebeac7 0xc002ebeac8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rj29s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rj29s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rj29s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002ebeb40} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002ebeb60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:48 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  6 15:16:55.888: INFO: Pod "nginx-deployment-7b8c6f4498-tnw4j" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-tnw4j,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6853,SelfLink:/api/v1/namespaces/deployment-6853/pods/nginx-deployment-7b8c6f4498-tnw4j,UID:fecfa67e-6d3d-4c10-aa92-ff409a42d1a9,ResourceVersion:23336293,Generation:0,CreationTimestamp:2020-02-06 15:16:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d58a08aa-02db-4380-a9a3-82c7c9c3ea73 0xc002ebebe7 0xc002ebebe8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rj29s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rj29s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rj29s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002ebec50} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002ebec70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:47 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  6 15:16:55.888: INFO: Pod "nginx-deployment-7b8c6f4498-vjfhr" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-vjfhr,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6853,SelfLink:/api/v1/namespaces/deployment-6853/pods/nginx-deployment-7b8c6f4498-vjfhr,UID:18755c9e-b9b2-4e5d-b6d8-84fc34671710,ResourceVersion:23336347,Generation:0,CreationTimestamp:2020-02-06 15:16:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d58a08aa-02db-4380-a9a3-82c7c9c3ea73 0xc002ebed07 0xc002ebed08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rj29s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rj29s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rj29s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002ebed80} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002ebeda0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:50 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:47 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-06 15:16:50 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  6 15:16:55.888: INFO: Pod "nginx-deployment-7b8c6f4498-z4ksv" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-z4ksv,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6853,SelfLink:/api/v1/namespaces/deployment-6853/pods/nginx-deployment-7b8c6f4498-z4ksv,UID:3dad9775-bab9-4b9e-ae94-58dbc4b91427,ResourceVersion:23336171,Generation:0,CreationTimestamp:2020-02-06 15:16:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 d58a08aa-02db-4380-a9a3-82c7c9c3ea73 0xc002ebee67 0xc002ebee68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rj29s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rj29s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rj29s true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002ebeee0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002ebef00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:08 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:31 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:31 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-06 15:16:08 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.5,StartTime:2020-02-06 15:16:08 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-06 15:16:31 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://69e669d54438500ea5cf7190a37959e63c441b2a8f693557a94a736c8793a5db}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
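
Aside on the dump above: each pod is logged as "available" or "not available", and the split tracks the pod's Ready condition (the Pending pods stuck in ContainerCreating never reach Ready). A simplified Go sketch of that check, loosely following the upstream pod helpers rather than quoting them; the function names and the minReadySeconds handling are illustrative:

    package main

    import (
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // podReadyCondition returns the pod's Ready condition, or nil if absent.
    func podReadyCondition(status corev1.PodStatus) *corev1.PodCondition {
        for i := range status.Conditions {
            if status.Conditions[i].Type == corev1.PodReady {
                return &status.Conditions[i]
            }
        }
        return nil
    }

    // isPodAvailable: a pod counts as available once Ready and, when
    // minReadySeconds > 0, once it has stayed Ready at least that long.
    func isPodAvailable(pod *corev1.Pod, minReadySeconds int32, now metav1.Time) bool {
        c := podReadyCondition(pod.Status)
        if c == nil || c.Status != corev1.ConditionTrue {
            return false
        }
        if minReadySeconds == 0 {
            return true
        }
        return now.Sub(c.LastTransitionTime.Time) >= time.Duration(minReadySeconds)*time.Second
    }

    func main() {
        pending := &corev1.Pod{} // no Ready condition at all: not available
        fmt.Println(isPodAvailable(pending, 0, metav1.Now()))
    }

With minReadySeconds at its default of 0, Ready alone is enough, which matches the Running/Ready versus Pending split in the dumps.
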
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 15:16:55.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-6853" for this suite.
Feb  6 15:17:51.421: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 15:17:51.678: INFO: namespace deployment-6853 deletion completed in 53.454997883s

• [SLOW TEST:103.980 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
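
What "proportional scaling" means in the spec above: while a rollout is in flight the Deployment owns more than one active ReplicaSet, and scaling the Deployment splits the replica delta across those ReplicaSets in proportion to their current sizes rather than pushing everything onto the newest one. A pure-arithmetic sketch of that split (illustrative, not the deployment controller's exact code, and ignoring its maxSurge capping):

    package main

    import "fmt"

    // proportionalShares splits a scale-up of delta replicas across replica
    // sets in proportion to their current sizes: floor division first, then
    // the rounding leftover is handed out one replica at a time.
    // Assumes delta >= 0 and at least one nonzero current size.
    func proportionalShares(current []int32, delta int32) []int32 {
        var total int32
        for _, c := range current {
            total += c
        }
        shares := make([]int32, len(current))
        var assigned int32
        for i, c := range current {
            shares[i] = delta * c / total
            assigned += shares[i]
        }
        for i := 0; assigned < delta; i, assigned = (i+1)%len(shares), assigned+1 {
            shares[i]++
        }
        return shares
    }

    func main() {
        // Two ReplicaSets at 5 and 15 replicas; scale the Deployment up by 8:
        fmt.Println(proportionalShares([]int32{5, 15}, 8)) // [2 6]
    }
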
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 15:17:51.678: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Feb  6 15:17:51.827: INFO: Waiting up to 5m0s for pod "pod-600f97e6-6756-4222-bdfb-5c5650963cc7" in namespace "emptydir-7775" to be "success or failure"
Feb  6 15:17:51.832: INFO: Pod "pod-600f97e6-6756-4222-bdfb-5c5650963cc7": Phase="Pending", Reason="", readiness=false. Elapsed: 5.3412ms
Feb  6 15:17:53.847: INFO: Pod "pod-600f97e6-6756-4222-bdfb-5c5650963cc7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020466858s
Feb  6 15:17:55.857: INFO: Pod "pod-600f97e6-6756-4222-bdfb-5c5650963cc7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030049712s
Feb  6 15:17:57.868: INFO: Pod "pod-600f97e6-6756-4222-bdfb-5c5650963cc7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041791194s
Feb  6 15:17:59.875: INFO: Pod "pod-600f97e6-6756-4222-bdfb-5c5650963cc7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.048146492s
Feb  6 15:18:01.884: INFO: Pod "pod-600f97e6-6756-4222-bdfb-5c5650963cc7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.057647642s
STEP: Saw pod success
Feb  6 15:18:01.884: INFO: Pod "pod-600f97e6-6756-4222-bdfb-5c5650963cc7" satisfied condition "success or failure"
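
The Elapsed lines above are the framework polling the pod roughly every two seconds, within the stated 5m0s budget, until the phase reaches Succeeded or Failed. A minimal client-go sketch of the same wait, assuming the pre-context Get signature of this era's client-go (newer releases take a context argument):

    package e2esketch

    import (
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitSuccessOrFailure polls the pod every 2s for up to 5m until its
    // phase is Succeeded (nil) or Failed (error), mirroring the wait above.
    func waitSuccessOrFailure(c kubernetes.Interface, ns, name string) error {
        return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
            pod, err := c.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
            if err != nil {
                return false, err
            }
            switch pod.Status.Phase {
            case corev1.PodSucceeded:
                return true, nil
            case corev1.PodFailed:
                return true, fmt.Errorf("pod %s/%s failed", ns, name)
            default:
                return false, nil // Pending or Running: keep polling
            }
        })
    }

wait.PollImmediate runs the condition once up front, so a pod that is already terminal returns immediately; wiring up the kubernetes.Interface from a kubeconfig is left out of the sketch.
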
Feb  6 15:18:01.889: INFO: Trying to get logs from node iruya-node pod pod-600f97e6-6756-4222-bdfb-5c5650963cc7 container test-container: 
STEP: delete the pod
Feb  6 15:18:01.951: INFO: Waiting for pod pod-600f97e6-6756-4222-bdfb-5c5650963cc7 to disappear
Feb  6 15:18:01.957: INFO: Pod pod-600f97e6-6756-4222-bdfb-5c5650963cc7 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 15:18:01.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7775" for this suite.
Feb  6 15:18:08.015: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 15:18:08.092: INFO: namespace emptydir-7775 deletion completed in 6.106958983s

• [SLOW TEST:16.414 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
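
The (non-root,0644,default) case above builds a one-shot pod that mounts an emptyDir volume on the default (node-disk) medium, writes a file as a non-root user, and asserts the resulting 0644 mode from inside the container before exiting, which is why the pod parks in Succeeded rather than Running. A hedged sketch of such a pod object; the image, UID, command, and names are illustrative, not the e2e suite's exact spec:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func emptyDirTestPod() *corev1.Pod {
        nonRoot := int64(1000) // illustrative non-root UID
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0644-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever, // one-shot: ends Succeeded/Failed
                Volumes: []corev1.Volume{{
                    Name: "scratch",
                    VolumeSource: corev1.VolumeSource{
                        EmptyDir: &corev1.EmptyDirVolumeSource{}, // default medium
                    },
                }},
                Containers: []corev1.Container{{
                    Name:  "test-container",
                    Image: "busybox", // stand-in for the e2e test image
                    // umask 022 makes touch create the file as 0644.
                    Command: []string{"sh", "-c",
                        "umask 022 && touch /mnt/scratch/f && ls -l /mnt/scratch/f"},
                    SecurityContext: &corev1.SecurityContext{RunAsUser: &nonRoot},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "scratch",
                        MountPath: "/mnt/scratch",
                    }},
                }},
            },
        }
    }

    func main() {
        pod := emptyDirTestPod()
        fmt.Println(pod.Name, pod.Spec.Containers[0].Command)
    }
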
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 15:18:08.094: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-downwardapi-rbjm
STEP: Creating a pod to test atomic-volume-subpath
Feb  6 15:18:08.262: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-rbjm" in namespace "subpath-9119" to be "success or failure"
Feb  6 15:18:08.314: INFO: Pod "pod-subpath-test-downwardapi-rbjm": Phase="Pending", Reason="", readiness=false. Elapsed: 51.581593ms
Feb  6 15:18:10.320: INFO: Pod "pod-subpath-test-downwardapi-rbjm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058049314s
Feb  6 15:18:12.392: INFO: Pod "pod-subpath-test-downwardapi-rbjm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.130035864s
Feb  6 15:18:14.401: INFO: Pod "pod-subpath-test-downwardapi-rbjm": Phase="Pending", Reason="", readiness=false. Elapsed: 6.138474581s
Feb  6 15:18:16.409: INFO: Pod "pod-subpath-test-downwardapi-rbjm": Phase="Running", Reason="", readiness=true. Elapsed: 8.146619522s
Feb  6 15:18:18.420: INFO: Pod "pod-subpath-test-downwardapi-rbjm": Phase="Running", Reason="", readiness=true. Elapsed: 10.157500873s
Feb  6 15:18:20.429: INFO: Pod "pod-subpath-test-downwardapi-rbjm": Phase="Running", Reason="", readiness=true. Elapsed: 12.166088097s
Feb  6 15:18:22.447: INFO: Pod "pod-subpath-test-downwardapi-rbjm": Phase="Running", Reason="", readiness=true. Elapsed: 14.184986644s
Feb  6 15:18:24.462: INFO: Pod "pod-subpath-test-downwardapi-rbjm": Phase="Running", Reason="", readiness=true. Elapsed: 16.199980983s
Feb  6 15:18:26.482: INFO: Pod "pod-subpath-test-downwardapi-rbjm": Phase="Running", Reason="", readiness=true. Elapsed: 18.219493199s
Feb  6 15:18:28.496: INFO: Pod "pod-subpath-test-downwardapi-rbjm": Phase="Running", Reason="", readiness=true. Elapsed: 20.23308497s
Feb  6 15:18:30.504: INFO: Pod "pod-subpath-test-downwardapi-rbjm": Phase="Running", Reason="", readiness=true. Elapsed: 22.241614978s
Feb  6 15:18:32.516: INFO: Pod "pod-subpath-test-downwardapi-rbjm": Phase="Running", Reason="", readiness=true. Elapsed: 24.25314214s
Feb  6 15:18:34.534: INFO: Pod "pod-subpath-test-downwardapi-rbjm": Phase="Running", Reason="", readiness=true. Elapsed: 26.271502775s
Feb  6 15:18:36.594: INFO: Pod "pod-subpath-test-downwardapi-rbjm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.331786983s
STEP: Saw pod success
Feb  6 15:18:36.594: INFO: Pod "pod-subpath-test-downwardapi-rbjm" satisfied condition "success or failure"
Feb  6 15:18:36.600: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-downwardapi-rbjm container test-container-subpath-downwardapi-rbjm: 
STEP: delete the pod
Feb  6 15:18:36.708: INFO: Waiting for pod pod-subpath-test-downwardapi-rbjm to disappear
Feb  6 15:18:36.725: INFO: Pod pod-subpath-test-downwardapi-rbjm no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-rbjm
Feb  6 15:18:36.725: INFO: Deleting pod "pod-subpath-test-downwardapi-rbjm" in namespace "subpath-9119"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 15:18:36.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-9119" for this suite.
Feb  6 15:18:42.754: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 15:18:42.868: INFO: namespace subpath-9119 deletion completed in 6.132641857s

• [SLOW TEST:34.775 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
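The pod that the pod-subpath-test-downwardapi-rbjm lines trace out pairs a downward API volume with a subPath volume mount, which is what exercises the atomic-writer path. A rough sketch of that shape; the field values here (volume name, image, command, mount path) are illustrative, not copied from the test source:

package e2eutil

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// subpathDownwardPod sketches a pod whose container mounts a single file out
// of a downward API volume via subPath; downward API volumes are written
// atomically, which is the behavior the test verifies.
func subpathDownwardPod(name string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "downward",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container-subpath",
				Image:   "busybox",
				Command: []string{"sh", "-c", "test -s /mnt/podname"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "downward",
					MountPath: "/mnt/podname",
					SubPath:   "podname", // mount one file from the atomic-writer volume
				}},
			}},
		},
	}
}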
SSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 15:18:42.869: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 15:18:43.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-8822" for this suite.
Feb  6 15:18:49.174: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 15:18:49.309: INFO: namespace kubelet-test-8822 deletion completed in 6.156968519s

• [SLOW TEST:6.441 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
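The kubelet test above only needs the delete step to succeed while the container is crash-looping, so its [It] body is a single API call. A sketch with the 1.15-era client-go Delete signature; c, ns, and podName are placeholders:

package e2eutil

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deletePodNow deletes the pod with a zero grace period, which is enough for
// the "should be possible to delete" check even when the container is
// restarting in a failure loop.
func deletePodNow(c kubernetes.Interface, ns, podName string) error {
	grace := int64(0)
	return c.CoreV1().Pods(ns).Delete(podName, &metav1.DeleteOptions{
		GracePeriodSeconds: &grace,
	})
}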
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 15:18:49.310: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  6 15:18:49.434: INFO: Waiting up to 5m0s for pod "downwardapi-volume-808a8e97-bff0-4d72-bffd-dbf077bd560e" in namespace "downward-api-9253" to be "success or failure"
Feb  6 15:18:49.439: INFO: Pod "downwardapi-volume-808a8e97-bff0-4d72-bffd-dbf077bd560e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.659844ms
Feb  6 15:18:51.447: INFO: Pod "downwardapi-volume-808a8e97-bff0-4d72-bffd-dbf077bd560e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012684999s
Feb  6 15:18:54.762: INFO: Pod "downwardapi-volume-808a8e97-bff0-4d72-bffd-dbf077bd560e": Phase="Pending", Reason="", readiness=false. Elapsed: 5.327709479s
Feb  6 15:18:56.867: INFO: Pod "downwardapi-volume-808a8e97-bff0-4d72-bffd-dbf077bd560e": Phase="Pending", Reason="", readiness=false. Elapsed: 7.433047566s
Feb  6 15:18:58.876: INFO: Pod "downwardapi-volume-808a8e97-bff0-4d72-bffd-dbf077bd560e": Phase="Pending", Reason="", readiness=false. Elapsed: 9.441300827s
Feb  6 15:19:00.884: INFO: Pod "downwardapi-volume-808a8e97-bff0-4d72-bffd-dbf077bd560e": Phase="Pending", Reason="", readiness=false. Elapsed: 11.449827784s
Feb  6 15:19:02.891: INFO: Pod "downwardapi-volume-808a8e97-bff0-4d72-bffd-dbf077bd560e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.457159595s
STEP: Saw pod success
Feb  6 15:19:02.892: INFO: Pod "downwardapi-volume-808a8e97-bff0-4d72-bffd-dbf077bd560e" satisfied condition "success or failure"
Feb  6 15:19:02.894: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-808a8e97-bff0-4d72-bffd-dbf077bd560e container client-container: 
STEP: delete the pod
Feb  6 15:19:02.930: INFO: Waiting for pod downwardapi-volume-808a8e97-bff0-4d72-bffd-dbf077bd560e to disappear
Feb  6 15:19:03.003: INFO: Pod downwardapi-volume-808a8e97-bff0-4d72-bffd-dbf077bd560e no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 15:19:03.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9253" for this suite.
Feb  6 15:19:09.030: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 15:19:09.159: INFO: namespace downward-api-9253 deletion completed in 6.152292391s

• [SLOW TEST:19.849 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  6 15:19:09.160: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-549
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb  6 15:19:09.286: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb  6 15:19:49.497: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.44.0.1 8081 | grep -v '^\s*$'] Namespace:pod-network-test-549 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  6 15:19:49.497: INFO: >>> kubeConfig: /root/.kube/config
I0206 15:19:49.602099       8 log.go:172] (0xc001260790) (0xc002495180) Create stream
I0206 15:19:49.602155       8 log.go:172] (0xc001260790) (0xc002495180) Stream added, broadcasting: 1
I0206 15:19:49.611215       8 log.go:172] (0xc001260790) Reply frame received for 1
I0206 15:19:49.611265       8 log.go:172] (0xc001260790) (0xc002a0e460) Create stream
I0206 15:19:49.611282       8 log.go:172] (0xc001260790) (0xc002a0e460) Stream added, broadcasting: 3
I0206 15:19:49.613524       8 log.go:172] (0xc001260790) Reply frame received for 3
I0206 15:19:49.613555       8 log.go:172] (0xc001260790) (0xc002495220) Create stream
I0206 15:19:49.613565       8 log.go:172] (0xc001260790) (0xc002495220) Stream added, broadcasting: 5
I0206 15:19:49.615519       8 log.go:172] (0xc001260790) Reply frame received for 5
I0206 15:19:50.798465       8 log.go:172] (0xc001260790) Data frame received for 3
I0206 15:19:50.798503       8 log.go:172] (0xc002a0e460) (3) Data frame handling
I0206 15:19:50.798523       8 log.go:172] (0xc002a0e460) (3) Data frame sent
I0206 15:19:50.958790       8 log.go:172] (0xc001260790) Data frame received for 1
I0206 15:19:50.958835       8 log.go:172] (0xc002495180) (1) Data frame handling
I0206 15:19:50.958868       8 log.go:172] (0xc002495180) (1) Data frame sent
I0206 15:19:50.958895       8 log.go:172] (0xc001260790) (0xc002495180) Stream removed, broadcasting: 1
I0206 15:19:50.959067       8 log.go:172] (0xc001260790) (0xc002a0e460) Stream removed, broadcasting: 3
I0206 15:19:50.959171       8 log.go:172] (0xc001260790) (0xc002495220) Stream removed, broadcasting: 5
I0206 15:19:50.959206       8 log.go:172] (0xc001260790) (0xc002495180) Stream removed, broadcasting: 1
I0206 15:19:50.959218       8 log.go:172] (0xc001260790) (0xc002a0e460) Stream removed, broadcasting: 3
I0206 15:19:50.959242       8 log.go:172] (0xc001260790) (0xc002495220) Stream removed, broadcasting: 5
Feb  6 15:19:50.959: INFO: Found all expected endpoints: [netserver-0]
I0206 15:19:50.960139       8 log.go:172] (0xc001260790) Go away received
Feb  6 15:19:50.966: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:pod-network-test-549 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  6 15:19:50.966: INFO: >>> kubeConfig: /root/.kube/config
I0206 15:19:51.029082       8 log.go:172] (0xc00172ec60) (0xc0022e2dc0) Create stream
I0206 15:19:51.029309       8 log.go:172] (0xc00172ec60) (0xc0022e2dc0) Stream added, broadcasting: 1
I0206 15:19:51.042991       8 log.go:172] (0xc00172ec60) Reply frame received for 1
I0206 15:19:51.043046       8 log.go:172] (0xc00172ec60) (0xc001952a00) Create stream
I0206 15:19:51.043058       8 log.go:172] (0xc00172ec60) (0xc001952a00) Stream added, broadcasting: 3
I0206 15:19:51.045816       8 log.go:172] (0xc00172ec60) Reply frame received for 3
I0206 15:19:51.045852       8 log.go:172] (0xc00172ec60) (0xc002a0e500) Create stream
I0206 15:19:51.045862       8 log.go:172] (0xc00172ec60) (0xc002a0e500) Stream added, broadcasting: 5
I0206 15:19:51.047690       8 log.go:172] (0xc00172ec60) Reply frame received for 5
I0206 15:19:52.204916       8 log.go:172] (0xc00172ec60) Data frame received for 3
I0206 15:19:52.205015       8 log.go:172] (0xc001952a00) (3) Data frame handling
I0206 15:19:52.205041       8 log.go:172] (0xc001952a00) (3) Data frame sent
I0206 15:19:52.414723       8 log.go:172] (0xc00172ec60) Data frame received for 1
I0206 15:19:52.414761       8 log.go:172] (0xc0022e2dc0) (1) Data frame handling
I0206 15:19:52.414779       8 log.go:172] (0xc0022e2dc0) (1) Data frame sent
I0206 15:19:52.414979       8 log.go:172] (0xc00172ec60) (0xc0022e2dc0) Stream removed, broadcasting: 1
I0206 15:19:52.415228       8 log.go:172] (0xc00172ec60) (0xc001952a00) Stream removed, broadcasting: 3
I0206 15:19:52.415253       8 log.go:172] (0xc00172ec60) (0xc002a0e500) Stream removed, broadcasting: 5
I0206 15:19:52.415267       8 log.go:172] (0xc00172ec60) Go away received
I0206 15:19:52.415349       8 log.go:172] (0xc00172ec60) (0xc0022e2dc0) Stream removed, broadcasting: 1
I0206 15:19:52.415375       8 log.go:172] (0xc00172ec60) (0xc001952a00) Stream removed, broadcasting: 3
I0206 15:19:52.415389       8 log.go:172] (0xc00172ec60) (0xc002a0e500) Stream removed, broadcasting: 5
Feb  6 15:19:52.415: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 15:19:52.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-549" for this suite.
Feb  6 15:20:20.512: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  6 15:20:20.749: INFO: namespace pod-network-test-549 deletion completed in 28.316918497s

• [SLOW TEST:71.589 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
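The ExecWithOptions entries above, and the Create stream / Data frame / Stream removed lines that follow them, are an exec into the hostexec container over SPDY. A minimal sketch of that UDP probe using client-go's remotecommand package, assuming 1.15-era signatures; udpProbe and its parameters are placeholder names, while the shell command mirrors the one in the log:

package e2eutil

import (
	"bytes"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	restclient "k8s.io/client-go/rest"
	"k8s.io/client-go/tools/remotecommand"
)

// udpProbe runs `echo hostName | nc -w 1 -u <podIP> 8081` inside the
// hostexec container of host-test-container-pod and returns whatever the
// netserver pod echoes back; an empty reply means the endpoint was not
// reached.
func udpProbe(config *restclient.Config, c kubernetes.Interface, ns, podIP string) (string, error) {
	req := c.CoreV1().RESTClient().Post().
		Resource("pods").Namespace(ns).
		Name("host-test-container-pod").SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Container: "hostexec",
			Command:   []string{"/bin/sh", "-c", fmt.Sprintf("echo hostName | nc -w 1 -u %s 8081 | grep -v '^\\s*$'", podIP)},
			Stdout:    true,
			Stderr:    true,
		}, scheme.ParameterCodec)

	exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
	if err != nil {
		return "", err
	}
	var stdout, stderr bytes.Buffer
	// This Stream call is the SPDY session visible in the log as
	// "Create stream" / "Data frame received" / "Stream removed".
	err = exec.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr})
	return stdout.String(), err
}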
SSSSSSSSSSSSS
Feb  6 15:20:20.750: INFO: Running AfterSuite actions on all nodes
Feb  6 15:20:20.750: INFO: Running AfterSuite actions on node 1
Feb  6 15:20:20.750: INFO: Skipping dumping logs from cluster


Summarizing 1 Failure:

[Fail] [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] [It] Should recreate evicted statefulset [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:769

Ran 215 of 4412 Specs in 8656.198 seconds
FAIL! -- 214 Passed | 1 Failed | 0 Pending | 4197 Skipped
--- FAIL: TestE2E (8656.54s)
FAIL